WO2022218042A1 - 视频处理方法、装置、视频播放器、电子设备及可读介质 - Google Patents
视频处理方法、装置、视频播放器、电子设备及可读介质 Download PDFInfo
- Publication number
- WO2022218042A1 (PCT Application No. PCT/CN2022/078141)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- video
- video frame
- area
- optimized
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440281—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/454—Content or additional data filtering, e.g. blocking advertisements
- H04N21/4545—Input to filtering algorithms, e.g. filtering a region of the image
- H04N21/45455—Input to filtering algorithms, e.g. filtering a region of the image applied to a region of the image
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Definitions
- the present application relates to the field of display technology, and more particularly, to a video processing method, apparatus, video player, electronic device, and readable medium.
- the present application proposes a video processing method, apparatus, video player, electronic device and readable medium to improve the above-mentioned defects.
- an embodiment of the present application provides a video processing method, which is applied to an electronic device, where the electronic device includes a screen and a plurality of image processing modules, the screen includes a plurality of display areas, and each of the display areas corresponds to at least one of the image processing modules. The method includes: acquiring the area to be optimized and the area not to be optimized in the first video frame of the target video; determining the designated display area of the screen corresponding to the area to be optimized; controlling the designated image processing module corresponding to the designated display area to perform image optimization processing on the first image data in the area to be optimized; and obtaining at least one image as a second video frame based on the first image data after the image optimization processing and the second image data corresponding to the area not to be optimized.
- an embodiment of the present application further provides a video processing apparatus, which is applied to an electronic device, where the electronic device includes a screen and a plurality of image processing modules, the screen includes a plurality of display areas, and each of the display areas corresponds to at least one of the image processing modules. The video processing apparatus includes: an acquisition unit, a determination unit, an optimization unit and a processing unit.
- the obtaining unit is configured to obtain the to-be-optimized area and the non-to-be-optimized area in the first video frame of the target video.
- a determination unit configured to determine a designated display area of the screen corresponding to the area to be optimized.
- An optimization unit configured to control a designated image processing module corresponding to the designated display area to perform image optimization processing on the first image data in the to-be-optimized area.
- a processing unit configured to obtain at least one image based on the first image data after the image optimization process has been performed and the second image data corresponding to the non-to-be-optimized area, as a second video frame.
- an embodiment of the present application further provides a video player, which is applied to an electronic device, where the electronic device includes a screen, the video player includes a data processor and a plurality of image processing modules, the screen includes a plurality of display areas, each of the display areas corresponds to at least one of the image processing modules, the data processor is connected to each of the image processing modules, and the video player is configured to execute the above method.
- an embodiment of the present application further provides an electronic device, including: a screen and the aforementioned video player, wherein the video player and the screen are connected in sequence.
- an embodiment of the present application further provides a computer-readable medium, where the readable storage medium stores program code executable by a processor, and when the program code is executed by the processor, the processor performs the above method.
- FIG. 1 shows a block diagram of an image rendering architecture provided by an embodiment of the present application
- FIG. 2 shows a schematic diagram of two video frames provided by an embodiment of the present application
- FIG. 3 shows a frame insertion effect diagram provided by an embodiment of the present application
- FIG. 4 shows a module block diagram of an electronic device provided by an embodiment of the present application
- FIG. 5 shows a module block diagram of a video player provided by an embodiment of the present application
- FIG. 6 shows a schematic diagram of a connection relationship between a video player and a screen provided by an embodiment of the present application
- FIG. 7 shows a schematic diagram of multiple display areas of a screen provided by an embodiment of the present application.
- FIG. 8 shows a method flowchart of a video processing method provided by an embodiment of the present application.
- FIG. 9 shows a module block diagram of a video player provided by another embodiment of the present application.
- FIG. 10 shows a method flowchart of a video processing method provided by another embodiment of the present application.
- FIG. 11 shows a schematic diagram of an image change area and an image still area provided by an embodiment of the present application
- FIG. 12 shows a schematic diagram of a first video frame and a third video frame provided by an embodiment of the present application
- FIG. 13 shows a schematic diagram of a video details interface provided by an embodiment of the present application.
- FIG. 14 shows a schematic diagram of a video playback interface provided by an embodiment of the present application.
- FIG. 15 shows a block diagram of a video player provided by another embodiment of the present application.
- FIG. 16 shows a schematic diagram of an image change area provided by an embodiment of the present application.
- FIG. 17 shows a schematic diagram of an image change area provided by another embodiment of the present application.
- FIG. 18 shows a schematic diagram of a processing process of an image change area provided by an embodiment of the present application.
- FIG. 19 shows a schematic diagram of playback of a first video frame, a second video frame, and a third video frame provided by an embodiment of the present application
- FIG. 20 shows a block diagram of a module of a video processing apparatus provided by an embodiment of the present application.
- FIG. 21 shows a storage unit for storing or carrying program code that implements a video processing method according to an embodiment of the present application.
- the video is often optimized, so as to improve the user's perception of the video.
- the optimization process can improve the smoothness of video playback, the clarity of pictures, and the like.
- the current video recording format is commonly 24 FPS or 30 FPS, that is, 24 or 30 frames per second, but the exposure time of each frame is long, generally more than 40 ms; this is close to the lowest frame duration the human eye can accept, and at any slower rate the human eye perceives a series of separate photos rather than continuous video.
- because the frame rate of the video is low, slight stuttering of the picture affects the user's perception; when the user pauses the video, moving objects in the frame appear blurred, playback fluency is low, and the user's viewing experience of the video is poor.
- the CPU obtains the video file to be played sent by the client, obtains the decoded video data after decoding, and sends the video data to the GPU.
- the GPU includes an image processing module, and the image processing module may process the image data, for example, perform display enhancement processing such as increasing the brightness or adjusting the image contrast to achieve an ultra-clear visual effect, and may also perform a resolution adjustment operation on the image.
- the rendering result is put into the frame buffer, and then the video controller will read the data in the frame buffer line by line according to the line synchronization (HSync) signal, and pass it to the display for display after digital-to-analog conversion.
- HSync: horizontal (line) synchronization signal
- the above-mentioned image processing module may also be in the CPU, which is not limited herein.
- the terminal performs image optimization processing on the video when playing the video. For example, in order to achieve the smoothness of video playback and avoid blurring of the video playback screen, during video playback, frame insertion processing will be performed between multiple consecutive video frames.
- Motion estimation calculates the motion trajectory of objects in the picture, generates new frames for interpolation, and improves the smoothness of video playback.
- “frames” here refers to the number of frames per second (Frames Per Second, FPS); the more frames per second, the smoother the displayed picture.
- FPS: Frames Per Second
- motion estimation may be performed by calculating the vector displacement of a layer between two consecutive frames of images.
- the motion trajectory of the object in the video frame may also be predicted based on the picture in the current frame.
- the first image 201 and the second image 202 shown in FIG. 2 are two consecutive frames of images in the video. It can be seen from the time axis that the first image 201 is the frame immediately before the second image 202. By analyzing the two frames, the moving objects in the first image 201 can be determined: in the two consecutive frames, the circular pattern moves from top to bottom and the triangular pattern moves from bottom to top, that is, the moving objects in the first image 201 are the circular pattern and the triangular pattern.
- the third image 203 after frame insertion processing is shown in FIG. 3.
- the position of the circular pattern in the third image 203 is located between the position of the circular pattern in the first image 201 and its position in the second image 202.
- likewise, the position of the triangle pattern in the third image 203 is located between the position of the triangle pattern in the first image 201 and its position in the second image 202.
- therefore, the moving objects in the third image 203 lie on the motion trajectories of the moving objects between the first image 201 and the second image 202; that is, the third image 203 can be regarded as a transition image between the first image 201 and the second image 202.
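The midpoint interpolation described above can be sketched in a few lines of code. This is only an illustration of the idea, not the patent's actual algorithm; the object names, coordinates, and the linear-motion assumption are all invented for the example:

```python
def interpolate_positions(pos_a, pos_b, t=0.5):
    """Linearly interpolate object positions between two frames.

    pos_a, pos_b: dicts mapping object name -> (x, y) centre position.
    t: interpolation factor; t=0.5 yields the midpoint ("third image").
    """
    return {
        name: (
            pos_a[name][0] + t * (pos_b[name][0] - pos_a[name][0]),
            pos_a[name][1] + t * (pos_b[name][1] - pos_a[name][1]),
        )
        for name in pos_a
    }

# As in FIG. 2: the circle moves top -> bottom, the triangle bottom -> top.
frame1 = {"circle": (40, 10), "triangle": (40, 90)}
frame2 = {"circle": (40, 50), "triangle": (40, 50)}
inserted = interpolate_positions(frame1, frame2)  # the inserted "third image"
```

Using several values of `t` (e.g. 0.25, 0.5, 0.75) would generate multiple inserted frames along the same trajectory.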
- embodiments of the present application provide a video processing method, device and video player, which can determine an area in a video frame that needs to be optimized, and perform an optimization operation on the area, instead of performing an optimization operation on the entire image , which can reduce the power consumption of the terminal.
- the electronic device 100 includes a processor 110 , a screen 120 and a video player 200 .
- the processor 110 is connected to the video player 200
- the video player 200 is connected to the screen 120 .
- the electronic device 100 may be an electronic device capable of running an application program, such as a smart phone, a tablet computer, an electronic book, or the like.
- the electronic device 100 in the present application also includes a memory and one or more application programs, wherein the one or more application programs may be stored in the memory and configured to be executed by the one or more processors 110, and the one or more programs are configured to execute the methods described in the method embodiments of this application.
- the memory may include random access memory (Random Access Memory, RAM) or read-only memory (Read-Only Memory, ROM), and may be used to store instructions, programs, code, sets of code, or sets of instructions.
- the memory may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.), Instructions and the like for implementing the various method embodiments described below.
- the storage data area may also store data (such as phone book, audio and video data, chat record data) created by the electronic device in use.
- the processor 110 is used for performing drawing operations.
- FPGA: Field-Programmable Gate Array
- PLA: Programmable Logic Array
- the processor 110 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like.
- the CPU mainly handles the operating system, user interface and application programs, etc.
- the GPU is used for rendering and drawing of the display content
- the modem is used for processing wireless communication signals. It can be understood that, the above-mentioned modem may also not be integrated into the processor 110, and is implemented by a communication chip alone.
- the processor 110 may be a graphics processor, which is used for a drawing operation of a video.
- the drawing operation may include converting a vector image of the video into a bitmap based on the resolution of the screen; the resolution of the converted bitmap is the same as the resolution of the screen, so that each image area within a video frame can correspond to a display area of the screen.
- the video player 200 is configured to perform image optimization processing on the video frames, and then send the optimized video frames to the screen 120 .
- the video player 200 is connected to the driving circuit 121 of the screen 120 .
- the video player 200 includes a data processor 210 and an image processing module 220.
- the image processing module 220 can perform image optimization processing on the image data that needs to be displayed in the corresponding display area.
- the image processing module 220 can be a DSP chip or a motion compensation (Motion Estimation and Motion Compensation, MEMC) chip, and the data processor 210 is used to analyze the area to be optimized in the image and to control the image processing module 220 to perform image optimization operations.
- MEMC: Motion Estimation and Motion Compensation
- the screen includes a drive circuit 121 and a pixel unit 30.
- the drive circuit 121 is connected to the pixel units 30. As shown in FIG. 6, the drive circuit 121 is connected to the data lines 301 of the screen 120; the video player 200 sends the image data of the video frame to the drive circuit 121, which generates display data and sends it to each pixel unit 30 through the data lines 301, so as to control the display content of each pixel unit 30 and thus the content displayed on the screen, while the gate lines 302 control when each pixel unit 30 is lit, so that the display content can be displayed line by line.
- the screen includes a plurality of display areas 122, and each of the display areas 122 corresponds to at least one of the image processing modules.
- each image processing module is used to process the display content of at least one display area 122.
- the correspondence between the display area 122 and the image processing module may be a data-processing correspondence, that is, the image processing module processes the display content of the corresponding display area; it need not be a correspondence of installation position, that is, the image processing module may not be installed at the position corresponding to the display area.
- the image processing module may also be installed at the position corresponding to the display area, which is not limited here.
- FIG. 8 shows a video processing method provided by an embodiment of the present application.
- the method is applied to the above-mentioned electronic device.
- the electronic device is provided with a video player, and the method may be executed by the video player.
- the method includes: S801 to S804.
- S801 Acquire the area to be optimized and the area not to be optimized in the first video frame of the target video.
- the area to be optimized is an area that needs to perform the image optimization process in the embodiment of the present application
- the non-to-be-optimized area is an area that does not need to undergo the image optimization processing of the embodiments of the present application. It should be noted that this does not mean that no optimization operation can be performed on the non-to-be-optimized area; it only means that the image optimization processing of the present application is not required there, and optimization operations other than that processing can still be performed.
- the first video frame of the target video may be the video frame currently to be played by the electronic device; specifically, the electronic device has finished playing the frame preceding the first video frame and is about to play the first video frame.
- the first video frame of the target video may also be the current video frame to be processed; specifically, the electronic device may pre-process multiple video frames, so as to avoid video freezes caused by excessive per-frame processing time during playback.
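The pre-processing of multiple frames ahead of playback can be sketched as a small ready-frame queue. The class name, the look-ahead depth of three frames, and the string frame placeholders are assumptions made for this illustration only:

```python
from collections import deque

class FrameQueue:
    """Hold frames that have already been optimized, ahead of display."""

    def __init__(self, depth=3):
        self.ready = deque()   # processed frames waiting to be shown
        self.depth = depth     # how far ahead of playback we process

    def need_more(self):
        # True while fewer than `depth` frames are buffered ahead.
        return len(self.ready) < self.depth

    def push_processed(self, frame):
        self.ready.append(frame)

    def next_for_display(self):
        # The display side pops frames in order; None means a stall.
        return self.ready.popleft() if self.ready else None

q = FrameQueue()
while q.need_more():                      # pre-process until the buffer is full
    q.push_processed("optimized-frame-%d" % len(q.ready))
first = q.next_for_display()
```

Because optimization runs while earlier frames are still on screen, a momentarily slow frame does not stall the display as long as the buffer is non-empty.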
- the image optimization processing in the embodiments of the present application or the video processing methods in the embodiments of the present application may be performed in an off-screen rendering buffer.
- an off-screen rendering buffer is set in the GPU in advance.
- the GPU will call the rendering client module to render and synthesize the multi-frame image data to be rendered and then send it to the display screen for display; the rendering client module can be an OpenGL module.
- the final location of the OpenGL rendering pipeline is in the framebuffer.
- a framebuffer is a series of two-dimensional pixel storage arrays, including color buffers, depth buffers, stencil buffers, and accumulation buffers.
- OpenGL uses the framebuffer provided by the windowing system.
- OpenGL's GL_ARB_framebuffer_object extension provides a way to create additional Frame Buffer Objects (FBOs). Using the framebuffer object, OpenGL can redirect the framebuffer originally drawn to the window into the FBO.
- FBO Frame Buffer Object
- the video frame to be displayed needs to be put into the frame buffer (see FIG. 1), after which the video controller reads the data in the frame buffer line by line according to the HSync signal and passes it to the display after digital-to-analog conversion. Therefore, after the target video is acquired, multiple video frames of the target video are put into the off-screen rendering buffer, at least the image optimization processing is performed there, and the processed video frames are then put into the frame buffer to await display. In this way, when the screen refresh arrives, the video frame is guaranteed to have been processed and placed in the frame buffer for display.
- the area to be optimized in the first video frame may be a target area corresponding to a specified type of target.
- all contour information in the first video frame is extracted through target extraction or a clustering algorithm, and then the category of the object corresponding to each contour line is looked up in a pre-learned model, where the categories include human body, animal, mountains, rivers, lakes, buildings, roads, etc.
- the outline and feature information of the target can be collected.
- the target object is a human body
- face feature extraction can be performed on the target object, wherein the face feature extraction method may include a knowledge-based characterization algorithm or a characterization method based on algebraic features or statistical learning.
- if the target is a wide landscape such as a lake, continuous mountains, rivers, or grasslands, it can be judged whether the image contains a long horizontal line, that is, a horizon; if a horizon is present, the target is judged to be a broad landscape.
- for this detection, all horizontal lines can be collected through the contour extraction method, and the line fitted by a relatively concentrated cluster of horizontal lines can be selected as the horizon, so that a broad landscape can be detected.
- it can also be determined that the target is a landscape according to color; for example, when a relatively concentrated area of green or khaki is detected, it is judged to be a mountain or a desert, and the target is determined to be a broad landscape.
- the detection of other objects such as rivers, buildings, and roads can also be performed by the above-mentioned detection algorithm, which will not be repeated here.
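As a hedged illustration of the color-based judgment above, a frame dominated by green pixels could be flagged as a landscape. The 40% threshold and the definition of "green-dominant" are assumptions for this sketch, not values from the patent:

```python
def looks_like_landscape(pixels, green_fraction=0.4):
    """Heuristic color test: if a large share of pixels is green-dominant,
    classify the frame as a broad landscape (e.g. mountains, grassland).

    pixels: flat list of (r, g, b) tuples with values in 0..255.
    """
    green = sum(1 for r, g, b in pixels if g > r and g > b and g > 80)
    return green / len(pixels) >= green_fraction

# A mostly-green test frame versus a frame with no green-dominant pixels.
grassy = [(30, 150, 40)] * 70 + [(120, 120, 200)] * 30
sky_only = [(120, 120, 200)] * 100
```

A real detector would also check that the green pixels are spatially concentrated, as the text requires; this sketch only counts them.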
- an object belonging to the specified category is used as the target; the position area of the target in the first video frame is then used as the area to be optimized, and the other areas are used as non-to-be-optimized areas.
- the to-be-optimized area may also be an image change area, that is, the target object corresponding to the to-be-optimized area is a moving object, and the specific implementation of determining the image change area may refer to subsequent embodiments.
- S802 Determine a designated display area of the screen corresponding to the area to be optimized.
- the resolution of the target video is consistent with the display resolution of the screen, so that each image area in the video corresponds to a display area of the screen; this is the first correspondence. Based on the first correspondence, the display area of the screen corresponding to the to-be-optimized area of the first video frame can be determined as the designated display area.
- S803 Control the designated image processing module corresponding to the designated display area to perform image optimization processing on the first image data in the area to be optimized.
- the second correspondence between each display area of the screen and the image processing modules may be preset, and the second correspondence includes the position information of each display area and the identifier of the corresponding image processing module, so that the image processing module corresponding to the designated display area can be found in the second correspondence and used as the designated image processing module.
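The two correspondences can be sketched together: because the video and screen resolutions match, a to-be-optimized bounding box maps directly onto display areas, and a preset table maps each display area to a module identifier. The 1920x1080 screen, the 2x2 grid, and the module ids are invented for illustration:

```python
# Assumed layout: a 1920x1080 screen split into a 2x2 grid of display areas.
AREA_W, AREA_H = 960, 540

# The "second correspondence": display-area grid position -> module id.
SECOND_CORRESPONDENCE = {(0, 0): "module_a", (1, 0): "module_b",
                         (0, 1): "module_c", (1, 1): "module_d"}

def designated_modules(box):
    """Return ids of the modules whose display areas overlap the box.

    box: (x0, y0, x1, y1) bounding box of the area to be optimized, in
    screen pixels (video resolution == screen resolution, the "first
    correspondence").
    """
    x0, y0, x1, y1 = box
    cols = range(x0 // AREA_W, (x1 - 1) // AREA_W + 1)
    rows = range(y0 // AREA_H, (y1 - 1) // AREA_H + 1)
    return sorted(SECOND_CORRESPONDENCE[(c, r)] for r in rows for c in cols)

# A box straddling the top two display areas needs both of their modules.
mods = designated_modules((900, 100, 1100, 400))
```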
- the image optimization process is used to improve the display effect of image data, including but not limited to improving display brightness and clarity, reducing picture blur, and increasing picture resolution.
- the image optimization process includes image parameter optimization of the image data, wherein the image parameter optimization includes at least one of exposure enhancement, denoising, edge sharpening, contrast increase or saturation increase.
- the exposure enhancement is used to improve the brightness of the image; based on the histogram of the image, the brightness value can be increased in the areas where the brightness value is too low.
- the brightness of the image can also be increased by nonlinear superposition, and the image data can be denoised.
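One simple reading of the low-brightness lift and the nonlinear superposition mentioned above is the formula I' = I + gain * I * (1 - I) applied only below a brightness cutoff. The cutoff and gain values are assumptions; the patent does not give concrete formulas:

```python
def enhance_exposure(gray, low_cut=0.3, gain=0.5):
    """Brighten only the low-brightness pixels of a greyscale image.

    gray: list of pixel intensities normalised to [0, 1].
    Pixels below low_cut are lifted with the nonlinear superposition
    I' = I + gain * I * (1 - I); brighter pixels are left unchanged.
    """
    return [i + gain * i * (1 - i) if i < low_cut else i for i in gray]

out = enhance_exposure([0.1, 0.2, 0.8])
```

The superposition term `I * (1 - I)` lifts dark pixels most around mid-grey and leaves both pure black and pure white fixed, so the enhancement cannot clip highlights.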
- the image optimization process may be an image frame insertion process, and for specific implementation details, please refer to the following embodiments.
- S804 Obtain at least one image based on the first image data after the image optimization process has been performed and the second image data corresponding to the non-to-be-optimized area, as a second video frame.
- the second image data corresponding to the non-to-be-optimized area is data that has not undergone the image optimization processing.
- the image data in the non-to-be-optimized area of the first video frame may be used directly as the second image data, and the first image data after the image optimization processing may be spliced with the second image data into a second video frame, the size of the second video frame being the same as that of the first video frame.
- alternatively, the image data in the non-to-be-optimized area of the first video frame can be obtained as initial data, and the second video frame can be obtained after processing the initial data; the processing method can be a modification of the parameters of the initial data, with a change different from the image optimization processing. For example, if the image optimization processing is image frame insertion processing, the processing of the initial data may be a resolution adjustment operation, which is not specifically limited here.
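The splicing step can be sketched as pasting the optimized patch back over the untouched data, yielding a second video frame of the same size as the first. Modelling the frame as a plain 2D list of pixel values is an illustration choice, not the patent's data format:

```python
def splice(frame, optimized_patch, box):
    """Compose the second video frame from both kinds of image data.

    frame: 2D list (rows of pixel values) holding the second image data;
    optimized_patch: 2D list holding the first image data after optimization;
    box: (x0, y0) top-left corner of the to-be-optimized area.
    Returns a new frame; the input frame is left unmodified.
    """
    out = [row[:] for row in frame]
    x0, y0 = box
    for dy, patch_row in enumerate(optimized_patch):
        for dx, v in enumerate(patch_row):
            out[y0 + dy][x0 + dx] = v
    return out

base = [[0] * 4 for _ in range(3)]            # non-optimized frame data
second_frame = splice(base, [[9, 9], [9, 9]], (1, 1))
```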
- each image processing module sends image data to a video synthesis module, and the video synthesis module synthesizes the image data into a second video frame.
- the video player further includes a video synthesis module 230; the plurality of image processing modules 220 are connected to the video synthesis module 230, and the video synthesis module 230 is connected to the driving circuit 121. The data processor 210 controls the image processing modules other than the designated image processing module to send the second image data of the non-to-be-optimized area in the first video frame to the video synthesis module 230, the video synthesis module 230 synthesizes the image data sent by the designated image processing module and the other image processing modules into a second video frame and sends it to the driving circuit 121, and the driving circuit 121 drives the pixel units of the screen to display the second video frame.
- the process of displaying the second video frame is to play the first video frame and the second video frame in sequence.
- the data processor sends the display content of each image area to the corresponding image processing module 220 according to the display content of each area in the video frame, and each image processing module 220 then sends the image data to the video synthesis module 230, where the final image data that needs to be displayed, that is, the video frame, is synthesized.
- after each image processing module 220 acquires the image data of its image area of the video frame, it can determine whether to perform image optimization processing according to whether the image data belongs to the area to be optimized; the image processing module 220 can then temporarily store the data and send it to the video synthesis module 230 for synthesis. In some embodiments, after a certain video frame is played, the image data temporarily stored in each image processing module 220 may be cleared.
- the image processing module corresponding to the non-to-be-optimized area of the first video frame is controlled to retain its image data, and when synthesizing the second video frame, the image processing module corresponding to the non-to-be-optimized area sends the retained image data to the video synthesis module 230.
- the image processing module corresponding to the area to be optimized is named the first image processing module.
- the image processing module corresponding to the non-to-be-optimized area is named the second image processing module.
- the optimization processing operation is not performed on the first video frame, and the optimization effect of the first video frame is represented by the second video frame.
- the first image data is sent to the first image processing module and temporarily stored;
- the second image data is sent to the second image processing module and temporarily stored;
- the first image processing module sends the first image data to the video synthesis module;
- the second image processing module sends the second image data to the video synthesis module;
- the video synthesis module synthesizes the first image data and the second image data for display.
- the second image processing module can be controlled to directly send the second image data to the video synthesis module, that is, the second image processing module continues to use the second image data of the first video frame, so as to avoid sending the second image data to the second image processing module again.
- based on a hold instruction, the second image processing module directly sends the image data used when the previous video frame (i.e., the first video frame) was displayed to the video synthesis module. Then, after completing the image optimization processing of the first image data, the first image processing module sends the first image data on which the image optimization processing has been performed to the video synthesis module.
- the embodiment of the present application can control the image processing module for displaying the area to be optimized to perform image optimization processing, while the image processing module for displaying the non-to-be-optimized area may not perform image optimization processing; compared with performing image optimization processing on the entire video frame, this can reduce the power consumption of the electronic device.
- the display area of the screen corresponds to a plurality of image processing modules, and the designated image processing module corresponding to the designated display area is controlled to perform image optimization processing on the first image data in the to-be-optimized area; compared with using the GPU or CPU of the electronic device to perform image optimization processing on the entire image area of the video frame, the power consumption of the electronic device can be reduced.
- FIG. 10 shows a video processing method provided by an embodiment of the present application.
- the method is applied to the above-mentioned electronic device.
- the electronic device is provided with a video player, and the execution body of the method may be a video player.
- the data processor in the video player can also be a processor in the electronic device; for example, it can be the graphics processor of the electronic device, which is not limited here.
- the method includes: S1001 to S1005.
- S1001 Acquire an image change area and an image still area in a first video frame of a target video.
- the image change area and the image still area may be determined based on attribute information of objects in the first video.
- the attribute information may include a dynamic category
- the dynamic category may include a motion category and a static category. If the dynamic category of the object is the motion category, it indicates that the object is a moving object, that is, in consecutive video frames, the object is in a state of motion. If the dynamic category of the object is the static category, it indicates that the object is a stationary object, that is, in consecutive video frames, the object is in a stationary state.
- the static state may be that the motion range of the object is less than a specified range, and the motion range may be determined according to the displacement and the angle of the motion.
- when the movement range of a tree is relatively small, the tree can be considered to be in a static state; in the case of a strong wind, the movement range of the tree is relatively large, and the tree can be considered to be in a state of motion.
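A hedged sketch of such a static/motion decision, with assumed thresholds for the displacement and rotation angle that make up the "motion range":

```python
def dynamic_category(displacement, angle_deg, max_disp=2.0, max_angle=5.0):
    """Classify an object as 'motion' or 'static'.

    An object whose motion range (displacement in pixels and rotation in
    degrees, thresholds assumed here) stays below the specified range is
    treated as stationary -- e.g. a tree swaying slightly -- otherwise it
    is treated as a moving object.
    """
    if displacement < max_disp and abs(angle_deg) < max_angle:
        return "static"
    return "motion"
```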
- the buildings marked by the solid box 1101 belong to the stationary class
- the vehicles and pedestrians marked by the dotted box 1102 belong to the moving class.
- the dynamic category of the object in the image can be recognized by the image recognition model.
- sample data can be obtained in advance, and the sample data includes a plurality of sample images, and the object in each sample image has a corresponding label.
- the labels include a first label and a second label. The first label is used to indicate that the dynamic category of the object is the motion category, and the second label is used to indicate that the dynamic category of the object is the static category. Through continuous learning, the model becomes able to identify moving objects and stationary objects in an image.
- the image recognition model can distinguish moving vehicles from parked vehicles, for example, based on the position of the vehicle on the road and the traffic status of the road; the image recognition model can also distinguish static pedestrians from dynamic pedestrians, for example, based on their posture and position.
- a moving object in the first video frame is determined, an image change area in the first video frame is determined based on the moving object, and an image still area in the first video frame is determined based on the stationary objects, wherein the image change area is the to-be-optimized area, and the image still area is the non-to-be-optimized area.
- the image change area and the image still area in the first video frame may also be determined according to consecutive frames. Specifically, a video frame adjacent to the first video frame in the target video is determined as a third video frame; the image change area in the first video frame is determined based on the first video frame and the third video frame, and the area outside the image change area in the first video frame is used as the image still area.
- the video frame adjacent to the first video frame may be a frame before the first video frame in the target video, or may be a frame after the first video frame in the target video.
- an implementation manner of determining the video frame adjacent to the first video frame in the target video as the third video frame may be to determine the next video frame of the first video frame in the target video as the third video frame.
- the moving object in the first video frame is determined based on the vector displacement of layers calculated between two consecutive frames of images, that is, an object whose displacement or angle changes between the first video frame and the adjacent frame can be identified as a moving object.
- an object whose movement magnitude is greater than a specified magnitude may be regarded as a moving object.
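One illustrative way to derive the image change area from two consecutive frames is a thresholded frame difference; the bounding-box approach and the threshold below are assumptions for the sketch, not the patent's exact motion-estimation algorithm:

```python
def image_change_area(first_frame, third_frame, threshold=8):
    """Return the bounding box (top, left, bottom, right) of all pixels
    that differ between the first video frame and the adjacent (third)
    video frame; the rest of the first frame is the image still area.
    Frames are equal-sized 2D lists of luminance values; returns None
    if the frames do not differ anywhere."""
    rows = [r for r, (a, b) in enumerate(zip(first_frame, third_frame))
            if any(abs(x - y) > threshold for x, y in zip(a, b))]
    if not rows:
        return None  # everything is image still area
    cols = [c for c in range(len(first_frame[0]))
            if any(abs(first_frame[r][c] - third_frame[r][c]) > threshold
                   for r in range(len(first_frame)))]
    return (rows[0], cols[0], rows[-1], cols[-1])
```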
- the first video frame 1201 is the previous video frame of the third video frame 1202.
- the current video frame to be played is the first video frame 1201, and the next video frame to be played is the third video frame 1202; based on the first video frame 1201 and the third video frame 1202, it can be determined that, in the first video frame, the moving object is the triangle pattern and the stationary object is the circle pattern.
- the moving objects of the first video frame are determined; for example, the above-mentioned moving objects, or objects whose motion amplitude is greater than the specified amplitude, are used as the moving objects of the first video frame, all moving objects of the first video frame are used as candidate objects, and the designated object is determined based on reference information.
- the reference information is a user portrait
- the user portrait may include user basic tags, user interest preference tags, user equipment attributes and behavior tags, user application behavior tags, user social tags, and psychological value tags.
- the basic user tag corresponds to the user identity information, which refers to the basic demographic attribute tag of the user (including gender, age, location, etc.)
- the characteristic data corresponding to the tag is the user identity data
- the data acquisition methods include user reporting, algorithm mining, etc.
- the user interest preference tag corresponds to user interest information
- the user interest preference tag corresponds to the user's interest content, which can also be obtained by user reporting, algorithm mining, etc.
- the user equipment attribute tag corresponds to the attribute information of the product used by the user, and the corresponding feature data is the configuration parameters of the product used by the user, such as memory capacity, battery capacity or screen size, which can be obtained by user reporting or collected through the SDK component in the user device.
- the user equipment behavior tag corresponds to the operation data of the user operating the mobile terminal
- the corresponding feature data is the data generated by the user operating the mobile terminal, and the acquisition method may be collected through the SDK component in the operating system of the mobile terminal.
- the user application behavior tag corresponds to the operation data of the user operating the application program installed in the mobile terminal
- the corresponding feature data is the data generated by the user operating the application program installed in the mobile terminal
- the acquisition method may be collection through the application program installed in the mobile terminal.
- the user's social tag corresponds to the user's social information, which can be obtained through the user's social data on various social networking sites or social APPs.
- the social data may include the user's number of friends, number of comments, number of likes, number of followers, etc.
- the psychological value tag corresponds to the user's value data, which can be the user's character and views of right and wrong; specifically, it can be determined by obtaining the content of the user's messages on a social platform. From the user's evaluation of a certain point of view, keywords indicating whether the user supports that viewpoint can be extracted, so as to determine the user's views of right and wrong.
- the reference information may be a user interest preference label, and a designated object is selected from the candidate objects based on the user interest preference label, and the image area corresponding to the designated object in the first video frame is used as the image change area, Other image areas are used as image still areas.
- the specified object is an object of interest to the user, that is, the specified object matches the user's interest preference tag.
- the reference information may be an attribute of the user equipment, and a designated object is selected from the candidate objects based on the attribute of the user equipment. Specifically, for some moving objects, because the moving speed is too fast or the object is relatively large, better hardware support is required when image optimization is performed on the object.
- the user equipment attribute may include the computing capability of the processor of the terminal used by the user, and based on the computing capability, an object matching the computing capability is selected from the candidate objects as the designated object, where an object matches the computing capability if a processor with that computing capability can process the image data of the object at a processing speed not less than the specified speed.
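A toy sketch of matching candidate objects against the terminal's computing capability; the per-object cost model, the budget, and the object names are invented for illustration only:

```python
def select_designated_object(candidates, ops_per_frame_budget):
    """Pick, from the candidate moving objects, those whose estimated
    processing cost fits the terminal's computing capability, so that the
    processing speed does not fall below the specified speed.

    Each candidate is a (name, estimated_cost) pair; both the names and
    the scalar cost model are illustrative assumptions.
    """
    return [name for name, cost in candidates if cost <= ops_per_frame_budget]
```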
- the reference information may be a selected target pre-input by the user.
- the user may select a selected object in the designation interface, and then select the designated object among the candidate objects based on the selected object. Specifically, the object that matches the selected target object among the candidate objects may be used as the designated object.
- a touch gesture input by a user on a specified interface is acquired, and a selected target object corresponding to a target position in the specified interface is determined, wherein the target position is a position corresponding to the touch gesture.
- the specified interface may be an interface for displaying a specified image of the target video; the specified image of the target video may be a thumbnail of the target video, and the specified interface may be a details interface of the target video, in which the thumbnail of the target video and the description information of the target video are displayed. The description information may include summary information of the target video, a video character list, etc., where the video character list includes the identity marks of at least some of the characters appearing in the target video; for example, a character may be an actor of the target video.
- the video detail interface includes a video thumbnail 1301, and a plurality of characters are displayed in the video thumbnail 1301.
- the characters are characters that will appear in the target video, and the video details interface also includes a video character list 1302; as shown in Figure 13, five video characters are displayed.
- the identity identifier may be identity information such as the character's avatar or name.
- based on the video character list of the specified interface, the identity mark corresponding to the target position is determined, and the person corresponding to the identity mark is used as the selected target object.
- the user can select an object in the video thumbnail 1301 as the selected target; for example, when the video thumbnail 1301 is displayed on the screen and the user touches a certain area on the video thumbnail 1301, the person corresponding to that area is used as the selected target.
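Resolving the touch position to a selected target can be sketched as a point-in-rectangle lookup; the region table and coordinates below are hypothetical:

```python
def object_at_touch(touch_xy, object_regions):
    """Resolve a touch gesture on the specified interface to a selected
    target: return the object whose region contains the target position.

    `object_regions` maps an object id to its on-screen rectangle
    (left, top, right, bottom); both are illustrative assumptions.
    """
    x, y = touch_xy
    for obj, (left, top, right, bottom) in object_regions.items():
        if left <= x <= right and top <= y <= bottom:
            return obj
    return None  # the touch did not land on any known object
```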
- the specified interface may be a video playback interface, that is, an interface in which the currently played video frame of the target video is displayed.
- as shown in Figure 14, what is displayed on the screen is a picture in the target video. The user touches the "rooster" in the picture with a finger, and the electronic device detects that the screen is touched by the user and then determines the area corresponding to the touch gesture input by the user.
- the target area in the image corresponding to the touch gesture is the target area corresponding to the rooster.
- the electronic device can choose to redisplay the picture, that is, redisplay the picture after performing video enhancement processing on the area corresponding to the rooster; alternatively, for the next frame of image, it is determined whether the moving objects in the next frame include the rooster, and if so, the image optimization processing is performed on the rooster.
- S1002 Determine a designated display area of the screen corresponding to the image change area.
- S1003 Control the designated image processing module corresponding to the designated display area to perform image frame interpolation processing on the first image data in the to-be-optimized area.
- S1004 Obtain at least one image based on the first image data after the image optimization process has been performed and the second image data corresponding to the non-to-be-optimized area, as a second video frame.
- the video player includes: a decoding module 240, a video buffer 230, an image analysis module 211, a control module 212, a plurality of image processing modules, and a video synthesis module.
- the video player may be regarded as a plug-in chip of the graphics processor 400 , that is, it does not belong to the chip of the graphics processor 400 .
- the client is used to provide the target video, that is, the client initiates a playback request of the target video
- the graphics processor 400 is used to perform a drawing operation, and the drawing operation may be to convert the video frame of the target video into a bitmap to obtain a layer of the video frame , for subsequent rendering and image optimization processing.
- the decoding module 240 is provided with a MIPI RX interface for receiving the first video frame and the third video frame input by the graphics processor 400, and the decoding module 240 decodes the first video frame and the third video frame to obtain the image data of the first video frame and the third video frame.
- the video buffer 230 buffers the image data of the first video frame and the third video frame.
- the image analysis module 211 determines the image change area and the image still area in the first video frame based on the image data of the first video frame and the third video frame.
- the dotted triangle and circle patterns represent the third video frame.
- the image change area may be the area between the first position of the moving object in the first video frame and the third position of the moving object in the third video frame.
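That area spanning the object's positions in both frames can be computed as the union of two bounding boxes, each assumed to be given as (top, left, bottom, right):

```python
def span_area(box_first, box_third):
    """The image change area spans from the moving object's position in
    the first video frame to its position in the third video frame:
    here sketched as the union of the two bounding boxes, each given as
    (top, left, bottom, right)."""
    return (min(box_first[0], box_third[0]),
            min(box_first[1], box_third[1]),
            max(box_first[2], box_third[2]),
            max(box_first[3], box_third[3]))
```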
- the control module 212 determines the designated display area of the screen corresponding to the image change area based on the position information of the image change area, controls the designated image processing module to perform image optimization processing on the first image data and send it to the video synthesis module, and controls the image processing modules other than the designated image processing module to send the second image data in the image still area in the first video frame to the video synthesis module.
- the display area corresponding to each image processing module is smaller than the image change area, so the designated image processing module is a plurality of image processing modules; since the area corresponding to the image data processed by each designated image processing module is smaller, the edges of large and small objects can be identified more accurately.
- the image change area corresponds to the change area of the moving object in the first video frame; in fact, the movement change may be only a partial position change or a partial area change of the moving object. For example, if the moving object is a person, it is possible that only the person's fingers or eyes change.
- each image processing module independently processes the image data of a small area, so the vector operation on the image data in that area is more accurate, and the edge transitions and details of the object are clearer.
- the area 1701 shown by the dashed thick line in Figure 17 is the area that needs frame interpolation processing, then the image in this area 1701 can be sent to the image processing module corresponding to this area for vector operation.
- if the frame interpolation processing were performed on the entire image, the area 1701 would not be easily identified due to its complex contour lines.
- S1005 Play the first video frame and the second video frame in sequence.
- the second video frame may be one image obtained based on the first image data on which the image optimization processing has been performed and the second image data corresponding to the non-to-be-optimized area, or multiple such images may be obtained.
- when multiple images are obtained, the positions of the moving object in the multiple images are determined based on the motion position or rotation angle predicted from the motion trajectory of the object in the first video frame. For example, if the moving object in the first video frame is a vehicle and the driving direction of the vehicle is due north, then in the multiple images determined at one time based on the driving direction, the position of the vehicle in each image is further north than the vehicle in the first video frame, and the positions of the vehicle in the multiple images are successively further north.
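The one-time prediction of several successive positions along the trajectory can be sketched with a simple constant-velocity model (an assumption; here "due north" is taken as decreasing y in screen coordinates):

```python
def predict_positions(start_xy, velocity_xy, count):
    """Predict the moving object's position in `count` successively
    generated images from its motion trajectory: e.g. a vehicle heading
    due north sits a bit further north in each generated image.

    Constant per-image velocity is an illustrative assumption; the
    patent only requires positions consistent with the trajectory.
    """
    x, y = start_xy
    vx, vy = velocity_xy
    return [(x + vx * i, y + vy * i) for i in range(1, count + 1)]
```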
- the embodiment of playing the first video frame and the second video frame in sequence is to play the first video frame, the second video frame and the third video frame in sequence.
- the first video frame 1201, the second video frame 1801 and the third video frame 1202 are played in sequence.
- in the second video frame 1801, the position of the triangle pattern is located between the position of the triangle pattern in the first video frame 1201 and the position of the triangle pattern in the third video frame 1202; thus, through the interpolated-frame playback, the degree of blurring of the triangle pattern perceived when the first video frame 1201 and the third video frame 1202 are played can be reduced.
- the image processing module corresponding to the image still area keeps outputting the image data of the image still area in the first video frame 1201, or keeps outputting the image data of the image still area in the third video frame 1202.
- the positions of the circle pattern in the first video frame 1201, the second video frame 1801 and the third video frame 1202 do not change. Therefore, the image processing module corresponding to the area of the circle pattern keeps outputting the image data of the circle pattern in the first video frame 1201 or the image data of the circle pattern in the third video frame 1202.
- the video player in this embodiment of the present application can be divided into N image processing modules, each image processing module is responsible for the vector motion calculation and the output of new frame data in a different area of the picture, and no hardware frame-interpolation algorithm processing is performed for the static part of the picture, realizing a low-power frame insertion technique; at the same time, this is of great help in accurately identifying the edges of large and small objects, so that refined frame-insertion picture operations can be performed to improve the overall frame-insertion display effect.
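The per-module dispatch just described — newly interpolated output from change-area modules, held data simply re-output everywhere else — can be sketched as follows (module ids and data payloads are placeholders):

```python
def synthesize_frame(modules, change_modules, optimized, held):
    """Low-power synthesis: modules covering the image change area output
    newly interpolated data, while every other module re-outputs the data
    it held from the first video frame, so no interpolation runs there.

    `optimized` and `held` map module id -> image data; the ids and
    payloads here are illustrative."""
    return {m: (optimized[m] if m in change_modules else held[m])
            for m in modules}
```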
- FIG. 20 shows a structural block diagram of a video processing apparatus 2000 provided by an embodiment of the present application.
- the apparatus is applied to an electronic device.
- the electronic device includes a screen and a plurality of image processing modules, and the screen includes a plurality of display areas.
- Each of the display areas corresponds to at least one of the image processing modules.
- the video processing apparatus 2000 may include: an acquisition unit 2001 , a determination unit 2002 , an optimization unit 2003 and a processing unit 2004 .
- the obtaining unit 2001 is configured to obtain the to-be-optimized area and the non-to-be-optimized area in the first video frame of the target video.
- the obtaining unit 2001 is further configured to obtain the image change area and the image still area in the first video frame of the target video, wherein the image change area is the to-be-optimized area, and the image still area is the non-image area. area to be optimized.
- the image optimization processing includes image frame interpolation processing.
- the obtaining unit 2001 is further configured to determine a video frame adjacent to the first video frame in the target video as a third video frame, determine the image change area in the first video frame based on the first video frame and the third video frame, and use the area outside the image change area in the first video frame as the image still area.
- the obtaining unit 2001 is further configured to determine the next frame of the first video frame in the target video as a third video frame.
- the determining unit 2002 is configured to determine a designated display area of the screen corresponding to the area to be optimized.
- the optimization unit 2003 is configured to control the designated image processing module corresponding to the designated display area to perform image optimization processing on the first image data in the area to be optimized.
- the processing unit 2004 is configured to obtain at least one image based on the first image data after the image optimization process has been performed and the second image data corresponding to the non-to-be-optimized area, as a second video frame.
- the video processing apparatus also includes a display unit for playing the first video frame and the second video frame in sequence, specifically, for playing the first video frame, the second video frame and the third video frame in sequence.
- the coupling between the modules may be electrical, mechanical or other forms of coupling.
- each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module.
- the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules.
- FIG. 21 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
- the computer-readable medium 2100 stores program codes, and the program codes can be invoked by the processor to execute the methods described in the above method embodiments.
- the computer-readable storage medium 2100 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
- the computer-readable storage medium 2100 includes a non-transitory computer-readable storage medium.
- Computer readable storage medium 2100 has storage space for program code 2110 to perform any of the method steps in the above-described methods. These program codes can be read from or written to one or more computer program products.
- Program code 2110 may be compressed, for example, in a suitable form.
Abstract
The present application discloses a video processing method and apparatus, a video player, an electronic device and a readable medium, relating to the field of display technology. The method includes: acquiring a to-be-optimized area and a non-to-be-optimized area in a first video frame of a target video; determining a designated display area of the screen corresponding to the to-be-optimized area; controlling a designated image processing module corresponding to the designated display area to perform image optimization processing on first image data in the to-be-optimized area; and obtaining at least one image, as a second video frame, based on the first image data on which the image optimization processing has been performed and second image data corresponding to the non-to-be-optimized area. The present application can control the image processing module used to display the to-be-optimized area to perform image optimization processing, while the image processing module used to display the non-to-be-optimized area may not perform image optimization processing; compared with performing image optimization processing on the entire first video frame, this can reduce the power consumption of the electronic device.
Description
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application No. 202110401346.2, entitled "Video processing method, apparatus, video player, electronic device and readable medium", filed with the Chinese Patent Office on April 14, 2021, the entire contents of which are incorporated herein by reference.
The present application relates to the field of display technology, and more specifically, to a video processing method and apparatus, a video player, an electronic device and a readable medium.
With the arrival of the mobile Internet era, smart terminals have changed the lifestyles of many people and their demands on traditional communication tools. People are no longer satisfied with the appearance and basic functions of a terminal, and have begun to pursue richer, stronger and more personalized functional services. To better satisfy consumers' experience with terminals, a terminal usually performs optimization processing on a video while playing it, so as to improve the user's perception of the video; however, this increases the power consumption of the terminal.
Summary of the Invention
The present application proposes a video processing method and apparatus, a video player, an electronic device and a readable medium to remedy the above-mentioned defects.
In a first aspect, an embodiment of the present application provides a video processing method applied to an electronic device, where the electronic device includes a screen and a plurality of image processing modules, the screen includes a plurality of display areas, and each display area corresponds to at least one image processing module. The method includes: acquiring a to-be-optimized area and a non-to-be-optimized area in a first video frame of a target video; determining a designated display area of the screen corresponding to the to-be-optimized area; controlling a designated image processing module corresponding to the designated display area to perform image optimization processing on first image data in the to-be-optimized area; and obtaining at least one image, as a second video frame, based on the first image data on which the image optimization processing has been performed and second image data corresponding to the non-to-be-optimized area.
In a second aspect, an embodiment of the present application further provides a video processing apparatus applied to an electronic device, where the electronic device includes a screen and a plurality of image processing modules, the screen includes a plurality of display areas, and each display area corresponds to at least one image processing module. The video processing apparatus includes: an acquisition unit, a determination unit, an optimization unit and a processing unit. The acquisition unit is configured to acquire a to-be-optimized area and a non-to-be-optimized area in a first video frame of a target video. The determination unit is configured to determine a designated display area of the screen corresponding to the to-be-optimized area. The optimization unit is configured to control a designated image processing module corresponding to the designated display area to perform image optimization processing on first image data in the to-be-optimized area. The processing unit is configured to obtain at least one image, as a second video frame, based on the first image data on which the image optimization processing has been performed and second image data corresponding to the non-to-be-optimized area.
In a third aspect, an embodiment of the present application further provides a video player applied to an electronic device, where the electronic device includes a screen, the video player includes a data processor and a plurality of image processing modules, the screen includes a plurality of display areas, each display area corresponds to at least one image processing module, and the data processor is connected to each image processing module; the video player is configured to execute the above method.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including a screen and the aforementioned video player, where the video player and the screen are connected in sequence.
In a fifth aspect, an embodiment of the present application further provides a computer-readable medium, where the readable storage medium stores program code executable by a processor, and when the program code is executed by the processor, the processor is caused to execute the above method.
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can also be obtained from these drawings without creative effort.
FIG. 1 shows a block diagram of an image rendering architecture provided by an embodiment of the present application;
FIG. 2 shows a schematic diagram of two video frames provided by an embodiment of the present application;
FIG. 3 shows a frame-insertion effect diagram provided by an embodiment of the present application;
FIG. 4 shows a module block diagram of an electronic device provided by an embodiment of the present application;
FIG. 5 shows a module block diagram of a video player provided by an embodiment of the present application;
FIG. 6 shows a schematic diagram of the connection between the video player and the screen provided by an embodiment of the present application;
FIG. 7 shows a schematic diagram of multiple display areas of a screen provided by an embodiment of the present application;
FIG. 8 shows a method flowchart of a video processing method provided by an embodiment of the present application;
FIG. 9 shows a module block diagram of a video player provided by another embodiment of the present application;
FIG. 10 shows a method flowchart of a video processing method provided by another embodiment of the present application;
FIG. 11 shows a schematic diagram of an image change area and an image still area provided by an embodiment of the present application;
FIG. 12 shows a schematic diagram of a first video frame and a third video frame provided by an embodiment of the present application;
FIG. 13 shows a schematic diagram of a video details interface provided by an embodiment of the present application;
FIG. 14 shows a schematic diagram of a video playback interface provided by an embodiment of the present application;
FIG. 15 shows a module block diagram of a video player provided by yet another embodiment of the present application;
FIG. 16 shows a schematic diagram of an image change area provided by an embodiment of the present application;
FIG. 17 shows a schematic diagram of an image change area provided by another embodiment of the present application;
FIG. 18 shows a schematic diagram of the processing procedure of an image change area provided by an embodiment of the present application;
FIG. 19 shows a schematic diagram of the playback of the first video frame, the second video frame and the third video frame provided by an embodiment of the present application;
FIG. 20 shows a module block diagram of a video processing apparatus provided by an embodiment of the present application;
FIG. 21 shows a storage unit, provided by an embodiment of the present application, for storing or carrying program code implementing the video processing method according to the embodiments of the present application.
In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application.
With the arrival of the mobile Internet era, the popularity of smartphones has become a major trend in the mobile phone market. Such mobile smart terminals have changed the lifestyles of many people and their demands on traditional communication tools. People are no longer satisfied with the appearance and basic functions of a mobile phone, and have begun to pursue richer personalized functional services. Nowadays, more and more consumers focus their purchase decisions on application functions such as entertainment, Internet access, instant messaging and services, and better satisfying consumers' ultimate experience of these functions has become the goal of mobile phone manufacturers.
To better satisfy consumers' experience with terminals, a terminal usually performs optimization processing on a video while playing it, so as to improve the user's perception of the video. The optimization processing can improve the smoothness of video playback, the clarity of the picture, and so on.
For example, current videos generally use 24 FPS/30 FPS recording formats, i.e., 24 frames per second, but the exposure time is relatively long, generally above 40 ms, because this is the lowest limit acceptable to the human eye; any slower, and the human eye would recognize a sequence of photos rather than a dynamic video. Because the video frame rate is low, the slight stutter of the picture affects the user's viewing experience; when the user pauses the video, the picture of moving objects in the video is blurry, the smoothness of video playback is low, and the user's perception of the video is poor.
Specifically, the image rendering process is shown in FIG. 1. The CPU obtains the to-be-played video file sent by the client, decodes it to obtain decoded video data, and sends the video data to the GPU. The GPU includes an image processing module, which may process the image data, e.g., perform display enhancement processing such as increasing brightness and adjusting image contrast to achieve an ultra-clear visual effect, or perform a resolution adjustment operation on the image. Then, after rendering is completed, the rendering result is placed in a frame buffer, and the video controller reads the data of the frame buffer line by line according to the horizontal synchronization (HSync) signal and transfers it, after digital-to-analog conversion, to the display for display. In addition, it should be noted that the above image processing module may also be located in the CPU, which is not limited here.
In order to improve the effect of video playback, the terminal performs image optimization processing on the video when playing it. For example, in order to achieve smooth video playback and avoid blurry playback pictures, frame interpolation processing is performed between multiple consecutive video frames during playback. Frame interpolation processing detects the currently played picture of the video, performs motion estimation, calculates the motion trajectories of objects in the picture, and generates new frames for interpolation, thereby improving the smoothness of video playback. Here "frames" refers to the number of frames transmitted per second (Frames Per Second, FPS); the more frames per second, the smoother the displayed playback picture. For example, the above "frame interpolation" can raise a 30 FPS video to 60 FPS, greatly improving the user's viewing experience. The motion estimation may be performed by calculating the vector displacement of layers between two consecutive frames of images; of course, the motion trajectory of an object in a video frame may also be predicted based on the picture in the current frame.
如图2和3所示,图2所示的第一图像201和第二图像202为视频内的连续的两帧图像,通过时间轴可以看出,第一图像201为第二图像202的前一帧图像,通过分析该两帧图像,能够确定第一图像201内的运动物体,可以看出,在连续的两帧图像内,圆形图案由上向下移动,三角形图案由下向上移动,即第一图像201内的运动物体为圆形图案和三角形图案。插帧处理后的第三图像203如图3所示,可以看出,第三图像203内圆形图案的位置位于第一图像201内圆形图案的位置和第二图像202内圆形图案的位置之间,同理,第三图像203内三角形图案的位置位于第一图像201内三角形图案的位置和第二图像202内三角形图案的位置之间,因此,第三图像203的运动物体可以看作是位于第一图像201和第二图像202内的运动物体的运动轨迹上的,即第三图像203可以看作是第一图像201和第二图像202之间的过渡图像。
然而,发明人在研究中发现,目前在对视频帧执行图像优化处理的时候,往往是对整个视频帧做统一处理,例如,当需要提高视频的清晰度的时候,将整个视频的所有图像均提高清晰度,再例如,对视频帧执行插帧操作的时候,往往是基于整个视频帧做插帧处理,所以,在视频帧中物体矢量运动变化较小的部分和物体矢量运动变化较大的部分,都会统一的处理生成一整帧的新帧数据画面给到GPU进行绘图,即物体矢量运动变化较小的部分和物体矢量运动变化较大的部分都执行插帧操作,对几乎静止的和快速运动的物体全部都进行插帧处理,从而导致功耗增加较大。
为了克服上述缺陷,本申请实施例提供了一种视频处理方法、装置和视频播放器,能够确定视频帧内的需要优化的区域,对该区域执行优化操作,而并非是整个图像均执行优化操作,能够降低终端的功耗。
具体地,在介绍本申请实施例的视频处理方法之前,先介绍本申请的方法的应用环境。如图4所示,电子设备100包括处理器110、屏幕120和视频播放器200。处理器110与视频播放器200连接,视频播放器200与屏幕120连接。该电子设备100可以是智能手机、平板电脑、电子书等能够运行应用程序的电子设备。本申请中的电子设备100还包括存储器以及一个或多个应用程序,其中,一个或多个应用程序可以被存储在存储器中并被配置为由一个或多个处理器110执行,一个或多个程序配置用于执行本申请方法实施例所描述的方法。存储器可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory,ROM)。存储器可用于存储指令、程序、代码、代码集或指令集。存储器可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于实现至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现下述各个方法实施例的指令等。存储数据区还可以存储电子设备在使用中所创建的数据(比如电话本、音视频数据、聊天记录数据)等。
于本申请实施中,处理器110用于执行绘图操作,作为一种实施方式,该处理器110可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器110可集成中央处理器(Central Processing Unit,CPU)、图形处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责显示内容的渲染和绘制;调制解调器用于处理无线通信信号。可以理解的是,上述调制解调器也可以不集成到处理器110中,单独通过一块通信芯片进行实现。
于本申请实施例中,处理器110可以是图形处理器,用于视频的绘图操作,具体地,该绘图操作可以包括基于屏幕的分辨率将视频的矢量图转为位图,则转换后的位图的分辨率与屏幕的分辨率相同,从而视频帧内的每个图像的区域都能够与屏幕的显示区域对应。
视频播放器200用于对视频帧执行图像优化处理,然后,将优化后的视频帧发送至屏幕120。视频播放器200与屏幕120的驱动电路121连接。如图5所示,视频播放器200包括数据处理器210和图像处理模块220,图像处理模块220能够对需要在所对应的显示区域内显示的图像数据执行图像优化处理,作为一种实施方式,该图像处理模块220可以是DSP芯片,也可以是运动补偿(Motion Estimate and Motion Compensation,MEMC)芯片,数据处理器210用于分析图像内的待优化的区域并且控制图像处理模块220执行图像优化操作。
屏幕包括驱动电路121和像素单元30,驱动电路121与像素单元30连接,如图6所示,驱动电路121与屏幕120的数据线301连接,视频播放器200将视频帧的图像数据发送至驱动电路121,驱动电路121产生显示数据并将显示数据通过数据线301发送至各个像素单元30,从而能够控制各个像素单元30的显示内容,进而控制屏幕所显示的内容,栅线302控制各个像素单元30被点亮,从而能够逐行将显示内容显示。
于本申请实施例中,如图7所示,屏幕包括多个显示区域122,每个所述显示区域122对应至少一个所述图像处理模块,具体地,每个图像处理模块用于处理至少一个显示区域122内的显示内容,具体请参阅后续实施例。其中,显示区域122与图像处理模块之间的对应关系可以是数据处理上的一种对接关系,即图像处理模块处理所对应的显示区域内的显示内容,该对应关系可以不是安装位置的对应关系,即图像处理模块可以不安装在显示区域对应的位置处;当然,也可以是图像处理模块安装在显示区域对应的位置处,在此不做限定。
请参阅图8,图8示出了本申请实施例提供的一种视频处理方法,该方法应用于上述电子设备,该电子设备内设置有视频播放器,该方法的执行主体可以是视频播放器,也可以是电子设备内的处理器,例如,可以是电子设备的图形处理器,在此不做限定。具体地,该方法包括:S801至S804。
S801:获取目标视频的第一视频帧内的待优化区域和非待优化区域。
作为一种实施方式,该待优化区域为需要执行本申请实施例中的图像优化处理的区域,非待优化区域为不需要执行本申请实施例中的图像优化处理的区域,需要说明的是,非待优化区域并非是不能够执行优化操作,而是不需要执行本申请的图像优化处理,而依然可以执行本申请的图像优化处理之外的优化操作。
作为一种实施方式,该目标视频的第一视频帧可以是电子设备当前待播放的视频帧,具体地,电子设备已经完成该第一视频帧的前一帧的播放,即将播放该第一视频帧。作为另一种实施方式,该目标视频的第一视频帧也可以是当前待处理的视频帧,具体地,电子设备可以预先处理多个视频帧,从而能够避免由于视频帧的处理时长过长而导致视频播放的时候产生视频卡顿。在一些实施例中,本申请实施例中的图像优化处理或者本申请实施例的视频处理方法可以在离屏渲染缓冲区内执行。
具体地,预先在GPU内设置一个离屏渲染缓冲区。GPU会调用渲染客户端模块对待渲染的多帧图像数据渲染合成之后发送至显示屏上显示,该渲染客户端模块可以是OpenGL模块。OpenGL渲染管线的最终位置是在帧缓冲区中。帧缓冲区是一系列二维的像素存储数组,包括了颜色缓冲区、深度缓冲区、模板缓冲区以及累积缓冲区。默认情况下,OpenGL使用的是窗口系统提供的帧缓冲区。
OpenGL的GL_ARB_framebuffer_object这个扩展提供了一种方式来创建额外的帧缓冲区对象(Frame Buffer Object,FBO)。使用帧缓冲区对象,OpenGL可以将原先绘制到窗口提供的帧缓冲区重定向到FBO之中。
而需要显示的视频帧需要放入帧缓冲区(请参见图1),随后视频控制器会按照HSync信号逐行读取帧缓冲区的数据,经过数模转换传递给显示器显示。因此,在获取到目标视频之后,就将目标视频的多个视频帧放入离屏渲染缓冲区,并且在离屏渲染缓冲区内至少执行图像优化处理,然后,经过图像优化处理之后的视频帧再放入帧缓冲区等待显示,因此,在屏幕刷新频率到来时,能够保证视频帧已经被处理完毕并且被放入帧缓冲区等待显示。
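上述“先在离屏阶段完成优化处理、再放入帧缓冲区等待显示”的流程,可以用如下极简的队列模拟草图示意(仅为本文假设的示意性代码,并非 OpenGL 的实际 API):

```python
from collections import deque

class OffscreenPipeline:
    """用队列模拟“离屏处理后再入帧缓冲区”的流程(示意)。"""
    def __init__(self, optimize):
        self.optimize = optimize        # 离屏阶段执行的图像优化处理
        self.framebuffer = deque()      # 模拟等待显示的帧缓冲区

    def submit(self, frames):
        # 视频帧先在离屏缓冲区内完成优化,再放入帧缓冲区
        for frame in frames:
            self.framebuffer.append(self.optimize(frame))

    def scanout(self):
        # 屏幕刷新频率到来时,帧已处理完毕,可直接读出显示
        return self.framebuffer.popleft()

pipeline = OffscreenPipeline(optimize=lambda f: f + "_optimized")
pipeline.submit(["frame1", "frame2"])
```

这样,在屏幕刷新频率到来时,取出的始终是已处理完毕的视频帧,避免处理时长导致卡顿。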
作为一种实施方式,第一视频帧内的待优化区域可以是指定类型的目标物对应的目标物区域,具体地,通过目标提取或者聚类算法提取出第一视频帧内的所有轮廓线信息,然后再在预先学习的模型中查找到每个轮廓线对应的物体的类别,其中,该类别包括人体、动物、山川、河流、湖面、建筑物、道路等。
例如,当目标物是动物时,可以通过采集目标物的轮廓以及特征信息,例如,耳朵、犄角及四肢。当目标物是人体时,可以通过对目标物进行人脸特征提取,其中,人脸特征提取的方法可以包括基于知识的表征算法或者基于代数特征或统计学习的表征方法。另外,当目标物是湖面或者连绵的山川、草原等宽广的风景的时候,可以判断该目标物是否存在较长的横线,即存在地平线,如果存在地平线则判定为宽广的风景,其中,地平线的检测可以通过轮廓提取方法采集所有的横线条,然后选择比较集中的多个横线条拟合的横线作为地平线,由此就可以检测到宽广的风景。当然,也可以根据颜色来确定目标物是否为风景,例如,当检测到比较集中的一片区域的绿色或者土黄色时,判定为山川或者沙漠,则判定该目标物为宽广的风景。同理,河流、建筑物、道路等其他物体的检测也可以通过上述的检测算法,在此不再赘述。
在识别到第一视频帧内的每个物体的类别之后,将属于指定类别的物体作为目标物,则该目标物在第一视频帧内的位置区域作为待优化区域,其他的区域作为非待优化区域。
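按指定类别划分待优化区域与非待优化区域的步骤,可以用如下Python草图示意(指定类别集合与数据结构均为本文假设):

```python
# 假设的指定类别:属于这些类别的物体所在区域作为待优化区域
SPECIFIED_CLASSES = {"人体", "动物"}

def split_regions(objects):
    """objects 为 (类别, 位置区域) 列表;返回 (待优化区域列表, 非待优化区域列表)。"""
    to_optimize, not_optimize = [], []
    for category, region in objects:
        (to_optimize if category in SPECIFIED_CLASSES else not_optimize).append(region)
    return to_optimize, not_optimize

# 示例:人体区域进入待优化区域,建筑物区域进入非待优化区域
objects = [("人体", (0, 0, 50, 80)), ("建筑物", (60, 0, 120, 80))]
to_opt, not_opt = split_regions(objects)
```

位置区域这里用 (左, 上, 右, 下) 的矩形坐标表示,实际实现中也可以是任意形状的轮廓区域。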
作为另一种实施方式,该待优化区域还可以是图像变化区域,即待优化区域对应的目标物为运动物体,则具体确定图像变化区域的实施方式可以参考后续实施例。
S802:确定所述待优化区域对应的所述屏幕的指定显示区域。
作为一种实施方式,目标视频的分辨率与屏幕的显示分辨率一致,从而视频内的每个图像区域都能够与屏幕的显示区域对应,即第一对应关系,则基于该第一对应关系,就能够确定第一视频帧的待优化区域对应的屏幕的显示区域,作为指定显示区域。
S803:控制所述指定显示区域对应的指定图像处理模块对所述待优化区域内的第一图像数据执行图像优化处理。
作为一种实施方式,可以预先设置屏幕的各个显示区域与图像处理模块的第二对应关系,该第二对应关系内包括每个显示区域的位置信息和所对应的图像处理模块的标识,从而查找该指定显示区域在第二对应关系内对应的图像处理模块,作为指定图像处理模块。
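上述两层对应关系(待优化区域按坐标映射到显示区域,再由显示区域查到图像处理模块)可以用如下草图示意(显示区域划分方式与模块标识均为本文假设):

```python
def region_to_display_areas(region, area_w, area_h):
    """第一对应关系:分辨率一致时,按坐标把待优化区域映射到其覆盖的显示区域编号。"""
    x0, y0, x1, y1 = region            # 待优化区域的像素坐标 (左, 上, 右, 下)
    return {(ax, ay)
            for ax in range(x0 // area_w, (x1 - 1) // area_w + 1)
            for ay in range(y0 // area_h, (y1 - 1) // area_h + 1)}

# 第二对应关系(假设的标识):显示区域位置 -> 图像处理模块
AREA_TO_MODULE = {(0, 0): "module_0", (1, 0): "module_1", (0, 1): "module_2"}

def lookup_modules(areas):
    # 查找指定显示区域在第二对应关系内对应的指定图像处理模块
    return {AREA_TO_MODULE[a] for a in areas if a in AREA_TO_MODULE}

# 示例:一个跨越两个显示区域的待优化区域,对应两个图像处理模块
areas = region_to_display_areas((0, 0, 120, 50), 100, 100)
modules = lookup_modules(areas)
```

一个待优化区域可能跨越多个显示区域,此时指定图像处理模块为多个。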
于本申请实施例中,图像优化处理用于提升图像数据的显示效果,包括但不限于提升显示亮度、提升清晰度、降低画面模糊、提高画面分辨率等。具体地,图像优化处理包括对图像数据的图像参数优化,其中,所述图像参数优化包括曝光度增强、去噪、边缘锐化、对比度增加或饱和度增加的至少一种。其中,曝光度增强用于提高图像的亮度,可以通过图像的直方图,将亮度值较低的区域增加亮度值,另外,也可以通过非线性叠加增加图像亮度;对图像数据去噪用于去除图像的噪声;边缘锐化用于使模糊的图像变得更加清晰;对比度增加用于增强图像的画质,使得图像内的颜色更加鲜明。作为另一种实施方式,图像优化处理可以是图像插帧处理,具体实施方式请参考后续实施例。
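以曝光度增强和对比度增加为例,图像参数优化可以用如下对一维像素序列的示意代码说明(阈值、增益等参数均为本文假设的示意值):

```python
def enhance_exposure(pixels, threshold=128, boost=30):
    # 曝光度增强示意:对亮度值较低的像素提高亮度,高亮区域保持不变
    return [min(255, p + boost) if p < threshold else p for p in pixels]

def increase_contrast(pixels, factor=1.5):
    # 对比度增加示意:以像素均值为中心拉伸像素值,使颜色更加鲜明
    mean = sum(pixels) / len(pixels)
    return [max(0, min(255, round(mean + (p - mean) * factor))) for p in pixels]
```

实际实现中处理对象是二维图像的各颜色通道,这里用一维序列仅示意参数优化的基本运算。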
S804:基于已执行所述图像优化处理后的第一图像数据和所述非待优化区域对应的第二图像数据得到至少一张图像,作为第二视频帧。
其中,所述非待优化区域对应的第二图像数据是未经过图像优化处理的数据。作为一种实施方式,可以直接将第一视频帧内的非待优化区域内的图像数据作为第二图像数据,将已执行所述图像优化处理后的第一图像数据和第二图像数据拼接成第二视频帧,且第二视频帧与第一视频帧的尺寸一致。作为另一种实施方式,可以获取第一视频帧内的非待优化区域内的图像数据作为初始数据,对初始数据处理之后得到第二图像数据,该处理方式可以是对初始数据的参数的更改,其更改方式与图像优化处理方式不同,例如,图像优化处理为图像插帧处理,则对初始数据处理的方式可以为分辨率调整操作,具体在此不作限定。
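将已优化的第一图像数据与未优化的第二图像数据拼接成第二视频帧的过程,可以用如下按掩码合成的草图示意(数据结构为本文假设,用字符代替像素):

```python
def compose_frame(optimized, original, mask):
    """mask 为 True 处取优化后的第一图像数据,其余取第一视频帧原有的第二图像数据。"""
    height, width = len(original), len(original[0])
    return [[optimized[y][x] if mask[y][x] else original[y][x]
             for x in range(width)] for y in range(height)]

frame1    = [["a", "b"], ["c", "d"]]      # 第一视频帧
optimized = [["A", "B"], ["C", "D"]]      # 优化后的图像数据
mask      = [[True, False], [False, True]]  # True 表示待优化区域
frame2 = compose_frame(optimized, frame1, mask)
```

合成结果与第一视频帧尺寸一致,只有待优化区域的内容被替换为优化后的数据。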
作为一种实施方式,各个图像处理模块会将图像数据发送至视频合成模块,由视频合成模块将图像数据合成为第二视频帧。如图9所示,视频播放器还包括视频合成模块230,多个图像处理模块220均与视频合成模块230连接,视频合成模块230与驱动电路121连接。数据处理器210控制所述指定图像处理模块之外的图像处理模块将所述第一视频帧内所述非待优化区域内的第二图像数据发送至所述视频合成模块230,视频合成模块230将每个所述图像处理模块发送的图像数据合成为第二视频帧,并将第二视频帧发送至驱动电路121,驱动电路121驱动屏幕的像素单元显示该第二视频帧。
作为一种实施方式,在获取到第二视频帧之后,将第二视频帧的显示的过程为,依次播放第一视频帧和第二视频帧。具体地,视频帧显示的时候,数据处理器会根据视频帧内的各个区域的显示内容,将各个图像区域的显示内容发送至对应的图像处理模块220,各个图像处理模块220再将图像数据发送至视频合成模块230合成得到最终需要显示的图像数据,即视频帧。具体地,每个图像处理模块220在获取到视频帧的各个图像区域的图像数据之后,可以依据该图像数据是否属于待优化区域来确定是否执行图像优化处理,然后,图像处理模块220可以将图像数据暂存,然后,发送至视频合成模块230合成。在一些实施例中,某个视频帧播放完毕之后,可以将各个图像处理模块220暂存的图像数据清除。
于本申请实施例中,控制第一视频帧的非待优化区域对应的图像处理模块保留图像数据,并且在合成第二视频帧的时候,非待优化区域对应的图像处理模块将保留的图像数据发送至视频合成模块230。以第一视频帧和第二视频帧为例,假设待优化区域对应的图像处理模块命名为第一图像处理模块,非待优化区域对应的图像处理模块命名为第二图像处理模块,假设在显示第一视频帧的时候,未对第一视频帧执行优化处理操作,第一视频帧的优化效果是由第二视频帧来表现的。则在显示第一视频帧的时候,第一图像数据被发送至第一图像处理模块并暂存,第二图像数据被发送至第二图像处理模块并暂存,第一图像处理模块将第一图像数据发送至视频合成模块,第二图像处理模块将第二图像数据发送至视频合成模块,视频合成模块将第一图像数据和第二图像数据合成后显示。
而在显示第二视频帧的时候,第一图像数据需要被优化而第二图像数据不需要被优化,所以,可以控制第二图像处理模块直接将第二图像数据发送至视频合成模块,即第二图像处理模块继续使用第一视频帧的第二图像数据,从而能够避免再次将第二图像数据发送至第二图像处理模块,具体地,可以发送一个保持指令至第二图像处理模块,第二图像处理模块基于保持指令将在显示前一视频帧(即第一视频帧)时的图像数据直接发送至视频合成模块。然后,第一图像处理模块在完成第一图像数据的图像优化处理之后,将已执行所述图像优化处理后的第一图像数据发送至视频合成模块。
因此,本申请实施例能够控制用于显示该待优化区域的图像处理模块执行图像优化处理,而用于显示该非待优化区域的图像处理模块可以不执行图像优化处理,相比对整个第一视频帧进行图像优化处理,能够减少电子设备的功耗。另外,屏幕的显示区域对应多个图像处理模块,并且控制指定显示区域对应的指定图像处理模块对所述待优化区域内的第一图像数据执行图像优化处理,相比使用电子设备的GPU或CPU来对视频帧的整个图像区域执行图像优化处理,能够降低电子设备的功耗。
请参阅图10,图10示出了本申请实施例提供的一种视频处理方法,该方法应用于上述电子设备,该电子设备内设置有视频播放器,该方法的执行主体可以是视频播放器内的数据处理器,也可以是电子设备内的处理器,例如,可以是电子设备的图形处理器,在此不做限定。具体地,该方法包括:S1001至S1004。
S1001:获取目标视频的第一视频帧内的图像变化区域和图像静止区域。
作为一种实施方式,可以基于第一视频帧内的物体的属性信息来确定图像变化区域和图像静止区域。其中,属性信息可以包括动态类别,该动态类别可以包括运动类和静止类,如果物体的动态类别为运动类,则表明该物体属于运动的物体,即在连续的视频帧中,该物体处于运动状态;如果物体的动态类别为静止类,则表明该物体属于静止的物体,即在连续的视频帧中,该物体处于静止状态。另外,需要说明的是,该静止状态可以是物体的运动幅度小于指定幅度,该运动幅度可以根据运动的位移和角度来确定。例如,树木在微风的情况下,运动幅度比较小,可以认为树木处于静止状态;树木在强风的情况下,运动幅度比较大,可以认为树木处于运动状态。
如图11所示,实线框1101标记的建筑物属于静止类,虚线框1102标记的车辆和行人属于运动类。作为一种实施方式,可以通过图像识别模型识别图像内的物体的动态类别,具体地,可以预先获取样本数据,该样本数据包括多个样本图像且每个样本图像内的物体对应有标签,该标签包括第一标签和第二标签,第一标签用于表示物体的动态类别为运动类,第二标签用于表示物体的动态类别为静止类,通过不断地学习,能够识别出图像内运动的物体和静止的物体,如图11所示的图像内,图像识别模型可以识别出运动的车辆以及停靠的车辆,例如,根据车辆在道路上的位置以及道路的交通状态确定车辆是运动车辆还是静止车辆,该图像识别模型还可以识别出静态的行人和动态的行人,例如,根据行人的姿势和位置确定静态的行人和动态的行人。
然后,确定出第一视频帧内的运动类的物体,基于动态类别的物体确定第一视频帧内的图像变化区域,基于静止类的物体确定第一视频帧内的图像静止区域,其中,所述图像变化区域为所述待优化区域,所述图像静止区域为所述非待优化区域。
作为另一种实施方式,还可以根据连续帧来确定第一视频帧内的图像变化区域和图像静止区域。具体地,确定所述目标视频内与所述第一视频帧相邻的视频帧,作为第三视频帧;基于所述第一视频帧和所述第三视频帧确定所述第一视频帧内的图像变化区域,所述第一视频帧内的所述图像变化区域之外的区域作为所述图像静止区域。其中,与所述第一视频帧相邻的视频帧可以是目标视频内所述第一视频帧的前一帧,也可以是目标视频内所述第一视频帧的后一帧。于本申请实施例中,确定所述目标视频内与所述第一视频帧相邻的视频帧作为第三视频帧的实施方式可以是,确定所述目标视频内所述第一视频帧下一帧作为第三视频帧。
具体地,基于连续两帧图像之间计算的图层矢量位移,能够确定第一视频帧内的运动物体,即在第一视频帧之后,位置或角度会发生变化的物体。作为一种实施方式,可以将第一视频帧内的所有运动物体中运动幅度大于指定幅度的物体作为运动物体。如图12所示,第一视频帧1201为第三视频帧1202的前一视频帧,在目标视频的视频播放顺序中,当前待播放的视频帧为第一视频帧1201,下一个要播放的视频帧为第三视频帧1202,基于第一视频帧1201和第三视频帧1202可以确定在第一视频帧中,运动物体为三角形图案,静止物体为圆形图案。
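按位移幅度阈值区分运动物体与静止物体的判断,可以用如下草图示意(物体位置与指定幅度均为本文假设的示意数据,幅度这里仅以位移距离衡量):

```python
import math

def moving_objects(frame1_pos, frame3_pos, min_amplitude=5.0):
    """比较相邻两帧中同一物体的位置,位移幅度大于指定幅度的视为运动物体。"""
    return [name for name, (x1, y1) in frame1_pos.items()
            if math.hypot(frame3_pos[name][0] - x1,
                          frame3_pos[name][1] - y1) > min_amplitude]

frame1_pos = {"三角形": (0, 0), "圆形": (50, 50)}
frame3_pos = {"三角形": (0, 20), "圆形": (51, 50)}
movers = moving_objects(frame1_pos, frame3_pos)
# 三角形位移 20 大于指定幅度,判定为运动物体;圆形位移 1,视为静止
```

实际实现中运动幅度还可以结合转动角度综合判断,这里仅以位移为例。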
另外,在确定了第一视频帧的运动物体之后,例如,上述的运动类的物体或运动幅度大于指定幅度的物体作为第一视频帧的运动物体,将第一视频帧的所有运动物体作为备选物体,基于参考信息来确定指定物体。
作为一种实施方式,该参考信息为用户画像,该用户画像可以包括用户基础标签、用户兴趣偏好标签、用户设备属性及行为标签、用户应用行为标签、用户社交标签和心理价值观标签等。其中,用户基础标签对应用户身份信息,指的是用户基础人口属性标签(包括性别年龄、所在区域等),该标签对应的特征数据为用户身份数据,则该数据的获取方式包括用户上报、算法挖掘等。用户兴趣偏好标签对应用户兴趣信息,用户兴趣偏好标签对应用户的兴趣内容,其获取方式也可以是用户上报、算法挖掘等。用户设备属性标签对应用户所使用的产品的属性信息,其对应的特征数据为用户所使用的产品的配置参数,例如,内存容量、电池容量或屏幕尺寸等,其获取方式可以是用户上报或者通过用户设备内的SDK组件采集。用户设备行为标签对应用户操作移动终端的操作数据,所对应的特征数据为用户操作移动终端所产生的数据,其获取方式可以是通过移动终端的操作系统内的SDK组件收集。用户应用行为标签对应用户操作安装在移动终端内的应用程序的操作数据,所对应的特征数据为用户操作移动终端内安装的应用程序所产生的数据,其获取方式可以是通过移动终端的应用程序内的SDK组件收集。用户社交标签对应用户的社交信息,可以是通过用户在各个社交网站或者社交APP的社交数据而获得,该社交数据可以包括用户的好友数量、被评论的数量、被点赞的数量以及所关注的内容等。心理价值观标签为用户的价值观数据,该价值观数据可以是用户的性格和是非观等,具体地,可以通过获取用户在社交平台上的留言内容而确定,例如,用户对某个观点的评价,能够提取出用户对该观点支持还是不支持的关键词,从而确定用户的是非观。
作为一种实施方式,该参考信息可以是用户兴趣偏好标签,基于该用户兴趣偏好标签由备选物体中选出指定物体,将第一视频帧内该指定物体对应的图像区域作为图像变化区域,其他的图像区域作为图像静止区域。其中,该指定物体为用户感兴趣的物体,即该指定物体与用户兴趣偏好标签匹配。作为又一种实施方式,该参考信息可以是用户设备属性,基于该用户设备属性由备选物体中选出指定物体。具体地,对于一些运动物体由于运动速度过快或者物体比较庞大,对该物体进行图像优化的时候,需要较好的硬件支持。在一些实施例中,该用户设备属性可以包括用户所使用的终端的处理器的运算能力,基于该运算能力由备选物体中选定与该运算能力匹配的物体作为指定物体,其中,与该运算能力匹配为具有该运算能力的处理器能够处理该物体的图像数据并且处理速度不小于指定速度。
作为另一种实施方式,该参考信息可以是用户预先输入的选定目标物。在一些实施例中,用户可以在指定界面内选中一个选定目标物,然后基于该选定目标物在备选物体选定指定物体。具体地,可以是将备选物体中与选定目标物匹配的物体作为指定物体。
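基于参考信息(如用户兴趣偏好标签或选定目标物)由备选物体中确定指定物体的匹配过程,可以用如下草图示意(备选物体与标签数据均为本文假设):

```python
def select_objects(candidates, reference_tags):
    """candidates: {备选物体: 标签集合};返回标签与参考信息有交集的指定物体。"""
    return [obj for obj, tags in candidates.items() if tags & reference_tags]

# 示例:用户兴趣偏好标签为“体育”,则与之匹配的备选物体被选为指定物体
candidates = {"球员": {"体育", "人物"}, "广告牌": {"背景"}}
specified = select_objects(candidates, {"体育"})
```

选出指定物体后,其在第一视频帧内对应的图像区域即作为图像变化区域,其余区域作为图像静止区域。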
在一些实施例中,获取用户在指定界面上输入的触摸手势,确定所述指定界面内目标位置对应的选定目标物,其中,所述目标位置为所述触摸手势对应的位置。其中,该指定界面可以是显示目标视频的指定图像的界面,该目标视频的指定图像可以是目标视频的缩略图,则该指定界面可以是目标视频的详情界面,在该详情界面内显示有目标视频的缩略图以及该目标视频的描述信息,该描述信息可以包括该目标视频的摘要信息以及视频人物列表等,其中,该视频人物列表内包括在目标视频内出现的至少部分人物的身份标识,例如,可以是目标视频的演员。如图13所示,该视频详情界面内包括视频缩略图1301,该视频缩略图1301内显示有多个人物,该人物为在目标视频内会出现的人物,该视频详情界面内还包括视频人物1302,如图13所示,显示有5个视频人物。作为一种实施方式,该身份标识可以是该人物的头像或姓名等身份信息。
在所述指定界面的视频人物列表内,确定所述目标位置对应的身份标识,将所述身份标识对应的人物作为选定目标物,具体地,用户可以在视频缩略图1301选中一个物体,作为选定目标物,例如,当屏幕显示该视频缩略图1301的时候,用户在该视频缩略图1301触摸某个区域,则该区域对应的人物作为选定目标物。另外,还可以是在屏幕所显示的多个视频人物中选中一个视频人物作为选定目标物。
在另一些实施例中,该指定界面可以是视频播放界面,即在视频播放界面内显示目标视频的视频帧,即目标视频当前播放的视频帧,然后,用户在该视频播放界面的图像内选中选定目标物。如图14所示,屏幕上所显示的为目标视频中的一个画面,用户用手指触摸画面中的“公鸡”,则电子设备检测到屏幕被用户触摸,确定用户输入的触摸手势对应的区域所对应的图像内的目标物区域,即为该公鸡对应的目标物区域。则电子设备可以选择将画面重新显示,即将公鸡对应的区域做视频增强处理之后,再重新显示该画面;也可以是在播放下一帧图像时,确定下一帧图像的运动物体中是否包括公鸡,如果包括,则对该公鸡做图像优化处理。
S1002:确定所述图像变化区域对应的所述屏幕的指定显示区域。
S1003:控制所述指定显示区域对应的指定图像处理模块对所述待优化区域内的第一图像数据执行图像插帧处理。
S1004:基于已执行所述图像优化处理后的第一图像数据和所述非待优化区域对应的第二图像数据得到至少一张图像,作为第二视频帧。
下面结合本申请实施例中的视频播放器的硬件图描述本申请的图像插帧过程,具体地,如图15所示,视频播放器包括:依次连接的解码模块240、视频缓存器230、图像分析模块211、控制模块212、图像处理模块220、视频合成模块230以及编码模块250,解码模块240与电子设备的图形处理器400连接,编码模块250与屏幕的驱动电路121连接。其中,视频播放器可以看作是图形处理器400的外挂芯片,即并非属于图形处理器400的芯片。
客户端用于提供目标视频,即客户端发起目标视频的播放请求,图形处理器400用于执行绘图操作,该绘图操作可以是将目标视频的视频帧转换为位图从而得到视频帧的图层,以便后续渲染和图像优化处理等操作。解码模块240设置有MIPI RX接口,用于接收图形处理器400输入的第一视频帧和第三视频帧,解码模块240将第一视频帧和第三视频帧解码,得到第一视频帧和第三视频帧的图像数据。视频缓存器230将第一视频帧和第三视频帧的图像数据缓存。
图像分析模块211基于第一视频帧和第三视频帧的图像数据确定第一视频帧内的图像变化区域和图像静止区域,如图16所示,虚线的三角形和圆形图案表示第三视频帧内的三角形和圆形,将第一视频帧和第三视频帧的图像放在一起可以看出,圆形图案的位置变化很小,即运动幅度小于指定幅度,则可以认为该圆形图案处于静止状态,三角形图案的位置变化比较大,该三角形图案处于运动状态,所确定的图像变化区域1601为图16中虚线矩形框所框选的区域,于本申请实施例中,该图像变化区域可以是第一视频帧内的运动物体的第一位置与第三视频帧内的运动物体的第三位置之间的区域。
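“第一位置与第三位置之间的区域”可以理解为覆盖两个位置的最小矩形,可用如下草图示意(包围盒坐标均为本文假设的示意数据):

```python
def change_region(bbox_frame1, bbox_frame3):
    """图像变化区域:覆盖运动物体在第一视频帧与第三视频帧两个位置的最小矩形。"""
    x0 = min(bbox_frame1[0], bbox_frame3[0])
    y0 = min(bbox_frame1[1], bbox_frame3[1])
    x1 = max(bbox_frame1[2], bbox_frame3[2])
    y1 = max(bbox_frame1[3], bbox_frame3[3])
    return (x0, y0, x1, y1)

# 三角形在两帧中的包围盒 (左, 上, 右, 下),坐标仅为示意
region = change_region((10, 60, 30, 80), (10, 20, 30, 40))
```

该矩形之外的区域即作为图像静止区域,不参与插帧运算。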
控制模块212基于所述图像变化区域的位置信息确定所述图像变化区域对应的所述屏幕的指定显示区域,控制所述指定图像处理模块对所述第一图像数据执行图像优化处理并发送至所述视频合成模块,并控制所述指定图像处理模块之外的图像处理模块将所述第一视频帧内所述图像静止区域内的第二图像数据发送至所述视频合成模块。
作为一种实施方式,在对图像变化区域进行图像优化的时候,每个图像处理模块所对应的显示区域相比图像变化区域更小,且指定图像处理模块为多个图像处理模块,每个图像处理模块处理的图像数据对应的区域更小,能够更加准确识别大小物体边缘。具体地,图像变化区域所对应的是第一视频帧内的运动物体的变化区域,而实际上,运动的变化可能是运动物体的部分位置的变化或者部分区域的变化,例如,运动物体是人物,可能人物只是手指变化或眼睛变化,因此,由于每个图像处理模块所对应的显示区域相比图像变化区域更小,则在一些轮廓线比较密集或者图像内容比较丰富的区域,使用某个图像处理模块单独对某个小区域内的图像数据进行处理,对该区域内的图像数据的矢量运算更加精确,物体边缘过渡和细节更加清晰。如图17和18所示,图17中的粗虚线框所示的区域1701为需要插帧处理的区域,则可以将该区域1701内的图像发送至该区域对应的图像处理模块进行矢量运算且插帧处理,而如果将整个图像作为整体执行插帧处理,则该区域1701由于轮廓线比较复杂,很容易导致识别不准确。
S1005:依次播放所述第一视频帧、所述第二视频帧。
作为一种实施方式,该第二视频帧可以是基于已执行所述图像优化处理后的第一图像数据和所述非待优化区域对应的第二图像数据得到的一张图像,也可以是得到的多张图像。若第二视频帧为多张图像,则该多张图像内的运动物体是基于第一视频帧内的物体的运动轨迹,预测该运动物体沿该运动轨迹运动时的运动位置或转动角度而确定的。例如,第一视频帧内的运动物体是车辆,该车辆的行驶方向为正北方向,则基于该行驶方向依次确定的多张图像中,每个图像内的车辆的位置相比第一视频帧内的车辆均更加靠北,并且多张图像内的车辆的位置依次更靠近北方。
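沿运动轨迹依次生成多张插帧图像中运动物体位置的过程,可以用如下草图示意(坐标系与帧数均为本文假设,y 轴正方向代表正北):

```python
def interpolated_positions(start, end, n):
    """在起止位置之间依次生成 n 个插帧位置,逐帧更靠近终点。"""
    return [(start[0] + (end[0] - start[0]) * k / (n + 1),
             start[1] + (end[1] - start[1]) * k / (n + 1))
            for k in range(1, n + 1)]

# 车辆向正北方向行驶,在两帧之间插入两张图像
positions = interpolated_positions((0, 0), (0, 30), 2)
# positions 为 [(0.0, 10.0), (0.0, 20.0)],车辆位置依次更靠北
```

每个插帧位置对应一张第二视频帧图像,多张图像按位置顺序播放即形成平滑的运动过渡。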
作为一种实施方式,该依次播放所述第一视频帧、所述第二视频帧的实施方式为,依次播放所述第一视频帧、所述第二视频帧和所述第三视频帧。如图19所示,播放的时候,依次播放第一视频帧1201、第二视频帧1801和第三视频帧1202,由图19可以看出,第二视频帧1801中,三角形图案的位置,位于第一视频帧1201内的三角形图案的位置以及第三视频帧1202内的三角形图案的位置之间,由此,通过插帧播放,能够降低播放第一视频帧1201和第三视频帧1202时,该三角形图案的画面模糊程度。
另外,在播放第一视频帧1201、第二视频帧1801和第三视频帧1202内时,图像静止区域对应的图像处理模块内的图像数据保持输出第一视频帧1201内的图像静止区域内的图像数据或者保持输出第三视频帧1202内的图像静止区域内的图像数据。例如,第一视频帧1201、第二视频帧1801和第三视频帧1202内的圆形图案的位置不变,因此,圆形图案的区域对应的图像处理模块内的图像数据保持输出第一视频帧1201内的圆形图案的图像数据或者第三视频帧1202内的圆形图案的图像数据。
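静止区域对应的图像处理模块“暂存并复用上一帧数据”的行为,可以用如下草图示意(类名与接口均为本文假设的示意设计):

```python
class RegionModule:
    """图像处理模块暂存行为示意:静止区域收到保持指令时复用上一帧的数据。"""
    def __init__(self):
        self.cached = None

    def receive(self, data):
        self.cached = data      # 暂存本帧图像数据
        return data

    def hold(self):
        return self.cached      # 保持指令:直接输出保留的数据,无需重新传输

static_module = RegionModule()
static_module.receive("圆形图案数据")   # 播放第一视频帧时接收并暂存
reused = static_module.hold()          # 播放第二、第三视频帧时直接复用
```

这样,静止区域的图像数据无需在每帧之间重复传输,进一步降低功耗。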
综上所述,于本申请实施例中,视频播放器可以分为N个图像处理模块,每个图像处理模块负责画面的不同区域的矢量运动计算和新帧数据的输出,针对静态画面部分不进行硬件插帧算法处理,实现低功耗的插帧技术,同时对准确识别大小物体边缘有较大的帮助,通过每个硬件小模块进行精细化的插帧画面运算,提升总体的插帧显示效果。
请参阅图20,其示出了本申请实施例提供的一种视频处理装置2000的结构框图,该装置应用于电子设备,所述电子设备包括屏幕和多个图像处理模块,所述屏幕包括多个显示区域,每个所述显示区域对应至少一个所述图像处理模块,具体地,视频处理装置2000可以包括:获取单元2001、确定单元2002、优化单元2003和处理单元2004。
获取单元2001,用于获取目标视频的第一视频帧内的待优化区域和非待优化区域。
进一步的,获取单元2001还用于获取目标视频的第一视频帧内的图像变化区域和图像静止区域,其中,所述图像变化区域为所述待优化区域,所述图像静止区域为所述非待优化区域。其中,所述图像优化处理包括图像插帧处理。
进一步的,获取单元2001还用于确定所述目标视频内与所述第一视频帧相邻的视频帧,作为第三视频帧;基于所述第一视频帧和所述第三视频帧确定所述第一视频帧内的图像变化区域,所述第一视频帧内的所述图像变化区域之外的区域作为所述图像静止区域。
进一步的,获取单元2001还用于确定所述目标视频内所述第一视频帧下一帧作为第三视频帧。
确定单元2002,用于确定所述待优化区域对应的所述屏幕的指定显示区域。
优化单元2003,用于控制所述指定显示区域对应的指定图像处理模块对所述待优化区域内的第一图像数据执行图像优化处理。
处理单元2004,用于基于已执行所述图像优化处理后的第一图像数据和所述非待优化区域对应的第二图像数据得到至少一张图像,作为第二视频帧。
进一步的,还包括显示单元,用于依次播放所述第一视频帧、所述第二视频帧,具体地,用于依次播放所述第一视频帧、所述第二视频帧和所述第三视频帧。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述装置和模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,模块相互之间的耦合可以是电性,机械或其它形式的耦合。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
请参考图21,其示出了本申请实施例提供的一种计算机可读存储介质的结构框图。该计算机可读介质2100中存储有程序代码,所述程序代码可被处理器调用执行上述方法实施例中所描述的方法。
计算机可读存储介质2100可以是诸如闪存、EEPROM(电可擦除可编程只读存储器)、EPROM、硬盘或者ROM之类的电子存储器。可选地,计算机可读存储介质2100包括非易失性计算机可读介质(non-transitory computer-readable storage medium)。计算机可读存储介质2100具有执行上述方法中的任何方法步骤的程序代码2110的存储空间。这些程序代码可以从一个或者多个计算机程序产品中读出或者写入到这一个或者多个计算机程序产品中。程序代码2110可以例如以适当形式进行压缩。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不驱使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。
Claims (20)
- 一种视频处理方法,其特征在于,应用于电子设备,所述电子设备包括屏幕和多个图像处理模块,所述屏幕包括多个显示区域,每个所述显示区域对应至少一个所述图像处理模块,所述方法包括:获取目标视频的第一视频帧内的待优化区域和非待优化区域;确定所述待优化区域对应的所述屏幕的指定显示区域;控制所述指定显示区域对应的指定图像处理模块对所述待优化区域内的第一图像数据执行图像优化处理;基于已执行所述图像优化处理后的第一图像数据和所述非待优化区域对应的第二图像数据得到至少一张图像,作为第二视频帧。
- 根据权利要求1所述的方法,其特征在于,获取目标视频的第一视频帧内的待优化区域和非待优化区域,包括:获取目标视频的第一视频帧内的图像变化区域和图像静止区域,其中,所述图像变化区域为所述待优化区域,所述图像静止区域为所述非待优化区域。
- 根据权利要求2所述的方法,其特征在于,所述图像优化处理包括图像插帧处理。
- 根据权利要求2或3所述的方法,其特征在于,所述获取目标视频的第一视频帧内的图像变化区域和图像静止区域,包括:基于图像识别模型获取所述目标视频的第一视频帧内的物体的动态类别,所述动态类别包括运动类和静止类;基于所述动态类别确定所述目标视频的第一视频帧内的图像变化区域和图像静止区域。
- 根据权利要求4所述的方法,其特征在于,所述基于所述动态类别确定所述目标视频的第一视频帧内的图像变化区域和图像静止区域,包括:将所述运动类对应的所述目标视频的第一视频帧内的物体作为所述目标视频的第一视频帧内的图像变化区域;将所述静止类对应的所述目标视频的第一视频帧内的物体作为所述目标视频的第一视频帧内的图像静止区域。
- 根据权利要求2或3所述的方法,其特征在于,所述获取目标视频的第一视频帧内的图像变化区域和图像静止区域,包括:确定所述目标视频内与所述第一视频帧相邻的视频帧,作为第三视频帧;基于所述第一视频帧和所述第三视频帧确定所述第一视频帧内的图像变化区域,所述第一视频帧内的所述图像变化区域之外的区域作为所述图像静止区域。
- 根据权利要求6所述的方法,其特征在于,所述确定所述目标视频内与所述第一视频帧相邻的视频帧,作为第三视频帧,包括:确定所述目标视频内所述第一视频帧下一帧作为第三视频帧。
- 根据权利要求7所述的方法,其特征在于,所述得到第二视频帧之后,还包括:依次播放所述第一视频帧、所述第二视频帧和所述第三视频帧。
- 根据权利要求2所述的方法,其特征在于,所述获取目标视频的第一视频帧内的图像变化区域和图像静止区域,包括:将第一视频帧的所有运动物体作为备选物体;基于参考信息由所述备选物体中确定指定物体;将所述第一视频帧内所述指定物体对应的图像区域作为图像变化区域,其他的图像区域作为图像静止区域。
- 根据权利要求9所述的方法,其特征在于,所述参考信息包括用户画像,所述基于参考信息由所述备选物体中确定指定物体,包括:基于所述用户画像由备选物体中确定指定物体。
- 根据权利要求10所述的方法,其特征在于,所述用户画像包括N个用户标签,所述N为大于或等于1的整数,所述基于所述用户画像由备选物体中确定所述指定物体,包括:基于任意至少一个用户标签由备选物体中确定指定物体。
- 根据权利要求9所述的方法,其特征在于,所述参考信息包括用户预先输入的选定目标物,所述基于参考信息由所述备选物体中确定指定物体,包括:将所述备选物体中与所述选定目标物匹配的物体作为指定物体。
- 根据权利要求12所述的方法,其特征在于,所述将所述备选物体中与所述选定目标物匹配的物体作为指定物体之前,还包括:获取用户在指定界面上输入的触摸手势;确定所述指定界面内目标位置对应的选定目标物,其中,所述目标位置为所述触摸手势对应的位置。
- 根据权利要求13所述的方法,其特征在于,所述指定界面内显示有视频人物列表,所述视频人物列表内包括在所述目标视频内出现的至少部分人物的身份标识;所述确定所述指定界面内目标位置对应的选定目标物,包括:在所述指定界面的视频人物列表内,确定所述目标位置对应的身份标识,将所述身份标识对应的人物作为选定目标物。
- 一种视频处理装置,其特征在于,应用于电子设备,所述电子设备包括屏幕和多个图像处理模块,所述屏幕包括多个显示区域,每个所述显示区域对应至少一个所述图像处理模块,所述视频处理装置包括:获取单元,用于获取目标视频的第一视频帧内的待优化区域和非待优化区域;确定单元,用于确定所述待优化区域对应的所述屏幕的指定显示区域;优化单元,用于控制所述指定显示区域对应的指定图像处理模块对所述待优化区域内的第一图像数据执行图像优化处理;处理单元,用于基于已执行所述图像优化处理后的第一图像数据和所述非待优化区域对应的第二图像数据得到至少一张图像,作为第二视频帧。
- 一种视频播放器,其特征在于,应用于电子设备,所述电子设备包括屏幕,所述视频播放器包括数据处理器和多个图像处理模块,所述屏幕包括多个显示区域,每个所述显示区域对应至少一个所述图像处理模块,所述数据处理器与每个所述图像处理模块连接,所述视频播放器用于执行权利要求1-14任一项所述的方法。
- 根据权利要求16所述的视频播放器,还包括:视频合成模块,每个所述图像处理模块均与所述视频合成模块连接,所述数据处理器还用于:控制所述指定图像处理模块对所述第一图像数据执行图像优化处理并发送至所述视频合成模块;控制所述指定图像处理模块之外的图像处理模块将所述第一视频帧内所述非待优化区域内的第二图像数据发送至所述视频合成模块;所述视频合成模块用于对每个所述指定图像处理模块发送的图像数据合成为第二视频帧。
- 根据权利要求17所述的视频播放器,所述电子设备还包括图形处理器(Graphics Processing Unit,GPU),所述数据处理器包括图像分析模块和控制模块,所述图形处理器、图像分析模块、控制模块和视频合成模块依次连接;所述图形处理器用于对目标视频执行绘图处理,得到目标视频的第一视频帧,并将所述第一视频帧发送至所述图像分析模块;所述图像分析模块用于确定第一视频帧内的待优化区域和非待优化区域,并将所述待优化区域和非待优化区域的位置信息发送至控制模块;所述控制模块用于基于所述待优化区域的位置信息确定所述待优化区域对应的所述屏幕的指定显示区域,控制所述指定图像处理模块对所述第一图像数据执行图像优化处理并发送至所述视频合成模块,并控制所述指定图像处理模块之外的图像处理模块将所述第一视频帧内所述非待优化区域内的第二图像数据发送至所述视频合成模块。
- 一种电子设备,其特征在于,包括:屏幕和权利要求16-18任一项所述视频播放器,所述视频播放器和所述屏幕连接。
- 一种计算机可读介质,其特征在于,所述计算机可读介质存储有处理器可执行的程序代码,所述程序代码被所述处理器执行时使所述处理器执行权利要求1-14任一项所述方法。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110401346.2 | 2021-04-14 | ||
CN202110401346.2A CN113132800B (zh) | 2021-04-14 | 2021-04-14 | 视频处理方法、装置、视频播放器、电子设备及可读介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022218042A1 true WO2022218042A1 (zh) | 2022-10-20 |
Family
ID=76776378
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/078141 WO2022218042A1 (zh) | 2021-04-14 | 2022-02-28 | 视频处理方法、装置、视频播放器、电子设备及可读介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113132800B (zh) |
WO (1) | WO2022218042A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113132800B (zh) * | 2021-04-14 | 2022-09-02 | Oppo广东移动通信有限公司 | 视频处理方法、装置、视频播放器、电子设备及可读介质 |
CN117234320B (zh) * | 2023-11-15 | 2024-02-23 | 深圳市鸿茂元智光电有限公司 | 一种led显示屏节能显示方法、系统和显示屏 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090060278A1 (en) * | 2007-09-04 | 2009-03-05 | Objectvideo, Inc. | Stationary target detection by exploiting changes in background model |
CN106652972A (zh) * | 2017-01-03 | 2017-05-10 | 京东方科技集团股份有限公司 | 显示屏的处理电路、显示方法及显示器件 |
CN109242802A (zh) * | 2018-09-28 | 2019-01-18 | Oppo广东移动通信有限公司 | 图像处理方法、装置、电子设备及计算机可读介质 |
CN109379625A (zh) * | 2018-11-27 | 2019-02-22 | Oppo广东移动通信有限公司 | 视频处理方法、装置、电子设备和计算机可读介质 |
CN109525901A (zh) * | 2018-11-27 | 2019-03-26 | Oppo广东移动通信有限公司 | 视频处理方法、装置、电子设备及计算机可读介质 |
CN110933497A (zh) * | 2019-12-10 | 2020-03-27 | Oppo广东移动通信有限公司 | 视频图像数据插帧处理方法及相关设备 |
US10819983B1 (en) * | 2019-10-01 | 2020-10-27 | Facebook, Inc. | Determining a blurriness score for screen capture videos |
CN113132800A (zh) * | 2021-04-14 | 2021-07-16 | Oppo广东移动通信有限公司 | 视频处理方法、装置、视频播放器、电子设备及可读介质 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5136669B2 (ja) * | 2011-03-18 | 2013-02-06 | カシオ計算機株式会社 | 画像処理装置、画像処理方法及びプログラム |
JP2013029904A (ja) * | 2011-07-27 | 2013-02-07 | Sony Corp | 画像処理装置および画像処理方法 |
US20140002732A1 (en) * | 2012-06-29 | 2014-01-02 | Marat R. Gilmutdinov | Method and system for temporal frame interpolation with static regions excluding |
CN105847728A (zh) * | 2016-04-13 | 2016-08-10 | 腾讯科技(深圳)有限公司 | 一种信息处理方法及终端 |
CN105867867B (zh) * | 2016-04-19 | 2019-04-26 | 京东方科技集团股份有限公司 | 显示控制方法、装置及系统 |
CN109379629A (zh) * | 2018-11-27 | 2019-02-22 | Oppo广东移动通信有限公司 | 视频处理方法、装置、电子设备及存储介质 |
CN109640151A (zh) * | 2018-11-27 | 2019-04-16 | Oppo广东移动通信有限公司 | 视频处理方法、装置、电子设备以及存储介质 |
CN110264473B (zh) * | 2019-06-13 | 2022-01-04 | Oppo广东移动通信有限公司 | 基于多帧图像的图像处理方法、装置及电子设备 |
CN111491208B (zh) * | 2020-04-08 | 2022-10-28 | Oppo广东移动通信有限公司 | 视频处理方法、装置、电子设备及计算机可读介质 |
- 2021-04-14: CN application CN202110401346.2A patent/CN113132800B (active Active)
- 2022-02-28: PCT application PCT/CN2022/078141 patent/WO2022218042A1 (active Application Filing)
Also Published As
Publication number | Publication date |
---|---|
CN113132800B (zh) | 2022-09-02 |
CN113132800A (zh) | 2021-07-16 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22787272; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22787272; Country of ref document: EP; Kind code of ref document: A1 |