WO2020038128A1 - Video processing method and device, electronic device and computer readable medium
- Publication number: WO2020038128A1 (application PCT/CN2019/094442)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- image data
- screen
- client
- video file
- Prior art date: 2018-08-23
Classifications
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- G06T5/00—Image enhancement or restoration
- H04N21/44004—Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
- H04N21/440218—Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
Description
- the present application relates to the technical field of video processing, and more particularly, to a video processing method, device, electronic device, and computer-readable medium.
- The present application proposes a video processing method, apparatus, electronic device, and computer-readable medium to remedy the above drawbacks.
- an embodiment of the present application provides a video processing method, which is applied to an image processor of an electronic device.
- the electronic device further includes a screen.
- The method includes: acquiring multi-frame image data to be rendered corresponding to a video file; storing the multi-frame image data in an off-screen rendering buffer; optimizing the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm; sending the optimized multi-frame image data to a frame buffer corresponding to the screen; and reading the optimized multi-frame image data from the frame buffer and displaying it on the screen.
- an embodiment of the present application further provides a video processing apparatus, which is applied to an image processor of an electronic device, and the electronic device further includes a screen.
- the video processing device includes an acquisition unit, a first storage unit, an optimization unit, a second storage unit, and a display unit.
- An obtaining unit is configured to obtain multi-frame image data to be rendered corresponding to a video file.
- the first storage unit is configured to store the multi-frame image data in an off-screen rendering buffer.
- the optimization unit is configured to optimize the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm.
- the second storage unit is configured to send the optimized multi-frame image data to a frame buffer corresponding to the screen.
- the display unit is configured to read the optimized multi-frame image data from the frame buffer and display the optimized multi-frame image data on the screen.
- An embodiment of the present application further provides an electronic device, including an image processor, a memory, a screen, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the image processor, the one or more programs being configured to perform the above method.
- an embodiment of the present application further provides a computer-readable storage medium.
- the computer-readable storage medium stores program code, and the program code can be called by a processor to execute the foregoing method.
- FIG. 1 shows a block diagram of a video playback architecture provided by an embodiment of the present application
- FIG. 2 shows a block diagram of an image rendering architecture provided by an embodiment of the present application
- FIG. 3 shows a method flowchart of a video processing method according to an embodiment of the present application
- FIG. 4 is a schematic diagram of a video list interface of a client provided in an embodiment of the present application.
- FIG. 5 shows a specific method flowchart of S302 to S305 in the method corresponding to FIG. 3;
- FIG. 6 shows a method flowchart of a video processing method according to another embodiment of the present application.
- FIG. 7 shows a method flowchart of a video processing method according to another embodiment of the present application.
- FIG. 8 is a block diagram of a video playback architecture provided by another embodiment of the present application.
- FIG. 9 shows a module block diagram of a video processing apparatus according to an embodiment of the present application.
- FIG. 10 is a structural block diagram of an electronic device according to an embodiment of the present application.
- FIG. 11 illustrates a storage unit for storing or carrying a program code for implementing a video processing method according to an embodiment of the present application.
- FIG. 1 shows a block diagram of a video playback architecture.
- After the video file is obtained, the next job is to parse its audio and video data.
- A video file is generally composed of two parts: a video stream and an audio stream. Different video container formats use different audio/video packaging formats.
- The process of combining audio and video streams into a single file is called muxing (muxer), while the process of separating audio and video streams from a media file is called demuxing (demuxer).
- Playing a video file requires separating the audio and video streams from the file stream and decoding each of them.
- the decoded video frames can be directly rendered, and the audio frames can be sent to the buffer of the audio output device for playback.
- the timestamps of video rendering and audio playback must be synchronized.
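The embodiments do not name a demuxing library; purely as an illustration, the demuxing step described above can be sketched with the FFmpeg C API, splitting a container's packets into audio and video streams by stream index (error handling simplified):

```cpp
// Hypothetical demuxing sketch using the FFmpeg C API (not named by the patent).
extern "C" {
#include <libavformat/avformat.h>
}

void demux(const char* path) {
    AVFormatContext* fmt = nullptr;
    if (avformat_open_input(&fmt, path, nullptr, nullptr) < 0) return;
    avformat_find_stream_info(fmt, nullptr);

    // Locate the audio and video streams inside the container.
    int video_idx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    int audio_idx = av_find_best_stream(fmt, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);

    AVPacket pkt;
    while (av_read_frame(fmt, &pkt) >= 0) {      // demuxing: one packet at a time
        if (pkt.stream_index == video_idx) {
            // send packet to the video decoder (hard or soft decoding)
        } else if (pkt.stream_index == audio_idx) {
            // send packet to the audio decoder / audio output buffer
        }
        av_packet_unref(&pkt);
    }
    avformat_close_input(&fmt);
}
```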
- video decoding may include hard decoding and soft decoding.
- Hard decoding hands part of the video data that would otherwise be processed entirely by the central processing unit (CPU) over to the graphics processing unit (GPU) for processing.
- The GPU's parallel computing capability is much higher than the CPU's, so hard decoding can greatly reduce the CPU load; with CPU usage low, other programs can run at the same time. Of course, on better processors, such as an i5 2320 or any AMD quad-core processor, the choice between hard and soft decoding is largely a matter of personal preference.
- The Media Framework obtains the video file to be played from the client through an API and delivers it to the video decoder (Video Decode).
- The Multimedia Framework is the multimedia framework in the Android system; MediaPlayer, MediaPlayerService, and Stagefrightplayer constitute the basic framework of Android multimedia.
- The multimedia framework adopts a client/server (C/S) structure: MediaPlayer acts as the client of the C/S structure, while MediaPlayerService and Stagefrightplayer serve as the server, responsible for playing multimedia files and responding to client requests.
- Video Decode is a super decoder integrating the most commonly used audio and video decoding and playback functions; it is used to decode the video data.
- Soft decoding means the CPU decodes the video in software; after decoding, the GPU is called to composite and render the video and display it on the screen. Hard decoding means the video decoding task is completed independently by dedicated hardware (e.g., a daughter-card device) without burdening the CPU.
- the decoded video data will be sent to SurfaceFlinger, and SurfaceFlinger will render and synthesize the decoded video data and display it on the display.
- SurfaceFlinger is an independent service that receives all the Surface of the Window as input, calculates the position of each Surface in the final composite image according to the parameters of ZOrder, transparency, size, position, etc., and then sends it to HWComposer or OpenGL to generate the final Display buffer, and then display to a specific display device.
- In soft decoding, the CPU decodes the video data and passes it to SurfaceFlinger for rendering and compositing; in hard decoding, the video data decoded by the GPU is likewise passed to SurfaceFlinger for rendering and compositing.
- the SurfaceFlinger will call the GPU to render and composite the image, and display it on the display.
- the image rendering process is shown in Figure 2.
- The CPU obtains the video file to be played sent by the client, decodes it to obtain the video data, and sends the video data to the GPU.
- After rendering, the rendering result is placed in the frame buffer (such as the FrameBuffer in FIG. 2); the video controller then reads the data of the frame buffer line by line according to the HSync signal and passes it to the display through digital-to-analog conversion.
- an embodiment of the present application provides a video processing method.
- the method is applied to an image processor of an electronic device to improve the image quality effect during video playback. Specifically, refer to FIG. 3.
- the video processing method shown below includes: S301 to S305.
- S301 Obtain multi-frame image data to be rendered corresponding to a video file.
- Specifically, the electronic device obtains the video file to be played and decodes it; either of the above-mentioned soft decoding or hard decoding can be used to decode the video file.
- After decoding, the multi-frame image data to be rendered corresponding to the video file is obtained; the multi-frame image data must then be rendered before it can be displayed on the screen.
- The electronic device includes a central processing unit and an image processing unit; a specific implementation of acquiring the multi-frame image data to be rendered corresponding to a video file is described below.
- As one implementation, the central processing unit acquires the video file to be played that is sent by a client.
- the central processing unit obtains a video playback request sent by the client.
- the video playback request includes a video file to be played.
- the video playback request may include identity information of the video file to be played.
- The identity information may be the name of the video file; based on the identity information, the video file can be found in the storage space where it is stored.
- The video playback request can be generated based on the touch state of the play buttons corresponding to different video files on the client's interface.
- Specifically, the client's video list interface displays the display content corresponding to multiple videos, as shown in FIG. 4.
- the display content corresponding to multiple videos includes a thumbnail corresponding to each video.
- the thumbnail can be used as a touch button.
- The client can detect which thumbnail the user clicks and thereby determine the video file to be played.
- When the user selects a video from the video list, the client responds by entering the video playback interface, and the user clicks the play button on the playback interface.
- By monitoring the user's touch operations, the client can detect the video file the user currently clicks; the client then sends the video file to the CPU, and the CPU selects hard decoding or soft decoding to decode the video file.
- the central processing unit obtains a video file to be played, and processes the video file according to a soft decoding algorithm to obtain multi-frame image data corresponding to the video file.
- the image processor obtains the multi-frame image data corresponding to the video file and stores the multi-frame image data in the off-screen rendering buffer.
- The specific implementation may be: intercepting the multi-frame image data corresponding to the video file that the central processor sends to the frame buffer, and storing the intercepted multi-frame image data in the off-screen rendering buffer.
- a program plug-in may be provided in the image processor, and the program plug-in detects a video file to be rendered sent by the central processor to the image processor.
- the central processing unit decodes the video file to obtain the image data to be rendered, the image data to be rendered is sent to the GPU, and then intercepted by the program plug-in and stored in the off-screen rendering buffer.
- S302 Store the multi-frame image data in an off-screen rendering buffer.
- an off-screen rendering buffer is set in the GPU in advance.
- After obtaining the multi-frame image data to be rendered, the GPU calls the rendering client module to render and composite the multi-frame image data and send it to the display screen for display; the rendering client module may be an OpenGL module.
- The final stage of the OpenGL rendering pipeline is the frame buffer.
- A frame buffer is a collection of two-dimensional pixel storage arrays, including the color buffer, depth buffer, stencil buffer, and accumulation buffer.
- By default, OpenGL uses the frame buffer provided by the window system.
- OpenGL's GL_ARB_framebuffer_object extension provides a way to create additional FrameBuffer Objects (FBOs). Using the frame buffer object, OpenGL can redirect the frame buffer originally drawn to the window to the FBO.
- The off-screen rendering buffer may be a storage space corresponding to the image processor; that is, the off-screen rendering buffer itself has no space for storing images but is mapped to a storage space in the image processor, so the image is actually stored in the storage space in the image processor corresponding to the off-screen rendering buffer.
- the multi-frame image data can be stored in the off-screen rendering buffer, that is, the multi-frame image data can be found in the off-screen rendering buffer.
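As a minimal sketch of setting up such an off-screen buffer with the GL_ARB_framebuffer_object mechanism described above (width and height are assumed to be the frame dimensions; an OpenGL context is assumed to exist):

```cpp
// Minimal FBO setup: redirect rendering from the window's frame buffer
// to an off-screen buffer backed by a texture.
GLuint fbo = 0, tex = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);       // subsequent draws go off-screen

// The FBO has no storage of its own; a texture provides the actual memory.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Bind the texture to the FBO's color attachment.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle incomplete framebuffer
}
```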
- the optimization of the multi-frame image data may include adding a new special effect in the image data, for example, adding a special effect layer to the image data to achieve the effect of the special effect.
- optimizing the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm includes: optimizing image parameters of the multi-frame image data in the off-screen rendering buffer.
- the image parameter optimization includes at least one of enhancement of exposure, denoising, edge sharpening, increase of contrast, or increase of saturation.
- Specifically, the decoded image data is in RGBA format and needs to be converted to HSV format: a histogram of the image data is obtained, histogram statistics yield the parameters for converting the RGBA-format data to HSV format, and the RGBA-format data is converted to HSV format according to those parameters.
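The conversion formula itself is not reproduced in the text; the standard per-pixel RGB-to-HSV conversion it refers to can be sketched as follows (channel values assumed normalized to [0,1]; the alpha channel passes through unchanged):

```cpp
#include <algorithm>
#include <cmath>

// Standard RGB -> HSV conversion for one pixel, channels in [0,1].
// H in degrees [0,360), S and V in [0,1].
void rgbToHsv(float r, float g, float b, float& h, float& s, float& v) {
    float mx = std::max({r, g, b});
    float mn = std::min({r, g, b});
    float d  = mx - mn;
    v = mx;
    s = (mx > 0.0f) ? d / mx : 0.0f;
    if (d == 0.0f)       h = 0.0f;                           // gray: hue undefined
    else if (mx == r)    h = 60.0f * std::fmod((g - b) / d, 6.0f);
    else if (mx == g)    h = 60.0f * ((b - r) / d + 2.0f);
    else                 h = 60.0f * ((r - g) / d + 4.0f);
    if (h < 0.0f) h += 360.0f;
}
```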
- For exposure enhancement, the histogram of the image is used to identify regions where the brightness value is too low, and the brightness value of those regions is increased; the brightness of the image can also be increased through non-linear superposition.
- In the non-linear superposition formula (not reproduced here), T (the output) and I (the input image) both take values in [0,1]; if the effect of a single pass is not good, the algorithm can be iterated multiple times.
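The patent's own superposition formula is not shown in the text; as a hedged illustration only, one commonly used non-linear superposition with the stated properties (T and I in [0,1], iterable when one pass is not enough) is T = I*(2 - I):

```cpp
// Illustrative non-linear brightness superposition. This particular formula
// is an assumption, not the patent's (unreproduced) formula.
// I is a luminance value in [0,1]; each pass brightens dark values most.
float enhanceExposure(float I, int iterations) {
    float T = I;
    for (int i = 0; i < iterations; ++i) {
        T = T * (2.0f - T);   // maps [0,1] -> [0,1], monotonically brightening
    }
    return T;
}
```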
- Image denoising is used to remove noise from the image data.
- During acquisition and transmission, images are often degraded by the interference and influence of various kinds of noise, such as electrical noise, mechanical noise, and channel noise, which adversely affects subsequent image processing and the visual quality of the image.
- Therefore, in order to suppress noise, improve image quality, and facilitate higher-level processing, the image must undergo denoising pre-processing. From the perspective of its probability distribution, noise can be divided into Gaussian noise, Rayleigh noise, gamma noise, exponential noise, and uniform noise.
- For example, the image can be denoised with a Gaussian filter, a linear filter that effectively suppresses noise and smooths the image. Its working principle is similar to that of the mean filter: the mean of the pixels in the filter window is taken as the output.
- The window template coefficients differ, however: the template coefficients of the mean filter are all the same, while the template coefficients of the Gaussian filter decrease as the distance from the template center increases. Therefore, the Gaussian filter blurs the image less than the mean filter does.
- Specifically, a 5×5 Gaussian filter window is generated, with the center of the template taken as the coordinate origin for sampling.
- The coordinates of each position of the template are substituted into the Gaussian function, and the values obtained are the coefficients of the template.
- Convolving the Gaussian filter window with the image then denoises the image.
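A sketch of the template generation just described, with the smoothing parameter sigma left as a free choice (an assumption, since the embodiments do not fix it):

```cpp
#include <cmath>

// Build a normalized 5x5 Gaussian template with the center as the origin,
// as described above: sample G(x,y) = exp(-(x^2+y^2)/(2*sigma^2)) at each
// template position, then normalize so the coefficients sum to 1.
void gaussianKernel5x5(float sigma, float k[5][5]) {
    float sum = 0.0f;
    for (int y = -2; y <= 2; ++y)
        for (int x = -2; x <= 2; ++x) {
            k[y + 2][x + 2] = std::exp(-(x * x + y * y) / (2.0f * sigma * sigma));
            sum += k[y + 2][x + 2];
        }
    for (int y = 0; y < 5; ++y)          // normalization preserves brightness
        for (int x = 0; x < 5; ++x)
            k[y][x] /= sum;
}
```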
- edge sharpening is used to make blurred images clearer.
- There are two methods of image sharpening: the differential method and the high-pass filtering method.
- Contrast stretching is an image enhancement method that belongs to the grayscale transformation operations.
- Through grayscale transformation, the gray values are stretched to the entire 0-255 range, which greatly enhances the contrast. The following formula can be used:
- I(x, y) = [(I(x, y) - Imin) / (Imax - Imin)] × (MAX - MIN) + MIN
- where Imin and Imax are the minimum and maximum gray values of the original image, and MIN and MAX are the minimum and maximum gray values of the gray space to be stretched to.
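A direct per-pixel implementation of this stretching formula, assuming 8-bit gray values and a caller-chosen target range [MIN, MAX]:

```cpp
#include <cstdint>
#include <vector>
#include <algorithm>

// Contrast stretching: map gray values from [Imin, Imax] (measured from the
// image) onto the target range [MIN, MAX], e.g. [0, 255].
void contrastStretch(std::vector<uint8_t>& gray, int MIN, int MAX) {
    auto [lo, hi] = std::minmax_element(gray.begin(), gray.end());
    int Imin = *lo, Imax = *hi;
    if (Imax == Imin) return;            // flat image: nothing to stretch
    for (uint8_t& p : gray) {
        p = static_cast<uint8_t>(
            (p - Imin) * (MAX - MIN) / (Imax - Imin) + MIN);
    }
}
```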
- the video enhancement algorithm can increase the image quality.
- the corresponding video enhancement algorithm can be selected based on the video file.
- Before the multi-frame image data in the off-screen rendering buffer is optimized according to the preset video enhancement algorithm, the method further includes: obtaining a video type corresponding to the video file, and determining the video enhancement algorithm based on the video type.
- Specifically, a preset number of images from the video file are acquired as an image sample; all objects in each image of the sample are analyzed, so the proportion of each object in the image sample can be determined.
- Objects may include animals, people, food, and so on; the category of each image can be determined from the determined proportions of each object, and thus the category of the video file, where image categories include a person category, an animal category, a food category, a landscape category, and the like.
- the video enhancement algorithm corresponding to the video file is determined according to the correspondence between the video type and the video enhancement algorithm.
- The video enhancement algorithm may include at least one of exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase.
- Depending on the video type, the corresponding combination of exposure enhancement, denoising, edge sharpening, contrast increase, and saturation increase differs, for example, as shown in Table 1.
- Based on this correspondence, the video enhancement algorithm corresponding to the video file can be determined.
- S304 Send the optimized multi-frame image data to a frame buffer corresponding to the screen.
- the frame buffer corresponds to the screen and is used to store data to be displayed on the screen, such as the Framebuffer shown in FIG. 2.
- The Framebuffer is a driver interface in the operating system kernel. Taking the Android system as an example: Linux works in protected mode, so user-mode processes cannot, as in DOS systems, use interrupt calls provided by the graphics card BIOS to write data directly to the screen. Linux therefore abstracts the Framebuffer device so that user processes can write data directly and have it displayed on the screen.
- the Framebuffer mechanism mimics the functions of a graphics card, and can directly operate on the video memory by reading and writing Framebuffer. Specifically, the framebuffer can be regarded as an image of the display memory, and after it is mapped to the process address space, read and write operations can be performed directly, and the written data can be displayed on the screen.
- the frame buffer can be regarded as a space for storing data.
- The CPU or GPU puts the data to be displayed into the frame buffer; the Framebuffer itself has no computing capability.
- The video controller reads the data in the frame buffer according to the screen refresh rate and displays it on the screen.
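As an illustration of this mechanism, on Linux the Framebuffer device is conventionally exposed as /dev/fb0 and memory-mapped into the process address space; a minimal sketch (ignoring stride/line-length details):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/fb.h>
#include <cstdint>
#include <cstring>

int main() {
    int fd = open("/dev/fb0", O_RDWR);           // the Framebuffer device
    if (fd < 0) return 1;

    fb_var_screeninfo info{};
    ioctl(fd, FBIOGET_VSCREENINFO, &info);       // resolution, bits per pixel
    size_t size = info.yres_virtual * info.xres_virtual * info.bits_per_pixel / 8;

    // Map the frame buffer into this process's address space; writes here
    // appear directly on the screen.
    auto* fb = static_cast<uint8_t*>(
        mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (fb != MAP_FAILED) {
        memset(fb, 0xFF, size);                  // fill the screen with white
        munmap(fb, size);
    }
    close(fd);
    return 0;
}
```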
- S305 Read the optimized multi-frame image data from the frame buffer and display it on the screen.
- After the optimized multi-frame image data is stored in the frame buffer, the image processor detects that data has been written into the frame buffer, reads the optimized multi-frame image data from the frame buffer, and displays it on the screen.
- the image processor reads the optimized multi-frame image data frame by frame from the frame buffer according to the refresh frequency of the screen, and displays the optimized multi-frame image data on the screen after rendering and synthesis processing.
- Referring to FIG. 5, this method is a further elaboration of S302 to S305 of the method corresponding to FIG. 3, and includes S501 to S516.
- FBO can be regarded as the above-mentioned off-screen rendering buffer.
- The GPU's video memory contains a vertex cache, an index cache, a texture cache, and a stencil cache; the texture cache is the storage space for texture data. Since the FBO has no real storage space of its own, a new temporary texture is created and bound to the FBO, establishing a mapping between the temporary texture and the FBO. Because the temporary texture, as a variable, occupies a certain amount of storage space in video memory, the actual storage space of the FBO is that of the temporary texture; in this way, a certain amount of video memory can be allocated for the FBO.
- the rendering object is the multi-frame image data to be rendered corresponding to the video file.
- The multi-frame image data can be stored in the FBO through a rendering object: the rendering object is used as a variable to which the multi-frame image data is assigned, and binding the rendering object to the FBO stores the multi-frame image data corresponding to the video file in the off-screen rendering buffer.
- For example, a handle pointing to the rendering object may be set.
- Since the multi-frame image data to be rendered corresponding to the video file is stored in the storage space corresponding to the rendering object, the multi-frame image data is written to the FBO by mapping rather than actually stored in the FBO; therefore, clearing the FBO does not delete the multi-frame image data.
- A shader is shader code (including vertex shaders, fragment shaders, etc.).
- A shader program is the engine (program) responsible for executing the shader; it performs the operations specified by the shader code.
- the HQV algorithm is the video enhancement algorithm described above.
- The video enhancement algorithm is bound to the shader program, which defines how to execute it; that is, the specific execution process of the algorithm is written into the shader program so that the GPU can execute the video enhancement algorithm.
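A minimal sketch of binding an enhancement step to a shader program. The embodiments do not disclose the HQV shader code, so a simple saturation boost stands in for it here; the vertex shader supplying vUV is assumed to exist:

```cpp
// Compile a fragment shader carrying one enhancement step (here a simple
// saturation boost as a placeholder) and link it into a shader program.
static const char* fragSrc = R"(
#version 300 es
precision mediump float;
uniform sampler2D uFrame;      // the decoded video frame (temporary texture)
uniform float uSaturation;     // enhancement parameter ("feature data")
in vec2 vUV;
out vec4 outColor;
void main() {
    vec4 c = texture(uFrame, vUV);
    float gray = dot(c.rgb, vec3(0.299, 0.587, 0.114));
    outColor = vec4(mix(vec3(gray), c.rgb, uSaturation), c.a);
})";

GLuint buildEnhanceProgram(GLuint vertexShader) {
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragSrc, nullptr);
    glCompileShader(fs);                      // the shader: enhancement code

    GLuint prog = glCreateProgram();          // the shader program: executes it
    glAttachShader(prog, vertexShader);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    return prog;
}
```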
- S505 Determine whether the optimization is performed for the first time.
- Each optimization performed on the video file is recorded; for example, a count variable is set, and 1 is added to it each time the optimization is performed. Determining whether the optimization is performed for the first time means determining whether the video enhancement algorithm is used to optimize the image data of the video file for the first time: if yes, execute S506; if not, execute S507.
- an initial texture is also set. Specifically, the initial texture is used as a variable for inputting data into the temporary texture, and the content of the temporary texture is directly mapped into the FBO.
- the initial texture and the temporary texture are both used as data storage variables.
- the feature data corresponding to the video enhancement algorithm is written into the data texture object, where the data texture object is the temporary texture.
- the video enhancement algorithm is assigned to the initial texture, and the feature data corresponding to the video enhancement algorithm is passed to the temporary texture by the initial texture.
- the initial texture is assigned to the temporary texture.
- the feature data corresponding to the algorithm are the parameters of the video enhancement algorithm, for example, the values of various parameters of the median filtering in denoising.
- the feature data corresponding to the video enhancement algorithm is convolved with the multi-frame image data to be rendered to optimize the multi-frame image data to be rendered.
- By rendering the rendering object and the data texture object, the multi-frame image data in the off-screen rendering buffer is optimized; that is, a render-to-texture (RTT) operation is performed.
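The render-to-texture pass itself might then look like the following sketch, where drawFullscreenQuad() is a hypothetical helper that issues a screen-covering quad:

```cpp
// One render-to-texture (RTT) pass: draw the frame through the enhancement
// shader into the FBO-backed texture instead of the on-screen frame buffer.
void renderToTexture(GLuint fbo, GLuint program, GLuint frameTex,
                     int width, int height, float saturation) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);    // target: off-screen buffer
    glViewport(0, 0, width, height);

    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, frameTex);    // input: decoded frame data
    glUniform1i(glGetUniformLocation(program, "uFrame"), 0);
    glUniform1f(glGetUniformLocation(program, "uSaturation"), saturation);

    drawFullscreenQuad();                      // hypothetical helper

    glBindFramebuffer(GL_FRAMEBUFFER, 0);      // back to the default buffer
}
```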
- the rendering object at this time has been optimized by the video enhancement algorithm, that is, the rendering object is optimized multi-frame image data.
- the optimized multi-frame image data is sent to the Framebuffer for storage.
- The drawing texture is a texture used to draw an image; it stores effect parameters used to add effects to the image data, such as shadows and the like.
- The render-to-texture operation is performed again, except that the rendering object in this step is the optimized multi-frame image data and the texture object is the drawing texture.
- The screen is then controlled to display the image data.
- FIG. 6 illustrates a video processing method provided by an embodiment of the present application. The method includes: S601 to S607.
- S601 Obtain a video playback request sent by a client, where the video playback request includes the video file.
- the client requesting video playback is determined, so as to obtain the identity of the client.
- the client is a client installed in an electronic device and has a video playback function.
- the client has an icon on the system desktop.
- the user can click the client's icon to open the client.
- The system can identify the package name of the application the user clicks.
- The package name of the video application can be obtained from the code in the system background; its format is, for example, com.android.video.
- The preset standard may be a standard set by a user according to actual usage requirements; for example, the name of the client may need to fall within a certain category, the installation time of the client may need to be within a preset time period, or the client's developer may need to belong to a preset list. Different preset standards can be set for different application scenarios.
- If the client meets the preset criteria, the video it plays has lower definition or a smaller file size and does not require a high screen refresh rate, so the refresh rate of the screen can be reduced.
- For a client meeting the preset criteria, the target refresh rate of the screen is the preset frequency.
- The electronic device obtains the refresh rate of the current screen: if it is greater than the preset frequency, it is reduced to the preset frequency; if it equals the preset frequency, it is kept unchanged; and if it is less than the preset frequency, it is increased to the preset frequency.
- If the client does not meet the preset criteria, the size relationship between the current screen refresh rate and a default frequency is determined; if the current refresh rate is less than the default frequency, it is increased to the default frequency, where the default frequency is greater than the preset frequency.
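The branching just described can be summarized in a short sketch; getRefreshRate() and setRefreshRate() are hypothetical platform hooks, not a real API:

```cpp
// Sketch of the refresh-rate adjustment described above.
// presetHz < defaultHz, per the description.
void adjustRefreshRate(bool meetsPresetCriteria, int presetHz, int defaultHz) {
    int current = getRefreshRate();              // hypothetical platform hook
    if (meetsPresetCriteria) {
        if (current > presetHz)      setRefreshRate(presetHz); // lower to preset
        else if (current < presetHz) setRefreshRate(presetHz); // raise to preset
        // current == presetHz: keep unchanged
    } else if (current < defaultHz) {
        setRefreshRate(defaultHz);               // raise to the default frequency
    }
}
```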
- As one implementation, reducing the refresh rate of the screen is done as follows: obtain the identity of the client; determine whether the identity of the client matches a preset identity; and if so, reduce the refresh rate of the screen.
- The identity information of the client may be the name or the package name of the client, and preset identifiers are stored in the electronic device in advance.
- The preset identifiers include the identity identifiers of multiple preset clients; the videos played by these preset clients have small file sizes or low resolution and do not require a high screen refresh rate, so the power consumption of the electronic device can be reduced by lowering the refresh rate.
- a specific implementation manner of reducing the refresh frequency of the screen is: obtaining a category of the client, and determining whether the category of the client is a preset category, and if it is , Then reduce the refresh frequency of the screen.
- the preset category may be a category set by a user according to requirements, for example, it may be a self-media video client.
- A self-media video client plays videos with smaller file sizes or lower resolution than a movie-playing client or a game client, so it is necessary to determine whether the client is such a self-media video client.
- the type of the client is determined according to the identity, where the identity of the client may be the package name, name, etc. of the client.
- the correspondence between the identification of the client and the type of the client is stored in the electronic device in advance, as shown in Table 2 below:
- the type of the client corresponding to the video file can be determined.
- the category of the client may be a category set by the developer of the client for the client when it is opened, or a category set by the user for the client after the client is installed on the electronic device. For example, when a user installs a client on an electronic device, after the installation is completed and the client is entered, a dialog box is displayed instructing the user to set a category for the client.
- the specific category to which the client belongs can be set by the user according to requirements. For example, the user can set a social software as an audio category, or a video category, or a social category.
- client installation software is installed in the electronic device.
- a client list is set in the client installation software, in which the user can download the client and can update and open the client, and the client installation software can display different clients according to categories, such as audio Category, video category, or game category. Therefore, when the user uses the client installation software to install the client, the user can already know the category of the client.
- If the client supports video playback, its type is set to the video type; if it supports only audio playback, its type is set to the audio type.
- Whether the client supports video playback can be determined from the client's function description information, for example, from the supported playback formats, or by detecting whether the client's program modules include a video playback module, such as a codec algorithm for playing video.
- The category of the client can also be determined from the client's usage history; that is, according to the usage record of the client within a certain period of time, it is determined whether the user tends to play video or audio.
- As one implementation, the operation behavior data of all users of the client within a preset time period is obtained, where all users means all users who have installed the client; the operation behavior data may then be obtained from a server corresponding to the client.
- Specifically, when a user logs in to the client with the user's account, the operation behavior data corresponding to that account is sent to the server corresponding to the client, and the server stores the obtained operation behavior data in association with the user account.
- the electronic device sends an operation behavior query request for the client to a server corresponding to the client, and the server sends the operation behavior data of all users within a preset time period to the electronic device.
- the operation behavior data includes the name and time of the audio file being played and the name and time of the video file being played.
- The number of audio and video files played by the client and their total playing times can also be obtained, and the type of the client is then determined from the proportions of total audio and video playing time within the predetermined time period. Specifically, the proportion of the total playing time of the audio files within the predetermined time period is recorded as the audio playing proportion, and the proportion of the total playing time of the video files as the video playing proportion.
- If the video playing proportion is greater than the audio playing proportion, the client's type is set to the video type; if the audio playing proportion is greater than the video playing proportion, the client's type is set to the audio type. For example, if the preset time period is 30 days (720 hours), the total playing time of the audio files is 200 hours, giving an audio playing proportion of 27.8%, and the total playing time of the video files is 330 hours, giving a video playing proportion of 45.8%, then the video playing proportion is greater than the audio playing proportion and the client's type is set to the video type.
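The worked example above reduces to a small calculation, sketched here:

```cpp
#include <string>

// Worked example from the description: over a 720-hour window, 200 hours of
// audio (27.8%) versus 330 hours of video (45.8%) classifies the client as video.
std::string classifyClient(double audioHours, double videoHours, double periodHours) {
    double audioShare = audioHours / periodHours;
    double videoShare = videoHours / periodHours;
    return (videoShare > audioShare) ? "video type" : "audio type";
}
```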
- Alternatively, the electronic device sends a category query request for the client to the server; the server determines the audio playing proportion and the video playing proportion from the operation behavior data corresponding to the client obtained in advance, and determines the type of the client from the size relationship between the two proportions. For details, reference may be made to the foregoing description.
- In this way, the clarity and type of the video the client plays most of the time can be determined, and thus whether the client is a self-media video client; if so, it is determined that the identity of the client meets the preset identity.
- S603 Obtain multi-frame image data to be rendered corresponding to the video file.
- S604 Store the multi-frame image data in an off-screen rendering buffer.
- S605 Optimize the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm.
- S606 Send the optimized multi-frame image data to a frame buffer corresponding to the screen.
- S607 Read the optimized multi-frame image data frame by frame in the frame buffer based on the refresh frequency of the screen, and display it on the screen after rendering and synthesis processing.
- Specifically, the video controller in the GPU reads the optimized multi-frame image data frame by frame from the frame buffer according to the screen refresh rate and, after rendering and synthesis processing, displays it on the screen.
- The refresh rate of the screen can be regarded as a clock signal: at each cycle, the optimized multi-frame image data is read frame by frame from the frame buffer and, after rendering and synthesis processing, shown on the display.
- Using off-screen rendering instead of on-screen rendering avoids optimizing the image data directly in the frame buffer; with on-screen rendering, the video controller might read the video data from the frame buffer at the refresh rate and display it on the screen before the optimization is complete.
- It should be noted that steps S601 and S602 are not limited to being performed before S603; they may also be performed after S607, that is, the video can first be played at the current screen refresh rate, and the current screen refresh rate adjusted afterwards.
- For parts of the foregoing steps not described in detail, reference may be made to the foregoing embodiments; details are not repeated here.
- FIG. 7 illustrates a video processing method provided by an embodiment of the present application.
- the method includes: S701 to S706.
- S701 Obtain multi-frame image data to be rendered corresponding to a video file.
- S702 Determine whether the video file meets a preset condition.
- the preset condition is a condition set by the user according to actual use.
- For example, the preset condition may concern the category of the video file: if the category of the video file is a preset category, it is determined that the video file meets the preset condition. For the manner of determining the category of the video file, refer to the foregoing embodiments.
- The method of the present application optimizes the video file for video enhancement; because a new buffer is set up outside the frame buffer, displaying the video before it has been enhanced can be avoided. This process places certain demands on the real-time performance of video playback, so whether to execute the video enhancement algorithm can be decided according to real-time requirements. Specifically, the real-time level corresponding to the video file is determined, and it is judged whether that level satisfies a preset level; if so, S703 is performed, otherwise the method ends.
- a real-time level of the video file is determined.
- the identifier of the client corresponding to the video file is determined, and then the real-time level of the video file is determined according to the identifier of the client.
- the identifier of the client that sends the playback request of the video file is determined, and the type of the client corresponding to the identifier of the client is determined.
- the real-time level corresponding to the video file is determined according to the type of the client. Specifically, the real-time level corresponding to the type of the client is stored in the electronic device, as shown in Table 3 below:
- the real-time level corresponding to the video file can be determined. For example, if the identifier of the client corresponding to the video file is Apk4, the corresponding category is social, and the corresponding real-time level is J1. Among them, J1 ranks highest, followed by J2 and J3.
- the preset level is a preset real-time level corresponding to the required video enhancement algorithm, and may be set by a user according to requirements.
- For example, if the preset level is J2 and below and the real-time level corresponding to the video file is J3, the real-time level of the video file meets the preset level. That is, for video files with high real-time requirements, the video enhancement algorithm may be skipped, to avoid video enhancement delaying playback and harming the user experience.
- S703 Store the multi-frame image data in an off-screen rendering buffer.
- Furthermore, an operation for determining whether the multi-frame image data needs to be stored in the off-screen rendering buffer may be added based on the user watching the video.
- Specifically, the electronic device is provided with a camera disposed on the same side of the device as the screen; the person image captured by the camera is acquired, and if the person image is determined to meet a preset person standard, the multi-frame image data is stored in the off-screen rendering buffer.
- The operation of determining whether the person image meets the preset person standard may replace step S702; in other embodiments, it may be combined with step S702 above.
- Specifically, it may first be determined whether the person image meets the preset person standard and then, if it does, whether the video file meets the preset condition; if both are met, the multi-frame image data is stored in the off-screen rendering buffer.
- Alternatively, it may first be determined whether the video file meets the preset condition and then, if it does, whether the person image meets the preset person standard; if both are met, the multi-frame image data is stored in the off-screen rendering buffer.
- As one implementation, a face image may be extracted from the person image, the identity information corresponding to the face image determined, and the identity information matched against preset identity information; if they match, it is determined that the person image meets the preset person standard.
- the preset identity information is pre-stored identity information, and the identity information is an identifier for distinguishing different users.
- the face image is analyzed to obtain feature information, where the feature information may be facial features or facial contours, etc., and identity information is determined based on the feature information.
- the age stage of the user may also be determined based on the face image.
- Specifically, face recognition is performed on the obtained face image information to identify the facial features of the current user.
- The face image is preprocessed, that is, the position of the face is accurately marked in the image, and the contour, skin color, texture, and color characteristics of the face are detected; useful information is then selected from these facial features according to different pattern features, such as histogram features, color features, template features, structural features, and Haar features, and the age of the current user is analyzed.
- knowledge-based representation methods or algebraic features or statistical learning-based representation methods are used to model features of certain faces.
- The age stages can include a child stage, a juvenile stage, a youth stage, a middle-aged stage, an old-age stage, and so on; alternatively, starting from the age of 10, every 10 years can be taken as one age group, or only two age groups can be used, namely the elderly stage and the non-elderly stage.
- The requirements for video enhancement may differ at each age stage; for example, at some age stages the demand for the display effect of the video is not high.
- the scope of the preset stage may be the youth stage and the middle-aged stage, that is, the child stage, the juvenile stage, and the senior stage may not require enhanced processing of the video.
- S704 Optimize the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm.
- S705 Send the optimized multi-frame image data to a frame buffer corresponding to the screen.
- S706 Read the optimized multi-frame image data from the frame buffer and display it on the screen.
- As shown in FIG. 8, an HQV algorithm module is added to the GPU; this module is used to execute the video processing method of the present application.
- After soft decoding, the image data to be rendered, on its way to SurfaceFlinger, is intercepted and optimized by the HQV algorithm module and then sent to SurfaceFlinger for rendering and subsequent display on the screen.
- FIG. 9 shows a structural block diagram of a video processing apparatus 800 according to an embodiment of the present application.
- The apparatus may include: an obtaining unit 901, a first storage unit 902, an optimization unit 903, a second storage unit 904, and a display unit 905.
- the obtaining unit 901 is configured to obtain multi-frame image data to be rendered corresponding to a video file.
- the first storage unit 902 is configured to store the multi-frame image data in an off-screen rendering buffer.
- the optimization unit 903 is configured to optimize the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm.
- the second storage unit 904 is configured to send the optimized multi-frame image data to a frame buffer corresponding to the screen.
- the display unit 905 is configured to read the optimized multi-frame image data from the frame buffer and display the optimized multi-frame image data on the screen.
- the coupling between the modules may be electrical, mechanical, or other forms of coupling.
- each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist separately physically, or two or more modules may be integrated into one module.
- the above integrated modules may be implemented in the form of hardware or software functional modules.
- FIG. 10 is a structural block diagram of an electronic device according to an embodiment of the present application.
- the electronic device 100 may be an electronic device capable of running a client, such as a smart phone, a tablet computer, or an e-book.
- The electronic device 100 in this application may include one or more of the following components: a processor 110, a memory 120, a screen 140, and one or more clients, where the one or more clients are stored in the memory 120 and configured to be executed by the one or more processors 110; the one or more programs are configured to perform the method described in the foregoing method embodiments.
- the processor 110 may include one or more processing cores.
- The processor 110 uses various interfaces and lines to connect the parts of the entire electronic device 100, and performs the various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120.
- The processor 110 may be implemented in hardware using at least one of a digital signal processor (DSP), a field-programmable gate array (FPGA), or a programmable logic array (PLA).
- The processor 110 may include one or a combination of a central processing unit 111 (CPU), an image processor 112 (graphics processing unit, GPU), and a modem.
- The CPU mainly handles the operating system, user interface, and clients;
- the GPU is responsible for rendering and drawing the displayed content;
- the modem is used for wireless communication. It can be understood that the modem may not be integrated into the processor 110 and may instead be implemented by a communication chip alone.
- The memory 120 may include random access memory (RAM) and may also include read-only memory (ROM).
- the memory 120 may be used to store instructions, programs, codes, code sets, or instruction sets.
- the memory 120 may include a storage program area and a storage data area, where the storage program area may store instructions for implementing an operating system and instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.) , Instructions for implementing the following method embodiments, and the like.
- The storage data area may also store data created by the electronic device 100 during use (such as phonebooks, audio and video data, and chat history data).
- The screen 140 is used to display information input by the user, information provided to the user, and the various graphical user interfaces of the electronic device; these graphical user interfaces may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, a touch screen may be disposed on the display panel so as to be integrated with the display panel.
- FIG. 11 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
- the computer-readable medium 1100 stores program code, and the program code can be called by a processor to execute a method described in the foregoing method embodiment.
- the computer-readable storage medium 1100 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM.
- the computer-readable storage medium 1100 includes a non-transitory computer-readable storage medium.
- The computer-readable storage medium 1100 has storage space for program code 1111 that performs any of the method steps of the above methods. The program code can be read from or written into one or more computer program products.
- The program code 1111 may, for example, be compressed in a suitable form.
Abstract
The present application relates to the technical field of video processing, and disclosed thereby are a video processing method and device, an electronic device and a computer readable medium. The method comprises: acquiring multiple frames of image data to be rendered that correspond to a video file; storing the multiple frames of image data in an off-screen rendering buffer; optimizing the multiple frames of image data in the off-screen rendering buffer according to a preset video enhancement algorithm; transmitting the optimized multiple frames of image data to a frame buffer corresponding to a screen; and reading the optimized multiple frames of image data from the frame buffer and displaying the same on the screen. Therefore, the image quality when a video file is played back may be improved by optimizing the video file in a separate buffer, thus improving the user experience.
Description
Cross-reference to related applications
This application claims priority to Chinese patent application No. CN201810969497.6, filed with the Chinese Patent Office on August 23, 2018 and entitled "Video Processing Method, Apparatus, Electronic Device, and Computer-readable Medium", the entire contents of which are incorporated herein by reference.
本申请涉及视频处理技术领域,更具体地,涉及一种视频处理方法、装置、电子设备及计算机可读介质。The present application relates to the technical field of video processing, and more particularly, to a video processing method, device, electronic device, and computer-readable medium.
随着电子技术和信息技术的发展,越来越多的设备能够播放视频。设备在视频播放的过程中,需要对视频执行解码、渲染以及合成等操作,再在显示屏上显示,但是,现有的视频播放技术中,所播放的视频的画质效果已经无法满足用户的需求,导致用户体验较差。With the development of electronic technology and information technology, more and more devices can play video. During the video playback process, the device needs to perform operations such as decoding, rendering, and compositing the video, and then display it on the display screen. However, in the existing video playback technology, the picture quality effect of the played video can no longer meet the user's requirements. Demand, resulting in poor user experience.
Summary of the Invention
The present application provides a video processing method and apparatus, an electronic device, and a computer-readable medium to remedy the above defect.
In a first aspect, an embodiment of the present application provides a video processing method applied to an image processor of an electronic device, where the electronic device further includes a screen. The method includes: acquiring multiple frames of image data to be rendered corresponding to a video file; storing the multiple frames of image data in an off-screen rendering buffer; optimizing the multiple frames of image data in the off-screen rendering buffer according to a preset video enhancement algorithm; sending the optimized multiple frames of image data to a frame buffer corresponding to the screen; and reading the optimized multiple frames of image data from the frame buffer and displaying them on the screen.
In a second aspect, an embodiment of the present application further provides a video processing apparatus applied to an image processor of an electronic device, where the electronic device further includes a screen. The video processing apparatus includes an acquiring unit, a first storage unit, an optimization unit, a second storage unit, and a display unit. The acquiring unit is configured to acquire multiple frames of image data to be rendered corresponding to a video file. The first storage unit is configured to store the multiple frames of image data in an off-screen rendering buffer. The optimization unit is configured to optimize the multiple frames of image data in the off-screen rendering buffer according to a preset video enhancement algorithm. The second storage unit is configured to send the optimized multiple frames of image data to a frame buffer corresponding to the screen. The display unit is configured to read the optimized multiple frames of image data from the frame buffer and display them on the screen.
In a third aspect, an embodiment of the present application further provides an electronic device, including an image processor, a memory, a screen, and one or more clients, where the one or more clients are stored in the memory and configured to be executed by the image processor, and the one or more programs are configured to perform the above method.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing program code that can be called by a processor to execute the above method.
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a block diagram of a video playback architecture provided by an embodiment of the present application;
FIG. 2 shows a block diagram of an image rendering architecture provided by an embodiment of the present application;
FIG. 3 shows a flowchart of a video processing method provided by an embodiment of the present application;
FIG. 4 shows a schematic diagram of a video list interface of a client provided by an embodiment of the present application;
FIG. 5 shows a detailed flowchart of S302 to S305 of the method corresponding to FIG. 3;
FIG. 6 shows a flowchart of a video processing method provided by another embodiment of the present application;
FIG. 7 shows a flowchart of a video processing method provided by yet another embodiment of the present application;
FIG. 8 shows a block diagram of a video playback architecture provided by another embodiment of the present application;
FIG. 9 shows a module block diagram of a video processing apparatus provided by an embodiment of the present application;
FIG. 10 shows a structural block diagram of an electronic device provided by an embodiment of the present application;
FIG. 11 shows a storage unit for storing or carrying program code that implements a video processing method according to an embodiment of the present application.
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings.
Please refer to FIG. 1, which shows a block diagram of a video playback architecture. Specifically, once the operating system has obtained the data to be played, its next job is to parse the audio and video data. A typical video file consists of two parts, a video stream and an audio stream, and different video formats use different audio/video container formats. The process of combining an audio stream and a video stream into a file is called muxing; conversely, the process of separating the audio stream and the video stream from a media file is called demuxing. Playing a video file therefore requires separating the audio stream and the video stream from the file stream and decoding each of them. Decoded video frames can be rendered directly, and audio frames can be sent to the buffer of the audio output device for playback; of course, the timestamps of video rendering and audio playback must be kept synchronized.
Specifically, video decoding can include hard decoding and soft decoding. Hard decoding hands part of the video data that would otherwise be processed entirely by the central processing unit (CPU) over to the graphics processing unit (GPU), whose parallel computing capability is far higher than that of the CPU. This greatly reduces the load on the CPU, and once CPU usage is lower, other programs can run at the same time. Of course, on a better processor, such as an i5 2320 or any AMD quad-core processor, the choice between hard and soft decoding is largely a matter of personal preference.
Specifically, as shown in FIG. 1, the Media Framework obtains the video file to be played by the client through an API interface with the client and hands it to the video decoder (Video Decode). Here, the Media Framework is the multimedia framework of the Android system; MediaPlayer, MediaPlayerService, and Stagefrightplayer form the basic framework of Android multimedia. The multimedia framework adopts a client/server (C/S) structure: MediaPlayer acts as the client, while MediaPlayerService and Stagefrightplayer act as the server and are responsible for playing multimedia files. Through Stagefrightplayer, the server completes and responds to the client's requests. Video Decode is a super decoder that integrates the most commonly used audio and video decoding and playback, and is used to decode the video data.
Soft decoding means having the CPU decode the video through software and, after decoding, calling the GPU to render and merge the video before displaying it on the screen. Hard decoding means completing the video decoding task independently through a dedicated daughter-card device without relying on the CPU.
Whether decoding is hard or soft, after the video data has been decoded, the decoded video data is sent to SurfaceFlinger, which renders and composes it before it is shown on the display. SurfaceFlinger is an independent service; it receives the Surfaces of all Windows as input, computes the position of each Surface in the final composite image according to parameters such as ZOrder, transparency, size, and position, and then hands the result to HWComposer or OpenGL to generate the final display buffer, which is then shown on the specific display device.
As shown in FIG. 1, in soft decoding the CPU decodes the video data and hands it to SurfaceFlinger for rendering and composition; in hard decoding the data is decoded by the GPU and then handed to SurfaceFlinger for rendering and composition. SurfaceFlinger calls the GPU to render and compose the image and show it on the display.
Specifically, the image rendering process is shown in FIG. 2. The CPU obtains the video file to be played that was sent by the client, decodes it to obtain the decoded video data, and sends the video data to the GPU. After rendering is complete, the GPU places the rendering result into the frame buffer (FrameBuffer in FIG. 2). The video controller then reads the data in the frame buffer line by line according to the HSync signal and passes it, after digital-to-analog conversion, to the display for showing.
However, the picture quality of video played with existing video playback is poor, and the inventors found the reason to be the lack of enhancement optimization of the video data. Therefore, to solve this technical problem, an embodiment of the present application provides a video processing method applied to an image processor of an electronic device, for improving the picture quality during video playback. Specifically, please refer to the video processing method shown in FIG. 3, which includes S301 to S305.
S301: Acquire multiple frames of image data to be rendered corresponding to a video file.
Specifically, when a client of the electronic device plays a video, the electronic device can obtain the video file to be played and then decode it. Specifically, the above-mentioned soft decoding or hard decoding can be used to decode the video file. After decoding, the multiple frames of image data to be rendered corresponding to the video file can be obtained; the multiple frames of image data then need to be rendered before they can be shown on the display screen.
Specifically, the electronic device includes a central processor and an image processor. In a specific implementation of acquiring the multiple frames of image data to be rendered corresponding to the video file, the central processor obtains the video file to be played that was sent by the client. As one implementation, the central processor obtains a video playback request sent by the client, where the video playback request includes the video file to be played. Specifically, the video playback request may include identity information of the video file to be played; the identity information may be the name of the video file, and based on this identity information the video file can be found in the storage space where it is stored.
Specifically, the video playback request can be obtained from the touch state of the play buttons corresponding to different video files on the client's interface. Specifically, as shown in FIG. 4, the client's video list interface displays the content corresponding to multiple videos, and this displayed content includes a thumbnail for each video. Each thumbnail can act as a touch button: when the user taps a thumbnail, the client can detect which thumbnail was tapped and thus determine the video file to be played.
The client responds to the video the user selects in the video list and enters the playback interface for that video. When the play button of the playback interface is tapped, the client can detect which video file the user has currently selected by listening for the user's touch operations. The client then sends the video file to the CPU, which selects hard decoding or soft decoding to decode it.
In this embodiment of the present application, the central processor obtains the video file to be played and processes the video file according to a soft decoding algorithm to obtain the multiple frames of image data corresponding to the video file.
A specific implementation in which the image processor obtains the multiple frames of image data corresponding to the video file and stores them in the off-screen rendering buffer may be: intercepting the multiple frames of image data corresponding to the video file that the central processor sends to the frame buffer, and storing the intercepted multiple frames of image data in the off-screen rendering buffer.
Specifically, a program plug-in may be provided in the image processor, and the plug-in detects the video file to be rendered that the central processor sends to the image processor. When the central processor decodes the video file to obtain the image data to be rendered, it sends the image data to be rendered to the GPU, where it is intercepted by the program plug-in and stored in the off-screen rendering buffer.
S302: Store the multiple frames of image data in an off-screen rendering buffer.
As one implementation, an off-screen rendering buffer is set up in the GPU in advance. Specifically, the GPU calls a rendering client module to render and compose the multiple frames of image data to be rendered and send them to the display screen; the rendering client module may be an OpenGL module. The final destination of the OpenGL rendering pipeline is the frame buffer. The frame buffer is a series of two-dimensional pixel storage arrays, including the color buffer, the depth buffer, the stencil buffer, and the accumulation buffer. By default, OpenGL uses the frame buffer provided by the window system.
OpenGL's GL_ARB_framebuffer_object extension provides a way to create an additional frame buffer object (FBO). Using a frame buffer object, OpenGL can redirect rendering that would originally go to the window-provided frame buffer into the FBO.
A further buffer outside the frame buffer, namely the off-screen rendering buffer, is thus set up through the FBO, and the acquired multiple frames of image data are stored in it. Specifically, the off-screen rendering buffer may be a storage space corresponding to the image processor; that is, the off-screen rendering buffer itself has no space for storing images but is mapped to a storage space in the image processor, and the images are actually stored in that storage space corresponding to the off-screen rendering buffer.
By binding the multiple frames of image data to the off-screen rendering buffer, the multiple frames of image data can be stored in the off-screen rendering buffer; that is, the multiple frames of image data can be found in the off-screen rendering buffer.
S303: Optimize the multiple frames of image data in the off-screen rendering buffer according to a preset video enhancement algorithm.
As one implementation, the optimization of the multiple frames of image data may include adding new effects to the image data, for example adding an effect layer to the image data to achieve a special effect.
As another implementation, optimizing the multiple frames of image data in the off-screen rendering buffer according to the preset video enhancement algorithm includes optimizing image parameters of the multiple frames of image data in the off-screen rendering buffer, where the image parameter optimization includes at least one of exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase.
Specifically, since the decoded image data is in RGBA format, it needs to be converted to HSV format in order to be optimized. Specifically, the histogram of the image data is obtained, statistics are computed on the histogram to obtain the parameters for converting the RGBA-format data to HSV format, and the RGBA-format data is then converted to HSV format according to these parameters.
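As an illustration of this color-space step, a minimal per-pixel RGB-to-HSV conversion is sketched below in C++ using the standard textbook formula (the function name is hypothetical, and the histogram-derived conversion parameters described above are omitted):

```cpp
#include <algorithm>
#include <cmath>

// Convert one pixel from RGB (each channel in [0,1]) to HSV.
// h is returned in degrees [0, 360); s and v are in [0,1].
void rgbToHsv(float r, float g, float b, float& h, float& s, float& v) {
    float maxc = std::max({r, g, b});
    float minc = std::min({r, g, b});
    float delta = maxc - minc;
    v = maxc;                                   // value: brightest channel
    s = (maxc == 0.0f) ? 0.0f : delta / maxc;   // saturation
    if (delta == 0.0f) {
        h = 0.0f;                               // achromatic: hue undefined
    } else if (maxc == r) {
        h = 60.0f * std::fmod((g - b) / delta, 6.0f);
    } else if (maxc == g) {
        h = 60.0f * ((b - r) / delta + 2.0f);
    } else {
        h = 60.0f * ((r - g) / delta + 4.0f);
    }
    if (h < 0.0f) h += 360.0f;                  // wrap negative hues
}
```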
Exposure enhancement is used to increase the brightness of the image. The brightness of regions with low brightness values can be raised using the image's histogram; alternatively, the image brightness can be increased by non-linear superposition. Specifically, if I denotes the darker image to be processed and T the brighter processed image, the exposure enhancement takes the form T(x) = I(x) + (1 - I(x)) * I(x), where both T and I take values in [0,1]. If a single pass is not effective enough, the algorithm can be iterated multiple times.
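A minimal sketch of this brightening curve is given below, assuming per-channel pixel values already normalized to [0,1] (the function name and flat buffer layout are illustrative):

```cpp
// Apply T(x) = I(x) + (1 - I(x)) * I(x) in place, optionally iterating
// several times when one pass does not brighten the image enough.
void enhanceExposure(float* pixels, int count, int iterations) {
    for (int it = 0; it < iterations; ++it) {
        for (int i = 0; i < count; ++i) {
            float v = pixels[i];
            pixels[i] = v + (1.0f - v) * v;   // dark values rise the most
        }
    }
}
```

Because T(x) - I(x) = (1 - I(x)) * I(x) is largest for mid-range values and zero at 0 and 1, the curve brightens shadows and midtones without clipping highlights.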
Denoising the image data is used to remove image noise. Specifically, images are often degraded during generation and transmission by interference from various kinds of noise, which adversely affects subsequent image processing and the visual effect of the image. There are many kinds of noise, such as electrical noise, mechanical noise, channel noise, and other noise. Therefore, in order to suppress noise, improve image quality, and facilitate higher-level processing, the image must be denoised as pre-processing. In terms of the probability distribution of the noise, it can be divided into Gaussian noise, Rayleigh noise, gamma noise, exponential noise, and uniform noise.
Specifically, the image can be denoised with a Gaussian filter, which is a linear filter that can effectively suppress noise and smooth the image. Its working principle is similar to that of a mean filter: both take the mean of the pixels in the filter window as the output. Its window template coefficients, however, differ from those of a mean filter: the template coefficients of a mean filter are all equal to 1, whereas the template coefficients of a Gaussian filter decrease as the distance from the template center increases. Therefore, a Gaussian filter blurs the image less than a mean filter does.
For example, a 5×5 Gaussian filter window is generated, sampling with the center of the template as the coordinate origin. The coordinates of each position of the template are substituted into the Gaussian function, and the resulting values are the template coefficients. Convolving this Gaussian filter window with the image then denoises the image.
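A sketch of generating such a kernel, with the template center as the origin, is shown below (radius 2 yields the 5×5 window described above; names are illustrative):

```cpp
#include <cmath>

// Fill a (2*radius+1) x (2*radius+1) Gaussian template sampled at the
// template coordinates, then normalize so the coefficients sum to 1.
void buildGaussianKernel(float* kernel, int radius, float sigma) {
    int size = 2 * radius + 1;
    float sum = 0.0f;
    for (int y = -radius; y <= radius; ++y) {
        for (int x = -radius; x <= radius; ++x) {
            float w = std::exp(-(x * x + y * y) / (2.0f * sigma * sigma));
            kernel[(y + radius) * size + (x + radius)] = w;
            sum += w;
        }
    }
    for (int i = 0; i < size * size; ++i) {
        kernel[i] /= sum;   // normalization preserves overall brightness
    }
}
```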
Edge sharpening is used to make a blurred image clearer. There are generally two methods for image sharpening: one is the differential method, and the other is high-pass filtering.
Contrast increase is used to enhance the image quality so that the colors in the image become more vivid. Specifically, contrast stretching is one method of image enhancement and belongs to the grayscale transformation operations. Through grayscale transformation, the gray values are stretched to the whole 0-255 range, which clearly increases the contrast substantially. The following formula can be used to map the gray value of a pixel to a larger gray space:
I(x, y) = [(I(x, y) - Imin) / (Imax - Imin)] * (MAX - MIN) + MIN;
where Imin and Imax are the minimum and maximum gray values of the original image, and MIN and MAX are the minimum and maximum gray values of the gray space to be stretched to.
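A direct sketch of this stretch on an 8-bit grayscale buffer follows (function and parameter names are illustrative):

```cpp
// Stretch gray values from the measured [Imin, Imax] to [outMin, outMax],
// e.g. the full 0-255 range, per the formula above.
void stretchContrast(unsigned char* gray, int count, int outMin, int outMax) {
    unsigned char imin = 255, imax = 0;
    for (int i = 0; i < count; ++i) {           // measure Imin and Imax
        if (gray[i] < imin) imin = gray[i];
        if (gray[i] > imax) imax = gray[i];
    }
    if (imax == imin) return;                   // flat image: nothing to map
    for (int i = 0; i < count; ++i) {
        gray[i] = static_cast<unsigned char>(
            (gray[i] - imin) * (outMax - outMin) / (imax - imin) + outMin);
    }
}
```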
The video enhancement algorithm can increase the picture quality of the image. In addition, the corresponding video enhancement algorithm can be selected based on the video file. Specifically, before optimizing the multiple frames of image data in the off-screen rendering buffer according to the preset video enhancement algorithm, the method further includes: obtaining the video type corresponding to the video file; and determining the video enhancement algorithm based on the video type.
Specifically, a preset number of images in the video file are acquired as image samples, and all objects in each image of the samples are analyzed; in this way the proportion of each object in the image samples can be determined. Specifically, the objects may include animals, people, food, and so on. The category of the images, and thus the category of the video file, can be determined based on the determined proportion of each object, where the image categories include people, animals, food, scenery, and the like.
The video enhancement algorithm corresponding to the video file is then determined according to the correspondence between video types and video enhancement algorithms. Specifically, the video enhancement algorithm may include at least one of exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase, and different types of video correspond to different combinations of these operations, for example, as shown in Table 1:
Table 1
According to the correspondence shown in Table 1, the video enhancement algorithm corresponding to the video file can be determined.
S304: Send the optimized multiple frames of image data to the frame buffer corresponding to the screen.
The frame buffer corresponds to the screen and is used to store the data that needs to be shown on the screen, such as the Framebuffer shown in FIG. 2. The Framebuffer is a driver interface in the operating system kernel. Taking the Android system as an example, Linux works in protected mode, so a user-space process cannot, as in a DOS system, use the interrupt calls provided by the graphics card BIOS to write data directly and show it on the screen; instead, Linux abstracts the Framebuffer device so that user processes can write data directly and have it shown on the screen. The Framebuffer mechanism imitates the functions of a graphics card, and the video memory can be operated on directly by reading and writing the Framebuffer. Specifically, the Framebuffer can be regarded as an image of the display memory: after it is mapped into the process address space, read and write operations can be performed directly, and the written data is shown on the screen.
The frame buffer can thus be regarded as a space for storing data: the CPU or GPU puts the data to be displayed into the frame buffer, while the Framebuffer itself has no capability to compute on the data; the video controller reads the data in the Framebuffer according to the screen refresh frequency and shows it on the screen.
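A classic user-space sketch of this mechanism on Linux is shown below (a minimal illustration only; on Android, applications normally draw through SurfaceFlinger rather than opening /dev/fb0 directly):

```cpp
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

// Map the framebuffer device into the process address space; pixels
// written through 'fb' appear directly on the screen.
int main() {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return 1;

    fb_var_screeninfo info;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &info) < 0) return 1;

    size_t size = static_cast<size_t>(info.yres_virtual) *
                  info.xres_virtual * info.bits_per_pixel / 8;
    auto* fb = static_cast<unsigned char*>(
        mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (fb == MAP_FAILED) return 1;

    // ... write decoded (and optimized) pixel data into fb here ...

    munmap(fb, size);
    close(fd);
    return 0;
}
```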
S305: Read the optimized multiple frames of image data from the frame buffer and display them on the screen.
Specifically, after the optimized multiple frames of image data have been stored in the frame buffer, once the image processor detects that data has been written into the frame buffer, it reads the optimized multiple frames of image data from the frame buffer and displays them on the screen.
As one implementation, the image processor reads the optimized multiple frames of image data frame by frame from the frame buffer according to the refresh frequency of the screen, and displays them on the screen after rendering and composition.
A specific implementation of this video processing method based on the FBO mechanism of the Android system is described below, as shown in FIG. 5. Specifically, this method is a further elaboration of S302 to S305 of the method corresponding to FIG. 3 and includes S501 to S516.
S501: Create a new temporary texture and bind it to the FBO.
Here, the FBO can be regarded as the off-screen rendering buffer described above.
The GPU's video memory holds the vertex cache, index cache, texture cache, and stencil cache, where the texture cache is the storage space used for texture data. Since the FBO has no real storage space of its own, a new temporary texture is created and bound to the FBO, establishing a mapping between the temporary texture and the FBO. Because the temporary texture, as a variable, has a certain amount of storage space in video memory, the actual storage space of the FBO is the storage space of the temporary texture. In this way, a certain amount of video memory can be allocated for the FBO.
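A minimal OpenGL ES style sketch of this step follows (width and height are assumed to match the video frames; error handling is reduced to a status check):

```cpp
// Create the FBO and a temporary texture, allocate texture storage, and
// attach the texture to the FBO so the FBO gains real backing memory.
GLuint fbo = 0, tempTex = 0;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &tempTex);

glBindTexture(GL_TEXTURE_2D, tempTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);   // storage only, no data
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tempTex, 0);  // bind texture to FBO

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // the off-screen buffer is unusable; fall back to on-screen rendering
}
```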
S502: Bind the rendering object to the FBO.
The rendering object is the multiple frames of image data to be rendered corresponding to the video file. Specifically, the multiple frames of image data can be stored into the FBO through the rendering object: the rendering object can serve as a variable, the multiple frames of image data are assigned to the rendering object, and the rendering object is then bound to the FBO, so that the multiple frames of image data to be rendered corresponding to the video file are stored in the off-screen rendering buffer. For example, a handle pointing to the multiple frames of image data can be set in the FBO, and that handle can be the rendering object.
S503: Clear the FBO.
Before rendering, the old data in the FBO needs to be cleared, including the color buffer, the depth buffer, and the stencil buffer. It should be noted that, because the multiple frames of image data to be rendered corresponding to the video file are stored in the storage space corresponding to the rendering object, and the multiple frames of image data are written into the FBO by mapping rather than actually stored in the FBO, clearing the FBO does not delete the multiple frames of image data.
S504: Bind the HQV algorithm to the shader program.
A shader is shader code (including vertex shaders, fragment shaders, and so on). A shader program is the engine (program) responsible for executing shaders; it performs the operations specified by the preceding shader code.
Here, the HQV algorithm is the video enhancement algorithm described above. The video enhancement algorithm is bound to the shader program, and how to execute the video enhancement algorithm is defined in the program; that is, the execution procedure of the specific algorithm can be written into the shader program as corresponding code so that the GPU can execute the video enhancement algorithm.
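The binding itself follows the standard compile-and-link sequence; the patent does not disclose the HQV shader source, so the sketch below only shows that sequence with placeholder sources (error checks elided for brevity):

```cpp
// Compile a vertex/fragment shader pair carrying the enhancement algorithm
// and link them into the shader program the GPU will execute.
GLuint buildEnhanceProgram(const char* vertexSrc, const char* fragmentSrc) {
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexSrc, nullptr);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentSrc, nullptr);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    glDeleteShader(vs);   // the linked program keeps the compiled binaries
    glDeleteShader(fs);
    return prog;
}
```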
S505: Determine whether the optimization is being performed for the first time.
Specifically, each optimization of the video file is recorded; for example, a count variable is set and incremented by 1 on each optimization. It is determined whether this is the first time the optimization operation is performed, that is, whether the video enhancement algorithm is being used to optimize the image data of the video file for the first time. If yes, S506 is executed; if not, S507 is executed.
S506: Bind the initial texture.
S507: Bind the temporary texture.
In addition to the temporary texture, an initial texture is also set up. Specifically, the initial texture serves as the variable through which data is fed into the temporary texture, while the content of the temporary texture is mapped directly into the FBO. Both the initial texture and the temporary texture serve as variables for data storage. Specifically, the feature data corresponding to the video enhancement algorithm is written into a data texture object, where the data texture object is the temporary texture.
When the optimization is performed for the first time, no data is stored in the temporary texture, because the temporary texture was cleared during initialization.
Therefore, when it is determined that the optimization is being performed for the first time, the video enhancement algorithm is assigned to the initial texture, and the initial texture then passes the feature data corresponding to the video enhancement algorithm to the temporary texture; specifically, the initial texture is assigned to the temporary texture. The feature data corresponding to the video enhancement algorithm is the set of parameters of the video enhancement algorithm, for example, the parameter values of the median filtering used in denoising.
If this is not the first optimization, data is already stored in the temporary texture, so there is no need to obtain the feature data corresponding to the video enhancement algorithm from the initial texture; the previously stored feature data corresponding to the video enhancement algorithm can be obtained directly from the temporary texture.
S508: Convolution rendering.
The feature data corresponding to the video enhancement algorithm is convolved with the multiple frames of image data to be rendered, so as to optimize the multiple frames of image data to be rendered. Specifically, the multiple frames of image data in the off-screen rendering buffer are optimized by rendering the rendering object and the data texture object, that is, by performing a render-to-texture (RTT) operation.
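One way to realize such a render-to-texture pass is to draw a full-screen quad into the FBO so the fragment shader touches every pixel of the frame; the sketch below assumes a quad buffer 'quadVbo', an input texture 'inputTex', and a linked 'enhanceProgram' (all names hypothetical):

```cpp
// Render-to-texture: the enhancement shader processes every pixel of the
// input frame, and the result lands in the texture attached to the FBO.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, width, height);
glUseProgram(enhanceProgram);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, inputTex);     // frame to be enhanced
glUniform1i(glGetUniformLocation(enhanceProgram, "uFrame"), 0);

glBindBuffer(GL_ARRAY_BUFFER, quadVbo);     // two triangles covering clip space
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 6);           // run the enhancement pass

glBindFramebuffer(GL_FRAMEBUFFER, 0);       // restore the default target
```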
S509: Determine whether the next optimization iteration is needed.
If the next iteration is needed, the count variable is incremented by 1 and execution returns to S505; if no further iteration is needed, execution continues with S510.
S510: Bind the rendering object to the Framebuffer.
At this point, the rendering object has already been optimized by the video enhancement algorithm; that is, the rendering object is the optimized multiple frames of image data, which are sent to the Framebuffer for storage.
S511: Clear the Framebuffer.
S512: Bind the drawing texture to the shader program.
Here, the drawing texture is the texture used to draw the image; it stores effect parameters and is used to add effects to the image data, for example shadows and the like.
S513: Texture rendering.
As above, a render-to-texture operation is performed, except that the rendering object in this step is the optimized multiple frames of image data and the texture object is the drawing texture.
S514: Determine whether the next frame of the image needs to be drawn.
After one frame of image data has been drawn, if the next frame still needs to be drawn, execution returns to S502; otherwise, S515 is executed.
S515: Output the result.
S516: Recycle the data.
After the rendered image data has been recycled, the screen is controlled to display the image data.
It should be noted that, for parts not described in detail in the above steps, reference may be made to the foregoing embodiments, and details are not repeated here.
In addition, considering that optimizing the image data with the video enhancement algorithm can introduce delay or even stutter into the video playback process, the screen refresh rate can be lowered for some video-playing clients to reduce the delay. Specifically, please refer to FIG. 6, which shows a video processing method provided by an embodiment of the present application; the method includes S601 to S607.
S601: Obtain a video playback request sent by a client, where the video playback request includes the video file.
S602: If the client meets a preset standard, lower the refresh frequency of the screen.
After the video playback request is obtained, the client requesting the video playback is determined so as to obtain the identifier of that client. Specifically, the client is a client installed in the electronic device and has a video playback function. The client has an icon on the system desktop, and the user can open the client by tapping its icon. For example, the client can be identified from the package name of the application the user taps; the package name of a video application can be obtained from the code by the system in the background, in a format such as: com.android.video.
It is determined whether the client meets the preset standard; if it does, the refresh frequency of the screen is lowered, and if it does not, the operation of lowering the refresh frequency of the screen is not performed.
Specifically, the preset standard may be a standard set by the user according to actual usage needs. For example, it may require that the name of the client belong to a certain category, that the installation time of the client fall within a preset time period, or that the developer of the client belong to a preset list; different preset standards can be set for different application scenarios.
If the client meets the preset standard, this indicates that the video played by the client has relatively low definition or a relatively small file size and does not need a high screen refresh frequency, so the refresh frequency of the screen can be lowered.
As one implementation, the screen refresh frequency corresponding to clients that meet the preset standard is a preset frequency. The electronic device obtains the current refresh frequency of the screen; if the current refresh frequency is greater than the preset frequency, the current refresh frequency is lowered to the preset frequency. Specifically, if the current refresh frequency equals the preset frequency, it is kept unchanged, and if the current refresh frequency is less than the preset frequency, it is raised to the preset frequency.
If the client does not meet the preset standard, the relationship between the current refresh frequency of the screen and the default frequency is determined; if the current refresh frequency is less than the default frequency, the current refresh frequency is raised to the default frequency, where the default frequency is greater than the preset frequency.
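The selection logic of S602 can be summarized as the following sketch (plain C++; the platform call that actually changes the panel refresh rate is device-specific and omitted):

```cpp
// Clients meeting the preset standard run at the lower 'preset' rate;
// other clients are raised to 'defaultRate' whenever they are below it.
// defaultRate is assumed to be greater than preset.
int chooseRefreshRate(bool meetsPresetStandard,
                      int current, int preset, int defaultRate) {
    if (meetsPresetStandard) {
        return preset;                        // clamp to the preset frequency
    }
    return current < defaultRate ? defaultRate : current;
}
```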
Specifically, if the client meets the preset standard, one specific implementation of lowering the refresh frequency of the screen is: obtaining the identity of the client; determining whether the identity of the client matches a preset identifier; and if it does, lowering the refresh frequency of the screen.
Here, the identity information of the client may be the client's name or package name, and preset identifiers are stored in the electronic device in advance, where the preset identifiers include the identities of multiple preset clients. The video files played by these preset clients are relatively small or of relatively low definition and do not need a high screen refresh frequency, so lowering the refresh frequency can reduce the power consumption of the electronic device.
As another implementation, if the client meets the preset standard, a specific implementation of lowering the refresh frequency of the screen is: obtaining the category of the client, determining whether the category of the client is a preset category, and if it is, lowering the refresh frequency of the screen.
The preset category may be a category set by the user according to need; for example, it may be self-media video clients. Compared with clients used for playing movies or with game clients, a self-media video client plays videos whose files are smaller or whose definition is lower, so it is necessary to determine whether the client is a video client of this kind.
Specifically, after the identifier of the client has been obtained, the type of the client is determined according to that identifier, where the identifier of the client may be the client's package name, name, and so on. For example, the correspondence between client identifiers and client categories is stored in the electronic device in advance, as shown in Table 2 below:
Table 2
Client ID | Client category
Apk1 | Game
Apk2 | Video
Apk3 | Audio
Thus, according to the correspondence between client identifiers and client categories shown in Table 2 above, the category of the client corresponding to the video file can be determined.
As one implementation, the category of the client may be the category set for the client by its developer at release, or the category set for the client by the user after the client has been installed on the electronic device. For example, when a user installs a client on the electronic device, after the installation is completed and the client is opened for the first time, a dialog box is displayed asking the user to set a category for the client. Which category the client belongs to can thus be set by the user according to need; for example, the user can set a certain social application as an audio, video, or social category.
In addition, client installation software is installed in the electronic device, and a client list is provided in it. In this list, the user can download clients and can update and open them, and the client installation software can display different clients by category, such as audio, video, or games. Therefore, when the user installs a client using the client installation software, the category of the client is already known.
Furthermore, for clients that can play both video and audio: if the client supports the video playback function, its type is set to the video type; if it does not support video playback and only supports audio playback, its type is set to the audio type. Specifically, whether a client supports the video playback function can be judged from the function descriptions contained in the client's function description information, for example, the supported playback formats, to determine whether playback of video formats is supported; it can also be determined by detecting whether the client's program modules contain a video playback module, for example, a codec algorithm for video playback.
As another implementation, if a client can play both video and audio (for example, some video playback software can play pure audio files as well as video), the category of the client can be determined from the client's usage records; that is, based on the client's usage records over a certain period, it is determined whether users tend to use the client to play video or to play audio.
Specifically, the operation behavior data of all users of the client within a preset time period is obtained, where all users means all users who have installed the client. The operation behavior data can be obtained from the server corresponding to the client; that is, when using the client, a user logs into it with the user's account, the operation behavior data corresponding to the user account is sent to the server corresponding to the client, and the server stores the obtained operation behavior data in association with the user account. In some embodiments, the electronic device sends an operation behavior query request for the client to the server corresponding to the client, and the server sends the operation behavior data of all users within the preset time period to the electronic device.
The operation behavior data includes the names and durations of the audio files played and the names and durations of the video files played. By analyzing the operation behavior data, the number of audio files played by the client within the preset time period and their total duration can be determined, as can the number of video files played and their total duration. The category of the client is then determined according to the proportions of the total audio and video playback durations within the predetermined time period. Specifically, the proportion of the total playback duration of audio files within the predetermined period is denoted the audio playback proportion, and the proportion of the total playback duration of video files within the predetermined period is denoted the video playback proportion; if the video playback proportion is greater than the audio playback proportion, the category of the client is set to the video type, and if the audio playback proportion is greater than the video playback proportion, the category of the client is set to the audio type. For example, if the preset time period is 30 days, i.e. 720 hours, and the total playback duration of audio files is 200 hours, the audio playback proportion is 27.8%; if the total playback duration of video files is 330 hours, the video playback proportion is 45.8%. The video playback proportion is then greater than the audio playback proportion, so the category of the client is set to the video type.
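The classification described above reduces to comparing two ratios, as in this sketch (names are illustrative; the durations would come from the server's operation behavior records):

```cpp
enum class ClientCategory { Audio, Video };

// Decide the category from total playback durations within the window,
// e.g. audio 200 h and video 330 h in a 720 h (30-day) window give
// proportions of 27.8% and 45.8%, so the client is classified as Video.
ClientCategory classifyClient(double audioHours, double videoHours,
                              double windowHours) {
    double audioShare = audioHours / windowHours;
    double videoShare = videoHours / windowHours;
    return videoShare > audioShare ? ClientCategory::Video
                                   : ClientCategory::Audio;
}
```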
In other embodiments, the electronic device sends a category query request for the client to the server; the server determines the above audio playback proportion and video playback proportion from the previously obtained operation behavior data corresponding to the client, and determines the category of the client according to the relationship between the audio playback proportion and the video playback proportion; for details, refer to the foregoing description.
Thus, through the records of the client's playback data, the definition and type of the videos the client plays most of the time can be determined, and from this it can be determined whether the client is a self-media video client; if it is, it is determined that the identity of the client matches the preset identifier.
S603: Acquire the multiple frames of image data to be rendered corresponding to the video file.
S604: Store the multiple frames of image data in the off-screen rendering buffer.
S605: Optimize the multiple frames of image data in the off-screen rendering buffer according to the preset video enhancement algorithm.
S606: Send the optimized multiple frames of image data to the frame buffer corresponding to the screen.
S607: Read the optimized multiple frames of image data frame by frame from the frame buffer based on the refresh frequency of the screen, and display them on the screen after rendering and composition.
When the video is played, the video controller in the GPU reads the optimized multiple frames of image data frame by frame from the frame buffer according to the refresh frequency of the screen and, after rendering and composition, displays them on the screen. The refresh frequency of the screen can be regarded as a clock signal: whenever a clock signal arrives, the optimized multiple frames of image data are read frame by frame from the frame buffer and, after rendering and composition, displayed on the screen.
Therefore, using off-screen rendering rather than on-screen rendering avoids the situation in which, if the image data were optimized in the frame buffer through on-screen rendering, the data might be taken out of the frame buffer by the video controller according to the screen refresh frequency and shown on the screen before it has been optimized.
It should be noted that the above steps S601 and S602 are not limited to being performed before S603; they may also be performed after S607. That is, the video may first be played at the current screen refresh frequency, and the current screen refresh frequency adjusted afterwards. In addition, for parts not described in detail in the above steps, reference may be made to the foregoing embodiments, and details are not repeated here.
Please refer to FIG. 7, which shows a video processing method provided by an embodiment of the present application; the method includes S701 to S706.
S701: Acquire the multiple frames of image data to be rendered corresponding to a video file.
S702: Determine whether the video file meets a preset condition.
The preset condition is a condition set by the user according to actual use. For example, the category of the video file may be obtained, and if the category of the video file is a preset category, it is determined that the video file meets the preset condition. Specifically, for the way the category of the video file is determined, refer to the foregoing embodiments.
Alternatively, the real-time requirement of the video file may be used. Because the method of the present application optimizes the video file with a video enhancement algorithm in a buffer newly set up outside the frame buffer, which prevents unenhanced frames from being displayed on the screen, the process imposes certain requirements on the real-time performance of video playback. Whether to execute the video enhancement algorithm can therefore be decided based on the real-time requirement. Specifically, the real-time level corresponding to the video file is determined, and it is judged whether that level satisfies a preset level; if it does, S703 is performed, otherwise the method ends.
Specifically, if a playback request for the video file is received, the real-time level of the video file is determined. As one implementation, the identifier of the client corresponding to the video file is determined, and the real-time level of the video file is then determined from that identifier. Specifically, the identifier of the client that sent the playback request is determined, and then the client type corresponding to that identifier; for details, refer to the foregoing embodiments.
The real-time level corresponding to the video file is then determined from the client type. Specifically, the electronic device stores the real-time level corresponding to each client type, as shown in Table 3 below:
Table 3
| Client identifier | Client category | Real-time level |
|---|---|---|
| Apk1 | Game | J1 |
| Apk2 | Video | J2 |
| Apk3 | Audio | J3 |
| Apk4 | Social | J1 |
From the above correspondence, the real-time level of the video file can be determined. For example, if the client identifier corresponding to the video file is Apk4, the corresponding category is social and the corresponding real-time level is J1. J1 is the highest level, followed by J2 and then J3 in decreasing order.
It is then judged whether the real-time level of the video file satisfies the preset level.
The preset level is a preconfigured real-time level at which the video enhancement algorithm is required, and may be set by the user as needed. For example, if the preset level is J2 and below, then a video file whose real-time level is J3 satisfies the preset level. In other words, for video files with high real-time requirements, the video enhancement algorithm may be skipped, so that enhancement does not delay playback and degrade the user experience.
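Putting Table 3 and the preset-level rule together, a sketch (the numeric encoding of J1–J3 and the lookup structure are assumptions; the table contents mirror Table 3 above):

```cpp
#include <map>
#include <string>

enum class RtLevel { J1 = 1, J2 = 2, J3 = 3 };  // J1: strictest real-time demand

// Client identifier -> real-time level, mirroring Table 3.
const std::map<std::string, RtLevel> kClientLevel = {
    {"Apk1", RtLevel::J1},  // game
    {"Apk2", RtLevel::J2},  // video
    {"Apk3", RtLevel::J3},  // audio
    {"Apk4", RtLevel::J1},  // social
};

// Preset level "J2 and below": enhance only when the file's real-time
// requirement is no stricter than J2 (i.e. levels J2 or J3).
bool shouldEnhance(const std::string& clientId,
                   RtLevel presetLevel = RtLevel::J2) {
    auto it = kClientLevel.find(clientId);
    if (it == kClientLevel.end()) return false;  // unknown client: skip
    return static_cast<int>(it->second) >= static_cast<int>(presetLevel);
}
```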
S703: Store the multi-frame image data in an off-screen rendering buffer.
For the specific implementation, refer to the foregoing embodiments.
Further, an operation may be added that determines, based on the user watching the video, whether the multi-frame image data needs to be stored in the off-screen rendering buffer.
Specifically, the electronic device is provided with a camera arranged on the same side of the device as the screen. A person image captured by the camera is obtained, and it is judged whether the person image meets a preset person criterion; if it does, the multi-frame image data is stored in the off-screen rendering buffer. In some embodiments, this judgment may replace step S702 above; in other embodiments, it may be combined with step S702. For example, it is first judged whether the person image meets the preset person criterion, and if so, whether the video file meets the preset condition; if the preset condition is also met, the multi-frame image data is stored in the off-screen rendering buffer. Alternatively, it is first judged whether the video file meets the preset condition, and if so, whether the person image meets the preset person criterion; if the person criterion is also met, the multi-frame image data is stored in the off-screen rendering buffer.
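A minimal sketch of this two-stage gating (the predicate names are placeholders for the checks described above):

```cpp
// Placeholder predicates for the two checks.
bool meetsPersonCriterion() { return true; }   // camera-side person check
bool meetsPresetCondition() { return true; }   // video-file check of S702
void storeToOffscreenBuffer() { /* proceeds to S703 */ }

// Person check first, then the file condition; swapping the operands gives
// the alternative ordering, with the same overall decision.
void maybeStore() {
    if (meetsPersonCriterion() && meetsPresetCondition()) {
        storeToOffscreenBuffer();
    }
}
```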
A specific implementation of judging whether the person image meets the preset person criterion may be as follows:
In some embodiments, a face image may be extracted from the person image, the identity information corresponding to the face image determined, and that identity information matched against preset identity information; if they match, the person image is judged to meet the preset person criterion. The preset identity information is identity information stored in advance, and identity information is an identifier used to distinguish different users. Specifically, the face image is analyzed to obtain feature information, which may be the facial features or the face contour, and the identity information is determined based on that feature information.
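One way to sketch the matching step (hypothetical feature vectors and a plain distance threshold; a deployed system would use a trained face-recognition model):

```cpp
#include <vector>

using FeatureVec = std::vector<float>;  // facial-feature / contour descriptor

// Squared Euclidean distance between two feature vectors of equal length.
float distance2(const FeatureVec& a, const FeatureVec& b) {
    float d = 0.f;
    for (size_t i = 0; i < a.size(); ++i) {
        float diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}

// The person image meets the criterion when its features are close enough
// to any pre-stored identity. The threshold is an assumed tuning value.
bool matchesStoredIdentity(const FeatureVec& face,
                           const std::vector<FeatureVec>& storedIdentities,
                           float threshold = 0.5f) {
    for (const auto& id : storedIdentities) {
        if (distance2(face, id) < threshold) return true;
    }
    return false;
}
```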
In other embodiments, the user's age bracket may be determined based on the face image. Specifically, face recognition is performed on the acquired face image to identify the current user's facial features. The system preprocesses the face image, i.e., accurately locates the position of the face in the image and detects characteristics including the face's contour, skin color, texture, and color. Useful information is then selected from these facial features according to different pattern features, such as histogram features, color features, template features, structural features, and Haar features, to infer the current user's age bracket. For example, visual features, pixel statistical features, face-image transform-coefficient features, or face-image algebraic features may be modeled with knowledge-based representation methods, or with representation methods based on algebraic features or statistical learning, and the age category of the user currently using the mobile terminal judged from these features.
The age brackets may include childhood, adolescence, youth, middle age, and old age; alternatively, one bracket may be defined for every ten years starting from age 10, or only two brackets may be used, elderly and non-elderly. Each age bracket may place different demands on video enhancement; for example, elderly users may not require a high-quality display effect for video.
After the user's age bracket is determined, it is checked whether the bracket falls within a preset range. If it does, the multi-frame image data is stored in the off-screen rendering buffer and the subsequent video enhancement algorithm is executed; if not, the method ends. The preset range may be, for example, the youth and middle-age brackets; that is, no enhancement needs to be applied to the video for the child, adolescent, and elderly brackets.
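The range check itself is simple; a sketch with assumed bracket labels:

```cpp
#include <set>

enum class AgeBracket { Child, Teen, Youth, MiddleAge, Senior };

// Enhancement proceeds only for brackets in the preset range
// (youth and middle age in the example above).
bool inPresetRange(AgeBracket bracket) {
    static const std::set<AgeBracket> preset = {AgeBracket::Youth,
                                                AgeBracket::MiddleAge};
    return preset.count(bracket) > 0;
}
```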
S704: Optimize the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm.
S705: Send the optimized multi-frame image data to the frame buffer corresponding to the screen.
S706: Read the optimized multi-frame image data from the frame buffer and display it on the screen.
As shown in FIG. 8, an HQV algorithm module is added inside the GPU; this HQV algorithm module is the module that executes the present video processing method. Compared with FIG. 2, when the image data to be rendered is sent to SurfaceFlinger after soft decoding, it is intercepted and optimized by the HQV algorithm module before being passed on to SurfaceFlinger for rendering and the subsequent display operation on the screen.
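The data path in FIG. 8 thus reduces to soft decode → HQV optimization → SurfaceFlinger. As a hedged sketch (the function names are placeholders, not Android APIs):

```cpp
struct Frame { /* one decoded frame of image data */ };

// Placeholder hooks -- the real implementations are platform code.
Frame softDecodeNextFrame() { return Frame{}; }  // CPU soft decoding
Frame hqvOptimize(const Frame& f) { return f; }  // HQV enhancement module
void submitToSurfaceFlinger(const Frame&) {}     // rendering/display hand-off

// The HQV module sits between soft decoding and SurfaceFlinger: each frame
// is intercepted and optimized before it is rendered and shown on screen.
void pumpOneFrame() {
    submitToSurfaceFlinger(hqvOptimize(softDecodeNextFrame()));
}
```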
In addition, for parts of the above steps not described in detail, reference may be made to the foregoing embodiments, which are not repeated here.
Referring to FIG. 9, a structural block diagram of a video processing apparatus 900 provided by an embodiment of the present application is shown. The apparatus may include: an obtaining unit 901, a first storage unit 902, an optimization unit 903, a second storage unit 904, and a display unit 905.
The obtaining unit 901 is configured to obtain the multi-frame image data to be rendered corresponding to a video file.
The first storage unit 902 is configured to store the multi-frame image data in an off-screen rendering buffer.
The optimization unit 903 is configured to optimize the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm.
The second storage unit 904 is configured to send the optimized multi-frame image data to the frame buffer corresponding to the screen.
The display unit 905 is configured to read the optimized multi-frame image data from the frame buffer and display it on the screen.
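Mirroring the five units as one class (an illustrative sketch under assumed types, not the embodiment's code):

```cpp
#include <vector>

struct FrameData { /* multi-frame image data to be rendered */ };

class VideoProcessingApparatus {
public:
    // obtaining unit 901: fetch decoded frames for the current video file
    FrameData obtain() { return FrameData{}; }
    // first storage unit 902: stage frames in the off-screen rendering buffer
    void storeOffscreen(const FrameData& d) { offscreen_.push_back(d); }
    // optimization unit 903: apply the preset video enhancement algorithm
    void optimize() { /* enhance every frame in offscreen_ */ }
    // second storage unit 904: move optimized frames to the screen's frame buffer
    void sendToFrameBuffer() { /* hand frames to the frame buffer */ }
    // display unit 905: read from the frame buffer and show on screen
    void display() { /* platform-specific presentation */ }

    // The units act in pipeline order, matching the method steps above.
    void process() {
        storeOffscreen(obtain());
        optimize();
        sendToFrameBuffer();
        display();
    }

private:
    std::vector<FrameData> offscreen_;  // stands in for the off-screen buffer
};
```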
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the apparatus and modules described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, the coupling between modules may be electrical, mechanical, or of another form.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to FIG. 10, a structural block diagram of an electronic device provided by an embodiment of the present application is shown. The electronic device 100 may be an electronic device capable of running a client, such as a smartphone, a tablet computer, or an e-reader. The electronic device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, a screen 140, and one or more clients, where the one or more clients may be stored in the memory 120 and configured to be executed by the one or more processors 110, and the one or more programs are configured to perform the method described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects the various parts of the entire electronic device 100 using various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by invoking data stored in the memory 120. Optionally, the processor 110 may be implemented in at least one hardware form among digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
Specifically, the processor 110 may include one or a combination of a central processing unit (CPU) 111, a graphics processing unit (GPU) 112, a modem, and the like. The CPU mainly handles the operating system, the user interface, clients, and so on; the GPU is responsible for rendering and drawing the display content; the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include random access memory (RAM) or read-only memory (ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the method embodiments described herein, and so on. The data storage area may also store data created by the terminal 100 during use (such as a phone book, audio and video data, and chat history).
The screen 140 is used to display information input by the user, information provided to the user, and the various graphical user interfaces of the electronic device; these graphical user interfaces may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, a touch screen may be disposed on the display panel so as to form a single integrated unit with the display panel.
Referring to FIG. 11, a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application is shown. The computer-readable medium 1100 stores program code, which can be invoked by a processor to perform the methods described in the foregoing method embodiments.
The computer-readable storage medium 1100 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. Optionally, the computer-readable storage medium 1100 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 1100 has storage space for program code 1111 that performs any of the method steps of the above methods. The program code can be read from, or written into, one or more computer program products. The program code 1111 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (20)
- A video processing method, applied to an image processor of an electronic device, the electronic device further comprising a screen, the method comprising: obtaining multi-frame image data to be rendered corresponding to a video file; storing the multi-frame image data in an off-screen rendering buffer; optimizing the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm; sending the optimized multi-frame image data to a frame buffer corresponding to the screen; and reading the optimized multi-frame image data from the frame buffer and displaying it on the screen.
- The method according to claim 1, wherein optimizing the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm comprises: optimizing image parameters of the multi-frame image data in the off-screen rendering buffer, wherein the image parameter optimization includes at least one of exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase.
- The method according to claim 2, wherein, when the image parameter optimization includes exposure enhancement, optimizing the image parameters of the multi-frame image data in the off-screen rendering buffer comprises: determining a region of low brightness values in each frame of image data in the off-screen rendering buffer; and increasing the brightness values of the region of low brightness values in the image data.
- The method according to claim 2, wherein, when the image parameter optimization includes denoising, optimizing the image parameters of the multi-frame image data in the off-screen rendering buffer comprises: denoising the multi-frame image data in the off-screen rendering buffer with a Gaussian filter.
- The method according to any one of claims 1-4, wherein, before optimizing the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm, the method further comprises: obtaining a video type corresponding to the video file; and determining the video enhancement algorithm based on the video type.
- The method according to claim 5, wherein obtaining the video type corresponding to the video file comprises: determining the categories of all objects in each frame image of the video file; determining the category of each frame image according to the proportion that objects of each category occupy among all objects in that frame image; and determining the video type corresponding to the video file according to the category of each frame image in the video file.
- The method according to any one of claims 1-6, wherein storing the multi-frame image data in an off-screen rendering buffer and optimizing the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm comprise: writing the multi-frame image data into a render object; writing feature data corresponding to the video enhancement algorithm into a data texture object, wherein the render object and the data texture object are bound to the off-screen rendering buffer; and optimizing the multi-frame image data in the off-screen rendering buffer by rendering the render object and the data texture object.
- The method according to any one of claims 1-7, wherein, before obtaining the multi-frame image data to be rendered corresponding to the video file, the method further comprises: obtaining, by a central processing unit, a video file to be played, and processing the video file according to a soft decoding algorithm to obtain the multi-frame image data corresponding to the video file.
- The method according to any one of claims 1-8, wherein reading the optimized multi-frame image data frame by frame from the frame buffer and displaying it on the screen after rendering and composition comprises: reading the optimized multi-frame image data frame by frame from the frame buffer based on the refresh frequency of the screen, and displaying it on the screen after rendering and composition.
- The method according to claim 9, further comprising: obtaining a video playback request sent by a client, the video playback request including the video file; and, if the client meets a preset criterion, reducing the refresh frequency of the screen.
- The method according to claim 10, wherein, if the client meets a preset criterion, reducing the refresh frequency of the screen comprises: obtaining an identity of the client; judging whether the identity of the client meets a preset identity; and, if it does, reducing the refresh frequency of the screen.
- The method according to claim 10, wherein, if the client meets a preset criterion, reducing the refresh frequency of the screen comprises: obtaining a category of the client; judging whether the category of the client is a preset category; and, if it is, reducing the refresh frequency of the screen.
- The method according to claim 12, wherein obtaining the category of the client comprises: if the client supports both video file playback and audio file playback, obtaining operation behavior data of all users of the client within a preset time period, the operation behavior data including the names and times of the audio files played and the names and times of the video files played; determining the total playback duration of audio files and the total playback duration of video files according to the operation behavior data; and determining the category of the client according to the proportions of the total audio playback duration and the total video playback duration within the preset time period.
- The method according to claim 13, wherein the category of the client includes a video type and an audio type, and determining the category of the client according to the proportions of the total audio playback duration and the total video playback duration within the preset time period comprises: recording the proportion of the total audio playback duration within the preset time period as the audio playback proportion, and the proportion of the total video playback duration within the preset time period as the video playback proportion; if the video playback proportion is greater than the audio playback proportion, setting the category of the client to the video type; and, if the audio playback proportion is greater than the video playback proportion, setting the category of the client to the audio type.
- The method according to any one of claims 1-14, wherein storing the multi-frame image data in an off-screen rendering buffer comprises: judging whether the video file meets a preset condition; and, if it does, storing the multi-frame image data in the off-screen rendering buffer.
- The method according to claim 15, wherein judging whether the video file meets a preset condition comprises: determining a real-time level corresponding to the video file; judging whether the real-time level of the video file satisfies a preset level; if it satisfies the preset level, judging that the video file meets the preset condition; and, if it does not satisfy the preset level, judging that the video file does not meet the preset condition.
- The method according to claim 15, wherein the video file is a person image captured by a camera of the electronic device, and judging whether the video file meets a preset condition comprises: judging whether the person image meets a preset person criterion; if it meets the preset person criterion, judging that the video file meets the preset condition; and, if it does not meet the preset person criterion, judging that the video file does not meet the preset condition.
- A video processing apparatus, applied to an image processor of an electronic device, the electronic device further comprising a screen, the apparatus comprising: an obtaining unit, configured to obtain multi-frame image data to be rendered corresponding to a video file; a first storage unit, configured to store the multi-frame image data in an off-screen rendering buffer; an optimization unit, configured to optimize the multi-frame image data in the off-screen rendering buffer according to a preset video enhancement algorithm; a second storage unit, configured to send the optimized multi-frame image data to a frame buffer corresponding to the screen; and a display unit, configured to read the optimized multi-frame image data from the frame buffer and display it on the screen.
- An electronic device, comprising: an image processor; a memory; a screen; and one or more clients, wherein the one or more clients are stored in the memory and configured to be executed by the image processor, and the one or more programs are configured to perform the method according to any one of claims 1-17.
- A computer-readable medium, wherein the computer-readable storage medium stores program code, and the program code can be invoked by a processor to perform the method according to any one of claims 1-17.