WO2020108091A1 - Video processing method, apparatus, electronic device, and storage medium - Google Patents

Video processing method, apparatus, electronic device, and storage medium

Info

Publication number
WO2020108091A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
level
enhancement
image processing
algorithm
Prior art date
Application number
PCT/CN2019/109855
Other languages
English (en)
French (fr)
Inventor
胡小朋 (Hu Xiaopeng)
杨海 (Yang Hai)
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Publication of WO2020108091A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44222Analytics of user selections, e.g. selection of programs or purchase activity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4854End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast

Definitions

  • the present application relates to the technical field of electronic equipment, and more specifically, to a video processing method, device, electronic equipment, and storage medium.
  • the present application proposes a video processing method, apparatus, electronic device, and storage medium to address the above problems.
  • an embodiment of the present application provides a video processing method. The method includes: receiving a target level selected from a plurality of different levels corresponding to video enhancement, where different levels of video enhancement correspond to enhancement processing methods that enhance the image quality of the video differently; acquiring the enhancement processing method corresponding to the target level; and enhancing the video through the acquired enhancement processing method, where the enhancement processing improves the image quality of the video frames by adjusting the image parameters of the video.
  • an embodiment of the present application provides a video processing device. The device includes: a level receiving module configured to receive a target level selected from a plurality of different levels corresponding to video enhancement, where different levels of video enhancement correspond to enhancement processing methods that enhance the image quality of the video differently; a processing method acquisition module configured to acquire the enhancement processing method corresponding to the target level; and a processing module configured to enhance the video through the acquired enhancement processing method, where the enhancement processing improves the image quality of the video frames by adjusting the image parameters of the video.
  • an embodiment of the present application provides an electronic device, including: one or more processors; a memory; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the above method.
  • an embodiment of the present application provides a computer-readable storage medium, in which a program code is stored, and the program code can be called by a processor to execute the above method.
  • FIG. 1 shows a schematic flowchart of video playback provided by an embodiment of the present application.
  • FIG. 2 shows a flowchart of a video processing method provided by an embodiment of the present application.
  • FIG. 3 shows a display interface diagram of level selection provided by an embodiment of the present application.
  • FIG. 4 shows another display interface diagram of level selection provided by an embodiment of the present application.
  • FIG. 5 shows yet another display interface diagram of level selection provided by an embodiment of the present application.
  • FIG. 6 shows a flowchart of a video processing method provided by another embodiment of the present application.
  • FIG. 7 shows a correspondence table provided by an embodiment of the present application.
  • FIG. 8 shows another correspondence table provided by an embodiment of the present application.
  • FIG. 9 shows another correspondence table provided by the embodiment of the present application.
  • FIG. 10 shows a flowchart of a video processing method provided by another embodiment of the present application.
  • FIG. 11 shows a flowchart of a video processing method provided by still another embodiment of the present application.
  • FIG. 12 shows a functional block diagram of a video processing device provided by an embodiment of the present application.
  • FIG. 13 shows a structural block diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 14 is a storage unit for storing or carrying program code for implementing a video processing method according to an embodiment of the present application.
  • FIG. 1 shows a video playback process.
  • when a video is played, the first task is to parse the audio and video data.
  • general video files are composed of a video stream and an audio stream, and different video formats use different audio and video packaging (container) formats.
  • the process of combining audio and video streams into a file is called muxing (performed by a muxer), while the process of separating the audio and video streams from a media file is called demuxing (performed by a demuxer).
  • to play a video file, the audio stream and the video stream need to be separated from the file stream and decoded separately.
  • the decoded video frames can be directly rendered, and the corresponding audio can be sent to the buffer of the audio output device for playback.
  • the timestamps of video rendering and audio playback need to be controlled for synchronization.
  • each video frame is one frame of image in the video.
  • video decoding may include hard decoding and soft decoding.
  • in hard decoding, part of the video data that would originally be processed by the Central Processing Unit (CPU) is handed over to the Graphics Processing Unit (GPU). Because the GPU's parallel computing capability is much higher than the CPU's, this can greatly reduce the load on the CPU, and with the CPU usage reduced, other programs can run at the same time.
  • a processor at the level of an Intel i5-2320, or any comparable AMD quad-core processor, can perform both hard decoding and soft decoding.
  • the multimedia framework obtains the video file to be played by the client through an API interface with the client, and hands it to the video codec (Video Decode).
  • Media Framework is the multimedia framework in the Android system
  • MediaPlayer, MediaPlayerService and Stagefrightplayer constitute the basic framework of Android multimedia.
  • the multimedia framework adopts a client/server (C/S) structure.
  • MediaPlayer serves as the client side of the C/S structure.
  • MediaPlayerService and Stagefrightplayer serve as the server side of the C/S structure; they assume the responsibility of playing multimedia files.
  • the video decoder (Video Decode) is a super decoder that integrates the most commonly used audio and video decoding and playback functions and is used to decode the video data.
  • soft decoding means letting the CPU decode the video through software.
  • Hard decoding means that the video decoding task can be completed independently through a dedicated daughter card device without resorting to the CPU.
  • the decoded video data is sent to the layer composition module (SurfaceFlinger); as shown in Figure 1, the hard-decoded video data is sent through the video driver to SurfaceFlinger.
  • SurfaceFlinger renders and synthesizes the decoded video data and displays it on the display.
  • SurfaceFlinger is an independent Service. It receives all Windows' Surfaces as input, calculates the position of each Surface in the final composite image according to Z-order, transparency, size, position, and other parameters, and then hands the result to HWComposer or OpenGL to generate the final display buffer, which is then displayed on a specific display device.
  • in soft decoding, the CPU decodes the video data and hands it to SurfaceFlinger for rendering and synthesis
  • in hard decoding, the video is decoded by the GPU and then rendered and synthesized by SurfaceFlinger.
  • the SurfaceFlinger will call the GPU to render and synthesize the image and display it on the display.
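As a rough illustration of the compositing step described above, the sketch below blends window surfaces in Z order with per-surface transparency, in the spirit of SurfaceFlinger's composition. It is not the Android implementation: the surface fields, the single-value "pixel" model, and the grayscale frame buffer are all simplifications for illustration.

```python
def composite(surfaces, width, height):
    """Blend surfaces (lowest Z first) into a single grayscale frame buffer."""
    frame = [[0.0] * width for _ in range(height)]
    for s in sorted(surfaces, key=lambda s: s["z"]):
        a = s["alpha"]
        for y in range(s["y"], min(s["y"] + s["h"], height)):
            for x in range(s["x"], min(s["x"] + s["w"], width)):
                # Standard "over" blending: src*alpha + dst*(1 - alpha).
                frame[y][x] = a * s["pixel"] + (1 - a) * frame[y][x]
    return frame

surfaces = [
    {"z": 0, "x": 0, "y": 0, "w": 4, "h": 4, "alpha": 1.0, "pixel": 0.2},  # video layer
    {"z": 1, "x": 1, "y": 1, "w": 2, "h": 2, "alpha": 0.5, "pixel": 1.0},  # UI overlay
]
frame = composite(surfaces, 4, 4)
```

The real compositor works on full pixel buffers and may offload blending to HWComposer or the GPU, but the Z-order-then-blend structure is the same.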
  • during playback, the video can be enhanced: the enhancement can be performed after decoding, and the enhanced frames are then rendered, synthesized, and displayed on the display screen.
  • the enhancement process improves the image quality of the video frame by adjusting the image parameters of the video frame in the video, improves the display effect of the video, and obtains a better viewing experience.
  • the image quality of a video frame can include parameters such as clarity, sharpness, saturation, detail, lens distortion, color, resolution, color gamut, and purity.
  • each enhancement processing method includes a corresponding image processing algorithm for performing image processing on the video frames to adjust the image parameters of the video frames and thereby improve the image quality of the video frames.
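As a concrete (hypothetical) example of adjusting one image parameter, the sketch below implements a simple per-pixel contrast adjustment of the kind an enhancement processing method's image processing algorithm might apply. The function name, the 8-bit value range, and the pivot value are illustrative assumptions, not taken from the patent.

```python
def adjust_contrast(pixels, factor, pivot=128):
    """Scale each 8-bit pixel value away from the pivot; clamp to [0, 255]."""
    return [max(0, min(255, round(pivot + (p - pivot) * factor)))
            for p in pixels]

# factor > 1 stretches contrast; factor < 1 flattens it.
stretched = adjust_contrast([128, 64, 192], 2.0)
```

A real enhancement pipeline would apply such an operation per channel over whole frames (typically on the GPU), but the per-pixel arithmetic is the same idea.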
  • FIG. 2 shows a video processing method provided by an embodiment of the present application.
  • the video processing method is used to obtain the enhancement processing method corresponding to the target level when a target level selected from a plurality of different levels is received.
  • the enhancement processing methods corresponding to different levels are different, so that users can select a corresponding enhancement processing method according to actual needs, achieving differentiated video enhancement and improving user experience.
  • the video processing method is applied to the video processing device 500 shown in FIG. 12 and the electronic device 600 (FIG. 13) equipped with the video processing device 500.
  • the following uses electronic equipment as an example to describe the specific process of this embodiment.
  • the electronic device used in this embodiment may be a smart phone, tablet computer, desktop computer, wearable electronic device, vehicle-mounted device, gateway, or any of various devices capable of video processing, which is not specifically limited here.
  • the method includes:
  • Step S110 Receive a target level selected from a plurality of different levels corresponding to video enhancement.
  • different levels of video enhancement correspond to enhancement processing methods that enhance the image quality of the video differently.
  • the electronic device can display the video after decoding, enhancing, and rendering the acquired video data.
  • the electronic device may obtain video data from the server, may obtain video data locally, or may obtain video data from other electronic devices.
  • when the video data is acquired from the server, the video data may be downloaded by the electronic device from the server or acquired online from the server.
  • the video data may be video data downloaded by the electronic device through the installed video playback software or obtained online in the video playback software.
  • the server may be a cloud server.
  • when the video data is acquired locally, the video data may have been downloaded in advance by the electronic device and stored in local storage.
  • when the video data is acquired from another electronic device, the video data can be transmitted to the electronic device through a wireless communication protocol, for example, the WLAN protocol, Bluetooth protocol, ZigBee protocol, or WiFi protocol, or through a data network, for example, a 2G, 3G, or 4G network, which is not limited herein.
  • the electronic device obtains the video data, decodes, renders, and synthesizes it, and then plays it through the display. If a control instruction related to video enhancement is received, the video data is enhanced, and the enhanced video is played.
  • multiple different levels can be set for video enhancement, and different levels of video enhancement have corresponding enhancement processing methods.
  • the enhancement processing methods corresponding to different levels of video enhancement enhance the image quality of the video differently.
  • the number of levels to be set is not limited, and only two levels may be set, or three or more levels may be set.
  • the user can select a level from the multiple set levels to enhance the video; that is, when the user selects a level, the video is enhanced through the enhancement processing method corresponding to that level, achieving the image quality enhancement effect corresponding to that level.
  • the level selected for enhancement processing is defined as the target level.
  • the target level selected from the multiple different levels corresponding to video enhancement may be determined by the electronic device when the video is opened. For example, if the default setting of an application that plays video is to turn on a certain level of video enhancement, then when the video is opened, that level is used as the selected target level. For another example, if a certain level of video enhancement was turned on the previous time the application was used, then when a video in the application is opened again, that level of video enhancement is considered received and is used as the target level. Or, if a certain level of video enhancement was on when the video was last closed, that level is used as the selected target level when the video is opened again.
  • the target level may also be determined by user selection received during video playback.
  • the electronic device may receive a user's selection instruction, and use the video enhancement level corresponding to the selection instruction as the target level.
  • for example, if the two levels are a high level and a low level, as shown in Figure 3, a high-level video enhancement switch and a low-level video enhancement switch are set, corresponding to high-level video enhancement and low-level video enhancement respectively.
  • the operation of turning on a switch is the user's selection instruction; if the video enhancement level corresponding to the selection instruction is the low level, then the target level received from the multiple different levels corresponding to video enhancement is the low level.
  • alternatively, a video enhancement switch that can be toggled between the high level and the low level is set. If the user switches the video enhancement switch to low-level enhancement as shown in FIG. 5, the switching operation is the user's selection instruction, the video enhancement level corresponding to the selection instruction is the low level, and the target level selected from the multiple different levels corresponding to video enhancement is received as the low level.
  • the video enhancement switch can be hidden. When a touch operation such as a click on the video is received, the video enhancement switch is displayed and placed in a controllable state. When the video receives no touch operation from the user for a certain period of time, the video enhancement switch is hidden again.
  • the manner of selecting the target level may also include other methods: the power of the electronic device may be acquired, and the target level selected according to the relationship between the electronic device's remaining power and a target power. For example, the multiple different levels include a level with low power consumption during enhancement processing, whose power consumption during enhancement processing is lower than that of the other levels.
  • the power of the electronic device may be acquired to determine whether it is less than the target power. If the current enhancement processing corresponds to a level with high power consumption and the power of the electronic device is determined to be less than the target power, the level with low power consumption is taken as the selected target level.
  • the specific power value of the target power is not limited in the embodiments of the present application, and may be 30%, 20%, etc. of the total power of the electronic device.
  • the target power can also be set by the user and stored in the electronic device.
  • whether this embodiment is executed in the electronic device can be determined by user settings. Specifically, the user can set whether to switch to low-power-consumption enhancement processing when the battery is low. If the user has enabled this setting, then while the video is being enhanced, the power of the electronic device is acquired and it is determined whether it is less than the target power; when the power of the electronic device is less than the target power, the level with low power consumption is taken as the selected target level.
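The battery-based selection described above can be sketched as follows. The level names, the default threshold of 20%, and the setting flag are illustrative assumptions; the patent only fixes the logic (low-power level becomes the target level when the setting is enabled and the charge drops below the target power).

```python
LOW_POWER_LEVEL = "third_level"  # assumed to be the level with low power consumption

def select_target_level(current_level, battery_pct, target_pct=20,
                        low_power_switch_enabled=True):
    """Return the target level, switching to the low-power level when the
    user-enabled setting is on and the battery is below the target power."""
    if low_power_switch_enabled and battery_pct < target_pct:
        return LOW_POWER_LEVEL
    return current_level
```

With the setting disabled, or with sufficient charge, the currently selected level is kept unchanged.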
  • Step S120: Obtain the enhancement processing method corresponding to the target level.
  • the enhancement processing method corresponding to the target level can be obtained according to the correspondence between levels and enhancement processing methods.
  • Step S130: Perform enhancement processing on the video through the acquired enhancement processing method, where the enhancement processing improves the image quality of the video frames by adjusting the image parameters of the video.
  • Each enhancement processing method includes an image processing algorithm to achieve a corresponding enhancement processing effect.
  • the video is enhanced by the image processing algorithm included in the enhancement processing method: the image parameters of the video frames are adjusted by that algorithm, thereby adjusting the parameters related to image quality and improving the video quality.
  • enhancing the video through an enhancement processing method means performing image processing on the video data corresponding to each video frame using the image processing algorithm included in that enhancement processing method.
  • multiple different levels can be set for the enhancement processing of the video.
  • the video is enhanced by the enhancement processing method corresponding to the target level.
  • the selection of the enhancement processing method can be varied, and the level of video enhancement can be selected according to requirements, realizing differentiated video enhancement and improving the user experience of video enhancement.
  • another embodiment of the present application provides a video processing method that selects the enhancement processing method corresponding to the target level according to the correspondence between levels and enhancement processing methods.
  • the method includes:
  • Step S210 Receive a target level selected from a plurality of different levels corresponding to video enhancement.
  • different levels of video enhancement correspond to enhancement processing methods that enhance the image quality of the video differently.
  • for this step, reference may be made to step S110, which will not be repeated here.
  • Step S220: Look up the level parameter corresponding to the target level in the correspondence table of levels and enhancement processing methods.
  • Step S230: Determine the enhancement processing method corresponding to the found level parameter as the enhancement processing method corresponding to the target level.
  • each enhancement processing method includes one or more image processing algorithms to achieve corresponding processing effects.
  • the different levels of video enhancement may be graded: the higher the level, the better the video enhancement effect, so that the user can choose between a better enhancement effect and a more general enhancement processing method by selecting among the levels.
  • the level of video enhancement is different, and the corresponding enhancement processing method includes different image processing algorithms.
  • the higher the level of video enhancement, the better the image quality enhancement produced by the corresponding enhancement processing method.
  • better picture quality may include higher definition, lower noise, clearer details, higher saturation, and so on; the better the video picture quality, the better the user's viewing experience.
  • the higher the level of video enhancement, the more types of image processing algorithms, serving different image processing purposes, are included.
  • for example, the levels of video enhancement include a third level, a second level, and a first level in increasing order, and the number of types of image processing algorithms included increases in sequence from the third level to the first level.
  • commonly used image processing algorithms for enhancing video quality include image processing algorithms for increasing brightness, adjusting saturation, adjusting contrast, adjusting details, deblocking, removing edge aliasing, and removing striping.
  • the first level of video enhancement can include the image processing algorithms for increasing brightness, adjusting saturation, adjusting contrast, adjusting details, deblocking, removing edge aliasing, and removing striping.
  • the second level of video enhancement includes the image processing algorithms for increasing brightness, adjusting saturation, adjusting contrast, and adjusting details.
  • the third level of video enhancement includes the deblocking, edge-aliasing-removal, and striping-removal image processing algorithms.
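The three-level example above can be summarized as a mapping from levels to algorithm sets: the first (highest) level includes every algorithm, the second the parameter-adjustment algorithms, and the third the artifact-removal algorithms. The short algorithm identifiers below are illustrative names, not from the patent.

```python
LEVEL_ALGORITHMS = {
    # First level: all seven algorithms (parameter adjustment + artifact removal).
    "first":  ["brightness", "saturation", "contrast", "detail",
               "deblock", "de_alias", "de_stripe"],
    # Second level: parameter-adjustment algorithms only.
    "second": ["brightness", "saturation", "contrast", "detail"],
    # Third level: artifact-removal algorithms only.
    "third":  ["deblock", "de_alias", "de_stripe"],
}
```

Note that in this example the first level is exactly the union of the second and third levels, which matches the text's claim that higher levels include more types of algorithms.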
  • different levels of video enhancement may have different parameter settings of the image processing algorithm in the corresponding enhancement processing mode, so that the higher the level, the better the enhanced image quality.
  • the higher the level, the higher the precision of the image processing algorithms included in the corresponding enhancement processing method.
  • the enhancement processing method corresponding to the target level may be obtained through the correspondence table between levels and enhancement processing methods.
  • each level may correspond to a level parameter, that is, different level parameters correspond to different levels, and each level parameter represents a video enhancement level.
  • the electronic device may store a correspondence table between the level parameters and the enhanced processing method.
  • the correspondence table may be downloaded together with the video application, downloaded with a video enhancement plug-in, downloaded or updated when the system of the electronic device is updated, pushed to the electronic device when the server has a new correspondence table, requested periodically by the electronic device from the server, or obtained from the server when the electronic device needs the table to look up the enhancement processing method corresponding to the target level. How and when the correspondence table is obtained is not limited in the embodiments of the present application.
  • each level parameter corresponds to an enhancement processing method
  • the enhancement processing method corresponding to a level parameter is the method that performs enhancement processing at the level represented by that parameter.
  • the enhancement processing methods in the correspondence table can also be expressed by method parameters.
  • the level parameter corresponding to the target level is obtained, the level parameter is looked up in the correspondence table, and the enhancement processing method corresponding to the found level parameter is determined as the enhancement processing method corresponding to the target level.
  • the levels of video enhancement include the first level, the second level, and the third level.
  • the correspondence table is shown in FIG. 7, and the three level parameters A, B, and C correspond to the enhancement processing methods a, b, and c, respectively.
  • the three level parameters A, B, and C represent the first level, the second level, and the third level, respectively: level parameter A represents the first level, level parameter B represents the second level, and level parameter C represents the third level.
  • a, b, and c represent different enhancement processing methods, respectively.
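The Figure 7 lookup (steps S220 and S230) reduces to two dictionary lookups. The sketch below assumes the A/B/C level parameters and a/b/c method identifiers named above; the level-name keys are illustrative.

```python
# Level parameter -> enhancement processing method (Figure 7).
CORRESPONDENCE = {"A": "a", "B": "b", "C": "c"}
# Level -> level parameter.
LEVEL_PARAMS = {"first": "A", "second": "B", "third": "C"}

def enhancement_method_for(target_level):
    # Step S220: look up the level parameter for the target level.
    level_param = LEVEL_PARAMS[target_level]
    # Step S230: the method for that parameter is the method for the target level.
    return CORRESPONDENCE[level_param]
```

In practice the table would be loaded from the application, a plug-in, or the server, as described above, rather than hard-coded.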
  • the image processing algorithms corresponding to an enhancement processing method can be determined as follows.
  • each enhancement processing method corresponds to a method parameter
  • each method parameter corresponds to one or more algorithm identity parameters
  • each algorithm identity parameter represents an image processing algorithm.
  • the method parameter corresponding to the enhancement processing method can be obtained, and through this method parameter, the corresponding algorithm identity parameters are obtained. Since each algorithm identity parameter represents an image processing algorithm, the image processing algorithms included in the enhancement processing method can be determined.
  • the video can be enhanced by the image processing algorithms represented by the acquired algorithm identity parameters; that is, the image processing algorithms corresponding to the enhancement processing method can be called to enhance the video.
  • the image processing algorithms corresponding to each enhancement processing method may also be packaged as a program module. After the enhancement processing method is determined, the program module corresponding to it is called.
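The parameter-to-algorithm dispatch described above can be sketched as a registry of callables: a method parameter resolves to algorithm identity parameters, each naming an image processing algorithm that is applied to every frame in turn. All names, and the trivial placeholder algorithms, are hypothetical.

```python
def inc_brightness(frame):
    """Illustrative algorithm: raise each 8-bit pixel value, clamped at 255."""
    return [min(255, p + 10) for p in frame]

def denoise(frame):
    """Placeholder standing in for a real denoising algorithm."""
    return frame[:]

# Algorithm identity parameter -> image processing algorithm.
ALGORITHMS = {"bright": inc_brightness, "denoise": denoise}
# Method parameter -> algorithm identity parameters.
MODE_TO_IDS = {"mode_a": ["bright", "denoise"]}

def enhance(frame, method_param):
    """Apply, in order, every algorithm the method parameter resolves to."""
    for algo_id in MODE_TO_IDS[method_param]:
        frame = ALGORITHMS[algo_id](frame)
    return frame
```

Packaging the algorithms of one method as a single program module, as the text suggests, would correspond to wrapping such a chain behind one callable.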
  • in a low-resolution video, the noise is severe, the images in the video frames are blurred, the image edges are unclear, and there is edge noise.
  • in a high-resolution video, such as a high-definition video, the noise is small and the images are clear. Because the characteristics of the images in high-resolution and low-resolution video frames differ, applying the same enhancement processing method at the same enhancement level may yield completely different picture quality, and the enhancement effect may not be ideal. For example, sharpening can make a high-definition image clearer, but sharpening and denoising are contradictory: in a low-resolution video, sharpening may amplify edge noise instead. Therefore, for videos of different resolutions to reach the same or similar picture quality after enhancement, different image processing algorithms, and hence different enhancement processing methods, are required for different resolutions.
  • different correspondence tables may be set for different video resolutions according to the characteristics of each resolution.
  • in different correspondence tables, the enhancement processing methods corresponding to the same level parameter are different; that is, for the same level of video enhancement, different enhancement processing methods can be chosen for videos of different resolutions.
  • the electronic device can obtain the resolution of the video according to the video data.
  • the resolution of the video is a parameter used to measure the amount of data in a video frame, which can be expressed in the form W*H, where W is the number of effective pixels of the video frame in the horizontal direction and H is the number of effective pixels in the vertical direction.
  • one way to obtain the resolution of the video is for the electronic device to decode the video to obtain decoded video data, which includes the resolution of the video, and to extract the resolution from that video data.
  • in the decoded video data there is a data portion corresponding to the resolution, which may be a segment of data. Therefore, the data portion corresponding to the resolution can be obtained from the decoded data of the video, and the resolution of the video can then be obtained from that data portion.
  • H.264, which is also Part 10 of MPEG-4, is a high-compression digital video codec standard proposed by the Joint Video Team (JVT), formed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG).
  • For a bitstream encoded with H.264, the stream information of the bitstream includes the resolution of the video, and this stream information is stored in a special structure called the SPS (Sequence Parameter Set). The SPS is the data portion corresponding to the resolution in the decoded data.
  • According to the format of the H.264 bitstream, units in the stream begin with the start code 0x00 0x00 0x01 or 0x00 0x00 0x00 0x01, so whether a unit is an SPS can be determined by checking whether the last five bits of the first byte after the start code equal 7 (00111). After the SPS is obtained, the resolution of the video can be parsed from it.
  • The SPS has two members, pic_width_in_mbs_minus1 and pic_height_in_map_units_minus_1, which represent the width and height of the image respectively. Both are expressed in units of 16 (16*16 blocks in area) and reduced by 1, so the actual width is (pic_width_in_mbs_minus1+1)*16 and the height is (pic_height_in_map_units_minus_1+1)*16; that is, W in the resolution above is (pic_width_in_mbs_minus1+1)*16 and H is (pic_height_in_map_units_minus_1+1)*16.
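The SPS detection and size computation described above can be sketched as follows (a simplified Python illustration; a real parser must also locate the start codes in the byte stream, Exp-Golomb-decode the SPS fields, and apply the frame-cropping offsets, all of which are omitted here):

```python
def is_sps_nal(first_byte_after_start_code: int) -> bool:
    # The NAL unit type is carried in the low five bits of the first byte
    # after the start code; type 7 (binary 00111) marks an SPS.
    return (first_byte_after_start_code & 0x1F) == 7

def sps_fields_to_resolution(pic_width_in_mbs_minus1: int,
                             pic_height_in_map_units_minus_1: int) -> tuple:
    # Both fields store the dimension in 16-pixel macroblock units, minus 1,
    # so the coded size is (field + 1) * 16. A full parser would also apply
    # the SPS frame-cropping offsets (e.g. 1920x1088 coded -> 1920x1080).
    width = (pic_width_in_mbs_minus1 + 1) * 16
    height = (pic_height_in_map_units_minus_1 + 1) * 16
    return width, height
```

For example, 0x67 is a typical first byte of an SPS unit (its low five bits are 7), and field values 119 and 67 yield a coded size of 1920x1088 before cropping.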
  • After the resolution of the video is obtained, the correspondence table matching that resolution can be determined and used as the table in which to look up the level parameter. That is, the level parameter corresponding to the target level is looked up in the correspondence table for the video's resolution, and the enhancement processing method corresponding to that level parameter is obtained as the method corresponding to the target level.
  • For example, suppose the levels of video enhancement include a first level, a second level, and a third level, and the resolutions include a first resolution and a second resolution.
  • the correspondence table corresponding to the first resolution is shown in FIG. 8, and the three level parameters A1, B1, and C1 correspond to the enhancement processing methods a1, b1, and c1, respectively.
  • the correspondence table corresponding to the second resolution is shown in FIG. 9, and the three level parameters A2, B2, and C2 correspond to the enhancement processing modes a2, b2, and c2, respectively.
  • If the received target level is the first level and the resolution of the video is the first resolution, the enhancement processing method corresponding to the target level is obtained as a1; if the received target level is the first level and the resolution of the video is the second resolution, the method is obtained as a2.
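The per-resolution table lookup in this example can be sketched as follows (an illustrative Python sketch; the dictionary layout and the mapping from target level to level parameter are assumptions mirroring the placeholders of FIG. 8 and FIG. 9, not concrete algorithms):

```python
# Illustrative tables mirroring FIG. 8 and FIG. 9; the level parameters
# (A1, B1, ...) and enhancement modes (a1, b1, ...) are placeholders.
CORRESPONDENCE_TABLES = {
    "first":  {"A1": "a1", "B1": "b1", "C1": "c1"},   # first (low) resolution
    "second": {"A2": "a2", "B2": "b2", "C2": "c2"},   # second (high) resolution
}

# Assumed mapping from target level (1..3) to the level parameter used in
# each resolution's correspondence table.
LEVEL_PARAMS = {
    "first":  {1: "A1", 2: "B1", 3: "C1"},
    "second": {1: "A2", 2: "B2", 3: "C2"},
}

def enhancement_mode(target_level: int, resolution_class: str) -> str:
    # Pick the table matching the video's resolution, look up the level
    # parameter for the target level, then return its enhancement mode.
    param = LEVEL_PARAMS[resolution_class][target_level]
    return CORRESPONDENCE_TABLES[resolution_class][param]
```

With these tables, the first level yields mode a1 for a first-resolution video and a2 for a second-resolution video, matching the example.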
  • In the embodiments of this application, the image processing algorithms included in the enhancement processing method corresponding to each level are not limited.
  • The first resolution may be set to indicate low resolution and the second resolution to indicate high resolution, and each may represent more than one specific resolution.
  • For example, the first resolution may be the resolutions corresponding to standard-definition and smooth (low-bitrate) video, such as 240p, 360p, and 480p, where 240p denotes a minimum resolution of 480x240, 360p denotes 640x360, and 480p denotes 720x480.
  • the second resolution may be the resolution corresponding to high-definition video and ultra-high-definition video.
  • For example, the second resolution may be 720p or 1080p, where 720p indicates a minimum resolution of 1280x720 and 1080p indicates a minimum resolution of 1920x1080.
  • A video of the first resolution is noisy and displays blurrily.
  • When such a video is enlarged for display, the spacing between effective pixels grows, and the electronic device fills the space between effective pixels through interpolation. The interpolated pixels are computed from neighboring pixels rather than taken from real video information, so the displayed image carries more information than the video itself and therefore exhibits greater noise.
  • The interpolated pixels obtained through calculation also produce pixel blocks at edges, i.e., the blocking effect (mosaic), making the edges blurry and insufficiently sharp and forming edge noise. Therefore, in the correspondence table corresponding to the first resolution, the image processing algorithms included in the enhancement processing methods may emphasize deblocking, edge de-aliasing, and de-banding.
  • In addition, the enhancement processing method may further include an image processing algorithm that weakens details and one that reduces saturation, so as to reduce edge noise.
  • In the correspondence table corresponding to the first resolution, the enhancement processing method corresponding to each level may include one or more of the deblocking, edge de-aliasing, and de-banding image processing algorithms.
  • For example, the higher the level of video enhancement, the better the picture quality after enhancement.
  • For example, the enhancement processing method corresponding to the first level may include an image processing algorithm that increases brightness, one that reduces saturation, one that improves contrast, one that weakens details, a deblocking algorithm, an edge de-aliasing algorithm, and a de-banding algorithm; the method corresponding to the second level may include the algorithms for increasing brightness, reducing saturation, improving contrast, and weakening details, together with some of the deblocking, edge de-aliasing, and de-banding algorithms; and the method corresponding to the third level may include only the deblocking, edge de-aliasing, and de-banding algorithms.
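The level compositions in this example can be written down as a small table (a hedged Python sketch; the algorithm names are descriptive placeholders, and the exact bundling — in particular which artifact-removal algorithms the second level keeps — is implementation-defined):

```python
# One possible composition per level for the low-resolution table; the
# names and the choice of which algorithms the second level keeps are
# assumptions for illustration only.
LOW_RES_LEVEL_ALGORITHMS = {
    1: {"brightness_up", "saturation_down", "contrast_up", "detail_soften",
        "deblock", "dejag", "deband"},
    2: {"brightness_up", "saturation_down", "contrast_up", "detail_soften",
        "deblock", "dejag"},
    3: {"deblock", "dejag", "deband"},
}

def algorithms_for_level(level: int) -> set:
    # Return the image processing algorithms bundled into the enhancement
    # method for the given level (level 1 is the highest here).
    return LOW_RES_LEVEL_ALGORITHMS[level]
```

Representing each level as a set makes it easy to check, for example, that a lower level's algorithms are a subset of a higher level's.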
  • the image processing algorithms included in the enhancement processing method may focus on image processing algorithms to increase brightness, image processing algorithms to increase saturation, image processing algorithms to increase contrast, and details to enhance Image processing algorithms, etc.
  • the enhancement processing method corresponding to each level may include an image processing algorithm to increase brightness, an image processing algorithm to increase saturation, an image processing algorithm to increase contrast, and details One or more image processing algorithms in the enhanced image processing algorithm. For example, the higher the level of video enhancement, the better the quality of the enhancement processing.
  • The enhancement processing method corresponding to the first level may include an image processing algorithm that increases brightness and one that increases saturation; the method corresponding to the second level may include some of the deblocking, edge de-aliasing, and de-banding image processing algorithms.
  • Step S240 Perform enhancement processing on the video through the acquired enhancement processing method.
  • the image processing algorithm included in the enhancement processing mode corresponding to the target level processes the video to obtain the enhancement processing effect corresponding to the target level.
  • the enhancement processing mode corresponding to the target level is determined according to the correspondence between the level and the enhancement processing mode.
  • Moreover, for videos of different resolutions the selected processing methods differ, so that videos of different resolutions are processed differentially and better video processing effects are obtained.
  • the present application also provides an embodiment.
  • different levels of enhanced processing methods can be set by the user.
  • The image processing algorithms corresponding to different levels can be set by the user, so that the differentiated enhancement processing of the video comes closer to the user's needs.
  • the method provided by the embodiment of the present application includes:
  • Step S310 Receive an algorithm setting request, where the algorithm setting request includes the level of algorithm setting.
  • A setting entry can be provided for the user to set the enhancement processing methods, specifically to set the image processing algorithms corresponding to each level.
  • the user can enter the setting interface through the setting portal, and set the algorithm for each level.
  • The electronic device can receive the algorithm setting request submitted by the user for each level.
  • A user submits an algorithm setting request to configure the algorithms of a certain level, and the request carries the level parameter of that level.
  • When the algorithm setting request is received, the level for which algorithms are being set can be determined according to the level parameter.
  • Step S320 Display various image processing algorithms.
  • A variety of user-selectable image processing algorithms are displayed on the display screen. Since users understand an algorithm's effect better than the name of the algorithm itself, the algorithms may be displayed described by their processing effect. For example, the loop deblocking filtering algorithm is used to remove the blocking effect, but if it were displayed as "loop deblocking filtering" the user might not understand its purpose, so it can be displayed with a functional description such as "deblocking" or "mosaic removal".
  • The specific image processing algorithms to be displayed are not limited in the embodiments of this application. For example, image processing algorithms for increasing brightness, adjusting saturation, adjusting contrast, adjusting details, deblocking, removing edge aliasing, and removing banding may be displayed.
  • The image processing algorithms displayed in response to the algorithm setting requests of the various levels may be the same, or they may differ from level to level.
  • Step S330 Receive any one or more selected from the plurality of image processing algorithms.
  • Step S340 Use the selected image processing algorithm as the image processing algorithm included in the enhancement processing mode corresponding to the level in the algorithm setting request.
  • The user can choose among the displayed image processing algorithms according to actual processing requirements, and whether one or more are selected is not limited.
  • The user can submit the selection of the image processing algorithms through an action such as completion or confirmation.
  • The image processing algorithms selected by the user are used as the algorithms corresponding to the level being set, so that the user can configure the various levels differentially according to preference.
  • For example, when the user needs to set the image processing algorithms of the first level, the user can select the first level in the setting interface. On the setting interface of the first level, the image processing algorithms for improving brightness, adjusting saturation, adjusting contrast, adjusting details, deblocking, removing edge aliasing, and removing banding are displayed. If the user confirms the selection of the saturation-adjustment algorithm and the deblocking algorithm, the image processing algorithms included in the enhancement processing method corresponding to the first level are set to the saturation-adjustment algorithm and the deblocking algorithm.
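The setting flow of steps S310-S340 can be sketched as follows (an illustrative Python sketch; the class name, method names, and algorithm labels are assumptions, not identifiers from this application):

```python
class EnhancementSettings:
    """Minimal sketch of the per-level setting flow (steps S310-S340)."""

    def __init__(self):
        self.modes = {}  # level -> image processing algorithms chosen for it

    def handle_setting_request(self, level, displayed_algorithms, selected):
        # Steps S320/S330: the UI displays `displayed_algorithms` (by their
        # functional descriptions) and the user returns a `selected` subset,
        # one or more entries.
        chosen = [a for a in displayed_algorithms if a in set(selected)]
        # Step S340: the selection becomes the enhancement processing method
        # (i.e., the algorithm list) for the level carried in the request.
        self.modes[level] = chosen
        return chosen
```

In the first-level example above, selecting saturation adjustment and deblocking would store exactly those two algorithms as the first level's method.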
  • Step S350: Receive a target level selected from a plurality of different levels corresponding to video enhancement, where different levels of video enhancement correspond to enhancement processing methods that enhance the picture quality of the video differently.
  • Step S360 Obtain the enhanced processing mode corresponding to the target level.
  • Step S370 Perform enhancement processing on the video through the acquired enhancement processing method.
  • The enhancement processing method set by the user for the target level is obtained, so that enhancement at the target level achieves the enhancement effect the user desires and better satisfies the individual enhancement preferences of different users.
  • the present application also provides an embodiment.
  • a user may select an enhancement algorithm corresponding to a specific desired enhancement effect from image processing algorithms corresponding to a target level.
  • the method provided in this embodiment includes:
  • Step S410: Receive a target level selected from a plurality of different levels corresponding to video enhancement, where different levels of video enhancement correspond to enhancement processing methods that enhance the picture quality of the video differently.
  • Step S420 Display various image processing algorithms corresponding to the target level.
  • In this embodiment, a variety of image processing algorithms can be set for each level.
  • After the target level is received, the image processing algorithms corresponding to the target level can be displayed.
  • The user can then choose among these image processing algorithms; when displayed, they may likewise be shown with functional descriptions.
  • Step S430 Receive a selection of any one or more of the multiple image processing algorithms.
  • Step S440 Use the selected image processing algorithm as the image processing algorithm included in the enhancement processing mode corresponding to the target level.
  • In this embodiment, the image processing algorithms selected by the user are used as the algorithms included in the enhancement processing method corresponding to the target level; that is, the algorithms included in that method are those the user selected for the target level.
  • Step S450 Perform enhancement processing on the video through the acquired enhancement processing method.
  • That is, the video is processed through the enhancement processing method corresponding to the target level, and the algorithms that actually process the video are the image processing algorithms the user selected from the algorithms corresponding to the target level.
  • After selecting the level, the user can choose, from the image processing algorithms corresponding to the target level, those that meet the current video processing requirements. For example, when the battery of the electronic device is low, the user can choose fewer algorithms that meet basic enhancement requirements so as to reduce power consumption during enhancement; for a low-resolution video, the user may choose only the edge de-aliasing image processing algorithm.
  • The apparatus 500 includes: a level receiving module 510, configured to receive a target level selected from a plurality of different levels corresponding to video enhancement, where different levels of video enhancement correspond to enhancement processing methods that enhance the picture quality of the video differently.
  • the processing mode obtaining module 520 is used to obtain an enhanced processing mode corresponding to the target level.
  • the processing module 530 is configured to perform enhancement processing on the video through the acquired enhancement processing method, and the enhancement processing improves the image quality of the video frame of the video by adjusting the image parameters of the video.
  • The apparatus may further include a setting module for receiving an algorithm setting request, displaying multiple image processing algorithms, receiving a selection of any one or more of the multiple image processing algorithms, and using the selected algorithms as the image processing algorithms included in the enhancement processing method corresponding to the level in the algorithm setting request.
  • The processing method acquisition module 520 may include an algorithm display unit for displaying a plurality of image processing algorithms corresponding to the target level; a selection receiving unit for receiving a selection of any one or more of the plurality of image processing algorithms; and a processing method determination unit for using the selected algorithms as the image processing algorithms included in the enhancement processing method corresponding to the target level.
  • the electronic device may store a correspondence table between level parameters and enhanced processing methods, and different level parameters correspond to different levels.
  • The processing method obtaining module 520 may include a parameter search unit for looking up the level parameter corresponding to the target level in the correspondence table, and a method determining unit for determining the enhancement processing method corresponding to the found level parameter as the method corresponding to the target level.
  • The processing method obtaining module 520 may further include a resolution obtaining unit for obtaining the resolution of the video, and a relationship table determining unit for determining the correspondence table matching the resolution as the table in which to look up the level parameter.
  • Different levels of video enhancement correspond to enhancement processing methods that include different image processing algorithms, so that the higher the level of video enhancement, the better the corresponding method enhances the picture quality of the video.
  • The first level of video enhancement includes the image processing algorithms for increasing brightness, adjusting saturation, adjusting contrast, adjusting details, deblocking, removing edge aliasing, and removing banding;
  • the second level of video enhancement includes the image processing algorithms for increasing brightness, adjusting saturation, adjusting contrast, and adjusting details;
  • the third level of video enhancement includes the deblocking, edge de-aliasing, and de-banding image processing algorithms, where the third level, second level, and first level rise in order.
  • the coupling between the modules may be electrical, mechanical, or other forms of coupling.
  • each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above integrated modules may be implemented in the form of hardware or software function modules.
  • FIG. 13 shows a structural block diagram of an electronic device 600 provided by an embodiment of the present application.
  • the electronic device 600 may be an electronic device capable of video processing, such as a smart phone or a tablet computer.
  • the electronic device has one or more processors 610 (only one shown in the figure), a memory 620, and one or more programs.
  • the one or more programs are stored in the memory 620, and are configured to be executed by the one or more processors 610.
  • the one or more programs are configured to perform the method described in the foregoing embodiments.
  • the processor 610 may include one or more processing cores.
  • The processor 610 uses various interfaces and lines to connect the parts of the entire electronic device 600, and performs the various functions of the device and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 620 and calling the data stored in the memory 620.
  • The processor 610 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
  • The processor 610 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
  • The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may not be integrated into the processor 610 and may instead be implemented by a communication chip alone.
  • The memory 620 may include random access memory (RAM) or read-only memory (ROM).
  • the memory 620 may be used to store instructions, programs, codes, code sets, or instruction sets.
  • the memory 620 may include a storage program area and a storage data area, where the storage program area may store instructions for implementing an operating system, instructions for implementing at least one function, instructions for implementing various method embodiments described above, and the like.
  • The storage data area can also store data created by the electronic device in use (such as a phonebook, audio and video data, and chat history data).
  • the electronic device 600 may further include a display screen for displaying the video to be displayed.
  • FIG. 14 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the computer readable storage medium 700 stores program code, and the program code can be called by a processor to execute the method described in the above method embodiments.
  • The computer-readable storage medium 700 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM.
  • the computer-readable storage medium 700 includes a non-transitory computer-readable storage medium.
  • The computer-readable storage medium 700 has storage space for the program code 710 that performs any of the method steps described above. The program code can be read from or written into one or more computer program products.
  • The program code 710 may be compressed, for example, in an appropriate form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

This application discloses a video processing method, apparatus, electronic device, and storage medium, relating to the technical field of electronic devices. The method includes: receiving a target level selected from a plurality of different levels corresponding to video enhancement, where different levels of video enhancement correspond to enhancement processing methods that enhance the picture quality of the video differently; obtaining the enhancement processing method corresponding to the target level; and performing enhancement processing on the video through the obtained method. In this scheme, different levels of video enhancement use different processing methods with different picture-quality enhancement effects, achieving differentiated processing of the video, good processing results, and effective realization of different levels of ultra-clear visual effects.

Description

Video processing method and apparatus, electronic device, and storage medium
This application claims priority to Chinese patent application No. CN201811427973.8, filed on November 27, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of electronic devices, and more specifically, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of science and technology, electronic devices have become one of the most commonly used electronic products in daily life, and users often watch videos or play games on them. However, current electronic devices process video data in a fixed way, resulting in unsatisfactory processing effects and a poor user experience.
Summary
In view of the above problems, this application proposes a video processing method and apparatus, an electronic device, and a storage medium to improve the above problems.
In a first aspect, an embodiment of this application provides a video processing method, including: receiving a target level selected from a plurality of different levels corresponding to video enhancement, where different levels of video enhancement correspond to enhancement processing methods that enhance the picture quality of the video differently; obtaining the enhancement processing method corresponding to the target level; and performing enhancement processing on the video through the obtained method, where the enhancement processing improves the picture quality of the video frames by adjusting image parameters of the video.
In a second aspect, an embodiment of this application provides a video processing apparatus, including: a level receiving module, configured to receive a target level selected from a plurality of different levels corresponding to video enhancement, where different levels of video enhancement correspond to enhancement processing methods that enhance the picture quality of the video differently; a processing method obtaining module, configured to obtain the enhancement processing method corresponding to the target level; and a processing module, configured to perform enhancement processing on the video through the obtained method, where the enhancement processing improves the picture quality of the video frames by adjusting image parameters of the video.
In a third aspect, an embodiment of this application provides an electronic device, including: one or more processors; a memory; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the above method.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing program code, where the program code can be called by a processor to perform the above method.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 shows a schematic flowchart of video playback provided by an embodiment of this application.
FIG. 2 shows a flowchart of a video processing method provided by an embodiment of this application.
FIG. 3 shows a display interface for level selection provided by an embodiment of this application.
FIG. 4 shows another display interface for level selection provided by an embodiment of this application.
FIG. 5 shows yet another display interface for level selection provided by an embodiment of this application.
FIG. 6 shows a flowchart of a video processing method provided by another embodiment of this application.
FIG. 7 shows a correspondence table provided by an embodiment of this application.
FIG. 8 shows another correspondence table provided by an embodiment of this application.
FIG. 9 shows yet another correspondence table provided by an embodiment of this application.
FIG. 10 shows a flowchart of a video processing method provided by yet another embodiment of this application.
FIG. 11 shows a flowchart of a video processing method provided by still another embodiment of this application.
FIG. 12 shows a functional module diagram of a video processing apparatus provided by an embodiment of this application.
FIG. 13 shows a structural block diagram of an electronic device provided by an embodiment of this application.
FIG. 14 is a storage unit for saving or carrying program code implementing the video processing method according to an embodiment of this application.
Detailed Description
To help those skilled in the art better understand the solutions of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 shows the flow of video playback. Specifically, when the operating system obtains data to be played, the next task is to parse the audio and video data. A typical video file consists of a video stream and an audio stream, and different video formats use different container formats for them. The process of combining the audio stream and video stream into a file is called muxing (muxer); conversely, the process of separating the audio stream and video stream from a media file is called demuxing (demuxer). Playing a video file requires separating the audio stream and video stream from the file stream and decoding each; the decoded video frames can be rendered directly, and the corresponding audio can be sent to the buffer of the audio output device for playback. Of course, the timestamps of video rendering and audio playback need to be kept synchronized. Here, each video frame is one frame of image of the video.
Specifically, video decoding can include hard decoding and soft decoding. Hardware decoding hands part of the video data, which would otherwise be processed entirely by the central processing unit (CPU), over to the graphics processing unit (GPU), whose parallel computing capability is far higher than that of the CPU. This greatly reduces the load on the CPU, and once CPU usage drops, some other programs can run at the same time. Of course, a better processor, such as an i5 2320 or any AMD quad-core processor, can perform both hard decoding and soft decoding.
Specifically, as shown in FIG. 1, the multimedia framework (Media Framework) obtains the video file to be played by the client through an API interface with the client and hands it to the video codec (Video Decode). The Media Framework is the multimedia framework of the Android system; MediaPlayer, MediaPlayerService, and Stagefrightplayer constitute the basic framework of Android multimedia. The multimedia framework adopts a client/server (C/S) structure: MediaPlayer serves as the client, while MediaPlayerService and Stagefrightplayer serve as the server, which bears the responsibility of playing multimedia files. Through Stagefrightplayer, the server fulfills the client's requests and responds. Video Decode is a super decoder that integrates the most commonly used audio and video decoding and playback functions and is used to decode the video data.
Soft decoding means having the CPU decode the video through software. Hard decoding means completing the video decoding task independently through a dedicated daughter-card device, without relying on the CPU.
Whether hard or soft decoding is used, after the video data is decoded, the decoded video data is sent to the layer transfer module (SurfaceFlinger); as shown in FIG. 1, hard-decoded video data is sent to SurfaceFlinger via the video driver. SurfaceFlinger renders and composites the decoded video data and then displays it on the display screen. SurfaceFlinger is an independent service that receives the Surfaces of all Windows as input; based on parameters such as ZOrder, transparency, size, and position, it computes the position of each Surface in the final composited image, then hands the result to HWComposer or OpenGL to generate the final display buffer, which is displayed on the specific display device.
As shown in FIG. 1, in soft decoding the CPU decodes the video data and hands it to SurfaceFlinger for rendering and compositing, whereas in hard decoding the GPU decodes it and hands it to SurfaceFlinger. SurfaceFlinger then calls the GPU to render and composite the image and display it on the screen.
To achieve a good display effect, the video can be enhanced. The enhancement can be performed after decoding, followed by rendering and compositing and then display on the screen. Enhancement improves the picture quality of the video frames by adjusting their image parameters, improving the display effect of the video and providing a better viewing experience. The picture quality of a video frame can include parameters such as clarity, sharpness, saturation, detail, lens distortion, color, resolution, color gamut, and purity; adjusting the various quality-related parameters makes the image better match the viewing preferences of the human eye, giving the user a better viewing experience. For example, the higher the clarity, the lower the noise, the clearer the details, and the higher the saturation of the video, the better its picture quality and the viewing experience. Adjusting different combinations of quality parameters represents different enhancement processing methods for the video; each method includes corresponding image processing algorithms used to process the video frames so as to adjust their image parameters and improve their picture quality.
However, the inventors found through research that display enhancement of video usually offers only two options, yes and no; that is, the video is either enhanced or not, without differentiated enhancement configurable according to the user's actual needs. In other words, there is no choice among enhancement effects, and the user experience is poor.
The video processing method and apparatus, electronic device, and storage medium provided by the embodiments of this application are described in detail below through specific embodiments.
Referring to FIG. 2, a video processing method provided by an embodiment of this application is shown. The method is used to obtain, upon receiving a target level selected from a plurality of different levels, the enhancement processing method corresponding to that target level. Different levels correspond to different enhancement processing methods, so the user can select the appropriate method according to actual needs, achieving differentiated video enhancement and improving the user experience. In a specific embodiment, the method is applied to the video processing apparatus 500 shown in FIG. 12 and to the electronic device 600 (FIG. 13) configured with the apparatus 500. The following takes an electronic device as an example to describe the specific flow of this embodiment. It can be understood that the electronic device to which this embodiment applies may be a smartphone, tablet computer, desktop computer, wearable electronic device, in-vehicle device, gateway, or any other device capable of video processing, which is not specifically limited here. Specifically, the method includes:
Step S110: Receive a target level selected from a plurality of different levels corresponding to video enhancement, where different levels of video enhancement correspond to enhancement processing methods that enhance the picture quality of the video differently.
The electronic device can display the video after decoding, enhancing, rendering, and compositing the obtained video data. The electronic device may obtain the video data from a server, from local storage, or from another electronic device.
Specifically, when the video data is obtained by the electronic device from a server, it may be downloaded from the server or obtained online; for example, it may be downloaded through installed video playback software or obtained online within that software, and the server may be a cloud server. When the video data is obtained locally, it may have been downloaded in advance by the electronic device and stored in local memory. When the video data is obtained from another electronic device, it may be transmitted to the electronic device via a wireless communication protocol, such as a WLAN, Bluetooth, ZigBee, or WiFi protocol, or via a data network such as a 2G, 3G, or 4G network, which is not limited here.
After obtaining the video data, the electronic device decodes, renders, and composites it, and then plays it on the display. If a control instruction related to video enhancement is received, the video data is enhanced and the enhanced video is played.
Multiple different levels can be set for video enhancement, and different levels of video enhancement correspond to different enhancement processing methods; accordingly, the methods corresponding to different levels enhance the picture quality of the video differently. In the embodiments of this application, the number of levels is not limited: only two levels may be set, or three or more.
The user can select one of the set levels to enhance the video; that is, the user selects a level so that the video is enhanced through the method corresponding to that level, achieving the picture-quality enhancement effect corresponding to that level. In the embodiments of this application, the level selected for enhancement processing is defined as the target level.
As one implementation, the target level selected from the multiple levels corresponding to video enhancement may be determined by the electronic device when the video is opened. For example, if the default setting of the video playback application is to enable a certain level of enhancement, that level is taken as the selected target level when the video is opened. As another example, if a certain level of enhancement was enabled when the application was last closed, that level is taken as the target level when a video in the application is opened again; or, if a certain level was enabled when the video itself was last closed, that level is taken as the selected target level when the video is opened again.
As one implementation, the target level may also be determined by a user selection received during video playback. The electronic device may receive a selection instruction from the user and take the enhancement level corresponding to the instruction as the target level. Taking two levels, a high level and a low level, as an example: as shown in FIG. 3, a high-level enhancement switch and a low-level enhancement switch are set, corresponding to high-level and low-level enhancement respectively. Among the switches shown in FIG. 3, if an operation turning on the low-level enhancement switch is received, as shown in FIG. 4, that operation is the user's selection instruction, and the enhancement level corresponding to it is the low level, so the received target level selected from the multiple levels is the low level. As shown in FIG. 5, a switch that toggles between the high and low levels may also be provided; if the user toggles it to low-level enhancement as shown in FIG. 5, the toggling operation is the selection instruction, its corresponding level is the low level, and the received target level is the low level. During video playback, the enhancement switch may be hidden; when a touch such as a tap on the video is received, the switch is displayed and becomes controllable, and when no touch operation is received for a period of time, the switch is hidden again.
In the embodiments of this application, the target level may also be selected in other ways. The battery level of the electronic device may be obtained, and the target level selected according to the relationship between the battery level and a target battery level. For example, among the multiple levels there is a level with low power consumption during enhancement, whose consumption during enhancement is lower than that of the other levels. As one implementation, while the video is being enhanced, the battery level of the device may be obtained and it may be determined whether it is below the target battery level, where the current enhancement corresponds to a level with high power consumption. If the battery level is determined to be below the target battery level, the low-power level is taken as the selected target level. The specific value of the target battery level is not limited in the embodiments of this application; it may be, for example, thirty or twenty percent of the device's total capacity, and it may also be set by the user and stored in the device.
Whether this implementation is enabled on the electronic device may be determined by user settings. Specifically, the user can set whether to switch to low-power enhancement at low battery. If the user enables low-power enhancement at low battery, then while the video is being enhanced, the battery level of the device is obtained and compared with the target battery level; when the battery level is below the target battery level, the low-power level is taken as the selected target level.
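The low-battery fallback described above can be sketched as follows (a minimal Python illustration; the function name, the 30% default threshold, and the level labels are assumptions, since the application leaves the exact target battery level open):

```python
def pick_target_level(current_level, battery_percent, low_power_level,
                      threshold_percent=30, low_power_switch_enabled=True):
    # If the user enabled low-battery switching and the remaining battery is
    # below the target threshold, fall back to the low-power enhancement
    # level; otherwise keep the currently selected level.
    if low_power_switch_enabled and battery_percent < threshold_percent:
        return low_power_level
    return current_level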
Step S120: Obtain the enhancement processing method corresponding to the target level.
Different levels of video enhancement correspond to different enhancement processing methods. Once the target level is determined, the method corresponding to the target level can be obtained according to the correspondence between levels and enhancement processing methods.
Step S130: Perform enhancement processing on the video through the obtained enhancement processing method, where the enhancement processing improves the picture quality of the video frames by adjusting image parameters of the video.
Each enhancement processing method includes the image processing algorithms needed to achieve the corresponding enhancement effect. Once the method corresponding to the target level is determined, the video is enhanced through the image processing algorithms that the method includes; these algorithms adjust the image parameters of the video frames and thus the parameters related to picture quality, improving the picture quality of the video.
Since a video consists of successive video frames, each of which is an image, the enhancement that the various methods apply to the video is in fact an enhancement of the picture quality of the video frames. Specifically, enhancing the video through an enhancement processing method means performing image processing on the video data of each frame through the image processing algorithms that the method includes.
In the embodiments of this application, multiple different levels can be set for the enhancement processing of video. When a certain level is received as the selected target level, the video is enhanced through the method corresponding to that level. In this embodiment, the choice of enhancement processing method can be varied, and the level of video enhancement is selected according to need, achieving differentiated video enhancement and improving the user experience of video enhancement.
Another embodiment of this application provides a video processing method in which the enhancement processing method corresponding to the target level is selected according to the correspondence between levels and enhancement processing methods. Specifically, referring to FIG. 6, the method includes:
Step S210: Receive a target level selected from a plurality of different levels corresponding to video enhancement, where different levels of video enhancement correspond to enhancement processing methods that enhance the picture quality of the video differently.
For this step, refer to step S110; details are not repeated here.
Step S220: Look up the level parameter corresponding to the target level in the correspondence table between different levels and enhancement processing methods.
Step S230: Determine the enhancement processing method corresponding to the found level parameter as the enhancement processing method corresponding to the target level.
In the embodiments of this application, different levels of video enhancement have different processing effects and produce different picture quality. Specifically, each enhancement processing method includes one or more image processing algorithms to achieve the corresponding processing effect.
Optionally, the different levels of video enhancement may be ranked from low to high: the higher the level, the better the enhancement effect, so the user can choose a better or a moderate enhancement by choosing a higher or lower level. Specifically, different levels may correspond to methods that include different image processing algorithms, so that the higher the level, the better the corresponding method enhances the picture quality of the video. Better picture quality may include higher clarity, lower noise, clearer details, higher saturation, and so on; the better the picture quality of the video, the better the viewing experience.
In the embodiments of this application, the higher the level of video enhancement, the more kinds of image processing algorithms with different processing purposes it may include. For example, if the levels of video enhancement include a third level, a second level, and a first level in ascending order, then from the third level to the first level the kinds of algorithms included increase in turn. Commonly used image processing algorithms for enhancing picture quality include algorithms for increasing brightness, adjusting saturation, adjusting contrast, adjusting details, deblocking, removing edge aliasing, and removing banding. The first level of video enhancement may include the algorithms for increasing brightness, adjusting saturation, adjusting contrast, adjusting details, deblocking, removing edge aliasing, and removing banding. The second level includes the algorithms for increasing brightness, adjusting saturation, adjusting contrast, and adjusting details. The third level includes the deblocking, edge de-aliasing, and de-banding algorithms.
Alternatively, the methods corresponding to different levels may differ in the parameter settings of their image processing algorithms, so that the higher the level, the better the enhanced picture quality; or the higher the level, the higher the precision of the algorithms included in its method.
In the embodiments of this application, the enhancement processing method corresponding to the target level can be obtained through a correspondence table between levels and enhancement processing methods.
Specifically, each level may correspond to a level parameter; that is, different level parameters correspond to different levels, and each level parameter denotes one video enhancement level. The electronic device may store a correspondence table between level parameters and enhancement processing methods. This table may be downloaded together with the video application or with a video-enhancement plug-in, downloaded or updated when the device's system is updated, pushed to the device when the server has a new table, requested periodically by the device from the server, or obtained from the server when the device needs to use the table, for example when it needs to look up the enhancement processing method corresponding to the target level. How and when the table is obtained is not limited in the embodiments of this application.
In the correspondence table, each level parameter corresponds to an enhancement processing method, namely the method used to perform enhancement for the level denoted by that parameter. In the table, the enhancement processing methods may also be represented by mode parameters.
When a level is determined to be the target level, the level parameter corresponding to that target level is obtained; the parameter is then looked up in the correspondence table, and the enhancement processing method corresponding to the found parameter is determined to be the method corresponding to the target level.
例如,视频增强的等级包括第一等级、第二等级以及第三等级,对应关系表如图7所示,三个等级参数A、B、C分别对应增强处理方式a、b、c。其中,三个等级参数A、B、C分别表示第一等级、第二等级以及第三等级,其 中,等级参数A表示第一等级,等级参数B表示第二等级,等级参数C表示第三等级。a、b以及c分别表示不同的增强处理方式。则若接收到目标等级为第一等级,则可以从对应关系表中查找等级参数A。当查找到等级参数A,再查找该对应关系表中A对应的增强处理方式为a,从而可以获取到选择的目标等级对应的增强处理方式为a。
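The FIG. 7 lookup described above can be sketched as two dictionary lookups. Representing the correspondence table as Python dicts is an assumption for illustration; the patent only specifies a stored correspondence table.

```python
# Level parameters A, B, C denote the first, second, and third levels and map
# to enhancement processing modes a, b, c (per the FIG. 7 example).
LEVEL_TO_PARAM = {"first": "A", "second": "B", "third": "C"}
PARAM_TO_MODE = {"A": "a", "B": "b", "C": "c"}

def mode_for_level(target_level):
    param = LEVEL_TO_PARAM[target_level]  # step S220: look up the level parameter
    return PARAM_TO_MODE[param]           # step S230: mode for that parameter

print(mode_for_level("first"))
# a
```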
After the enhancement processing mode corresponding to the target level is determined, the image processing algorithms corresponding to that mode can be determined. Optionally, each enhancement processing mode may correspond to a mode parameter, the mode parameter may correspond to one or more algorithm identity parameters, and each algorithm identity parameter denotes one image processing algorithm. Then, after the mode is determined, the mode parameter corresponding to the mode can be obtained; the corresponding algorithm identity parameters are obtained through the mode parameter; and since each algorithm identity parameter denotes one image processing algorithm, the algorithms corresponding to the mode can be determined. The video can then be enhanced with the algorithms denoted by the obtained algorithm identity parameters, i.e. the algorithms corresponding to the mode are invoked to enhance the video. Optionally, the image processing algorithms corresponding to each enhancement processing mode may also be packaged as a program module; after the mode is determined, the program module corresponding to the mode is invoked.
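The mode-parameter indirection above (mode parameter, to algorithm identity parameters, to algorithms) might be sketched as a small registry. All identifiers and the placeholder algorithms below are invented for illustration.

```python
# Each algorithm identity parameter names one image processing algorithm; a
# mode parameter maps to the identity parameters of the algorithms it invokes.
ALGORITHM_REGISTRY = {
    "deblock":   lambda frame: frame,  # placeholder deblocking algorithm
    "debanding": lambda frame: frame,  # placeholder debanding algorithm
}
MODE_TO_ALGORITHM_IDS = {"c": ["deblock", "debanding"]}

def algorithms_for_mode(mode_param):
    ids = MODE_TO_ALGORITHM_IDS[mode_param]
    return [ALGORITHM_REGISTRY[algo_id] for algo_id in ids]

print(len(algorithms_for_mode("c")))
# 2
```

A packaged program module, as the paragraph's alternative suggests, would simply replace the list of callables with one composite callable per mode.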
Videos of different resolutions have different characteristics. For example, standard-definition and low-bitrate "smooth" video has low resolution, severe noise, blurry frames, and unclear edges with edge noise, while high-definition and other high-resolution video has little noise and clear images. Because the frames of high-resolution and low-resolution videos have different image characteristics, applying the same enhancement processing mode at the same enhancement level to videos of different resolutions may produce completely different quality after processing, and the enhancement may be unsatisfactory. For example, sharpening can make a high-definition image clearer, but sharpening conflicts with denoising: sharpening a low-resolution video may instead amplify its edge noise. Therefore, to achieve the same or similar enhanced quality for videos of different resolutions, different image processing algorithms are needed; correspondingly, videos of different resolutions need different enhancement processing modes.
Optionally, in the embodiments of the present application, different correspondence tables can be set for different video resolutions according to their characteristics. In different correspondence tables, the same level parameter corresponds to different enhancement processing modes. That is, for the same enhancement level, different enhancement processing modes may be selected for videos of different resolutions.
Specifically, the electronic device can obtain the resolution of the video from the video data. The resolution of a video is a parameter measuring the amount of data in a video frame; it can be expressed in the form W*H, where W is the number of effective pixels of the frame in the horizontal direction and H is the number of effective pixels in the vertical direction.
The resolution may be obtained as follows: the electronic device decodes the video to obtain decoded video data that includes the resolution of the video, and extracts the resolution from the video data. Specifically, the decoded video data contains a data section that stores the resolution, which may be a segment of data. Therefore, the data section corresponding to the resolution can be obtained from the decoded data, and the resolution of the video can then be obtained from that section.
For example, H.264, also known as MPEG-4 Part 10, is a highly compressed digital video codec standard proposed by the Joint Video Team (JVT), formed jointly by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). For a stream encoded with H.264, the stream information includes the resolution of the video, and the stream information is stored in a special structure called the SPS (Sequence Parameter Set); the SPS is the data section in the decoded data that corresponds to the resolution. According to the format of an H.264 stream, units begin with the start code 0x00 0x00 0x01 or 0x00 0x00 0x00 0x01, so whether a unit is an SPS can be determined by checking whether the low five bits of the first byte after the start code equal 7 (00111). Once the SPS is obtained, the resolution can be parsed from it. The SPS contains two members, pic_width_in_mbs_minus1 and pic_height_in_map_units_minus_1, which represent the width and height of the image in units of 16 (16*16 blocks in area), each minus 1. The actual width is therefore (pic_width_in_mbs_minus1+1)*16 and the height is (pic_height_in_map_units_minus_1+1)*16; that is, W above is (pic_width_in_mbs_minus1+1)*16 and H above is (pic_height_in_map_units_minus_1+1)*16.
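A simplified sketch of the SPS check and the width/height arithmetic described above. A real parser must locate the start code, strip emulation-prevention bytes, and Exp-Golomb-decode the SPS; here the two syntax elements are assumed to be already extracted, and frame-cropping offsets (which a full parser subtracts) are ignored.

```python
def is_sps(nal_header_byte):
    # low five bits of the first byte after the start code equal 7 (00111)
    return (nal_header_byte & 0x1F) == 7

def h264_resolution(pic_width_in_mbs_minus1, pic_height_in_map_units_minus_1):
    # both SPS members count 16x16 macroblocks, minus one
    width = (pic_width_in_mbs_minus1 + 1) * 16
    height = (pic_height_in_map_units_minus_1 + 1) * 16
    return width, height

print(is_sps(0x67))              # True: 0x67 & 0x1F == 7, a typical SPS header byte
print(h264_resolution(119, 67))  # (1920, 1088): a 1080p stream, cropped to 1080 rows
```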
After the resolution of the video is obtained, the correspondence table corresponding to that resolution can be determined and used as the table in which the level parameter is looked up. That is, the level parameter corresponding to the target level is looked up in the correspondence table for the video's resolution, and the enhancement processing mode corresponding to that level parameter is obtained as the mode corresponding to the target level.
For example, suppose the levels of video enhancement include a first level, a second level, and a third level, and the resolutions include a first resolution and a second resolution. The correspondence table for the first resolution is as shown in FIG. 8: three level parameters A1, B1, and C1 correspond to enhancement processing modes a1, b1, and c1 respectively. The correspondence table for the second resolution is as shown in FIG. 9: three level parameters A2, B2, and C2 correspond to modes a2, b2, and c2 respectively. If the received target level is the first level and the video's resolution is the first resolution, the mode corresponding to the target level is a1; if the target level is the first level and the resolution is the second resolution, the mode corresponding to the target level is a2.
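The per-resolution correspondence tables of FIG. 8 and FIG. 9 can be sketched as a nested mapping; the table contents and the resolution labels are illustrative assumptions.

```python
# Each resolution has its own correspondence table, so the same target level
# can yield a different enhancement processing mode.
TABLES = {
    "first_resolution":  {"first": ("A1", "a1"), "second": ("B1", "b1"), "third": ("C1", "c1")},
    "second_resolution": {"first": ("A2", "a2"), "second": ("B2", "b2"), "third": ("C2", "c2")},
}

def mode_for(resolution, target_level):
    # pick the table for this resolution, then resolve the level parameter
    level_param, mode = TABLES[resolution][target_level]
    return mode

print(mode_for("first_resolution", "first"), mode_for("second_resolution", "first"))
# a1 a2
```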
In the embodiments of the present application, the image processing algorithms included in the mode corresponding to each level in the different correspondence tables for different resolutions are not limited.
For example, the first resolution may denote low resolutions and the second resolution high resolutions, and each may cover more than one resolution. For instance, the first resolution may be the resolutions of standard-definition and "smooth" video, such as 240p, 360p, and 480p, where 240p means a minimum resolution of 480x240, 360p a minimum of 640x360, and 480p a minimum of 720x480. The second resolution may be the resolutions of high-definition and ultra-high-definition video, such as 720p and 1080p, where 720p means a minimum resolution of 1280x720 and 1080p a minimum of 1920x1080, and so on.
A low-resolution video is noisy and appears blurry. Specifically, a low-resolution video has few effective pixels; when it is enlarged, the distance between effective pixels grows, and the electronic device fills the space between them by interpolation. The interpolated pixels are computed from the effective pixels around them and are not true video information, so the displayed image contains much information that is not from the video itself, i.e. considerable noise. In particular, at the edges of the image in a frame, the computed interpolated pixels produce pixel blocks at the edges, i.e. blocking artifacts (mosaic), making the edges blurry and unclear and creating edge noise. Therefore, in the correspondence table for the first resolution, the image processing algorithms included in the enhancement processing modes may emphasize deblocking, edge anti-aliasing, debanding, and the like.
In addition, optionally, since the frames of a low-resolution video have strong edge noise, in the correspondence table for the first resolution the enhancement processing modes may further include a detail-softening image processing algorithm and a saturation-reducing image processing algorithm to reduce edge noise.
For example, in the correspondence table for the first resolution, the enhancement processing mode of every level may include one or more of the deblocking, edge anti-aliasing, and debanding image processing algorithms. If a higher enhancement level means better enhanced quality, then in the first-resolution table the mode of the first level may include the algorithms for increasing brightness, decreasing saturation, increasing contrast, softening detail, deblocking, edge anti-aliasing, and debanding; the mode of the second level may include some of the brightness-increasing, saturation-decreasing, contrast-increasing, and detail-softening algorithms, plus the deblocking, edge anti-aliasing, and debanding algorithms; and the mode of the third level may include only the deblocking, edge anti-aliasing, and debanding algorithms.
For a high-resolution video, with its many effective pixels, what is displayed on the electronic device's screen is mostly the image information of the video itself; the video is clear and noise is low. Therefore, in the correspondence table for the second resolution, the image processing algorithms included in the enhancement processing modes may emphasize increasing brightness, increasing saturation, increasing contrast, detail enhancement, and the like. For example, in the second-resolution table, the mode of every level may include one or more of these algorithms. If a higher enhancement level means better enhanced quality, then in the second-resolution table the mode of the first level may include the algorithms for increasing brightness, increasing saturation, increasing contrast, detail enhancement, deblocking, edge anti-aliasing, and debanding; the mode of the second level may include some of the deblocking, edge anti-aliasing, and debanding algorithms, plus the brightness-increasing, saturation-increasing, contrast-increasing, and detail-enhancement algorithms; and the mode of the third level may include only some or all of the brightness-increasing, saturation-increasing, contrast-increasing, and detail-enhancement algorithms.
Step S240: Perform enhancement processing on the video using the obtained enhancement processing mode.
The video is processed with the image processing algorithms included in the mode corresponding to the target level, obtaining the enhancement effect corresponding to the target level.
In this embodiment of the present application, the enhancement processing mode corresponding to the target level is determined from the correspondence between levels and modes. For videos of different resolutions, different processing modes are selected at the same level, achieving differentiated processing of videos of different resolutions and obtaining better video processing results.
The present application further provides an embodiment in which the enhancement processing modes of the different levels can be configured by the user. That is, the image processing algorithms corresponding to each level can be set by the user, so that the differentiated enhancement better matches the user's needs. Specifically, referring to FIG. 10, the method provided by this embodiment includes:
Step S310: Receive an algorithm setting request, the algorithm setting request including the level for which algorithms are to be set.
For each video enhancement level, a settings entry can be provided for the user to configure the enhancement processing mode, specifically, to set the image processing algorithms corresponding to each level.
The user can enter a settings interface through the settings entry and configure the algorithms for each level. Correspondingly, algorithm setting requests submitted by the user for the levels can be received. For example, the user submits an algorithm setting request for a certain level, and the request carries the level parameter of that level. When the request is received, the level the user wants to configure can be determined from the level parameter.
Step S320: Display a plurality of image processing algorithms.
A plurality of selectable image processing algorithms are displayed on the display screen. Since users perceive effects more readily than algorithm names, the algorithms can be displayed described by their processing effects. For example, blocking artifacts are removed by an in-loop deblocking filter, but if the displayed algorithm name were "in-loop deblocking filter", the user might not understand its purpose; a functional description can be shown instead, such as "remove blocking artifacts" or "remove mosaic".
Which algorithms are displayed is not limited in the embodiments of the present application; for example, algorithms for increasing brightness, adjusting saturation, adjusting contrast, adjusting detail, deblocking, edge anti-aliasing, and debanding may be displayed.
The image processing algorithms displayed in response to the algorithm setting requests of different levels may be the same. Optionally, to reflect the differences between levels, the setting requests of different levels may also display different image processing algorithms.
Step S330: Receive a selection of any one or more of the plurality of image processing algorithms.
Step S340: Take the selected image processing algorithms as the image processing algorithms included in the enhancement processing mode corresponding to the level in the algorithm setting request.
The user can select from the displayed image processing algorithms according to actual processing needs; whether one or several are selected is not restricted. When the selection is complete, it can be submitted through a button that confirms the selection, such as "Done" or "OK". When the user's selection of any one or more of the algorithms is received, the selected algorithms are taken as the algorithms corresponding to the level being configured, so that the user can differentiate the levels according to personal preference.
For example, if the user wants to configure the enhancement processing mode of the first level, the first level can be selected in the settings interface. The settings interface of the first level displays the algorithms for increasing brightness, adjusting saturation, adjusting contrast, adjusting detail, deblocking, edge anti-aliasing, and debanding. If the user selects the saturation adjustment algorithm and the deblocking algorithm, the image processing algorithms included in the enhancement processing mode of the first level are set to the saturation adjustment algorithm and the deblocking algorithm.
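Steps S310 to S340 amount to storing a per-level algorithm selection. The following sketch uses invented functional display names (effect descriptions rather than algorithm names, as suggested above); the data layout is an assumption.

```python
# Functional descriptions shown to the user, per the display suggestion above.
DISPLAYED_ALGORITHMS = ["increase brightness", "adjust saturation",
                        "adjust contrast", "adjust detail", "remove blocking",
                        "remove edge aliasing", "remove banding"]
level_settings = {}

def set_level_algorithms(level, selected):
    # keep only algorithms that were actually displayed for selection
    chosen = [name for name in selected if name in DISPLAYED_ALGORITHMS]
    level_settings[level] = chosen
    return chosen

print(set_level_algorithms("first", ["adjust saturation", "remove blocking"]))
# ['adjust saturation', 'remove blocking']
```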
Step S350: Receive a target level selected from multiple different levels of video enhancement, where different levels of video enhancement correspond to enhancement processing modes that produce different enhanced image quality.
Step S360: Obtain the enhancement processing mode corresponding to the target level.
Step S370: Perform enhancement processing on the video using the obtained enhancement processing mode.
When the enhancement processing mode corresponding to the target level is obtained, what is obtained is the mode the user configured for that target level, so the enhancement performed for the target level achieves the effect the user wants, better satisfying the individual enhancement preferences of different users.
The present application further provides an embodiment in which the user can select, from the image processing algorithms corresponding to the target level, the enhancement algorithms for the specific desired effects. Specifically, referring to FIG. 11, the method provided by this embodiment includes:
Step S410: Receive a target level selected from multiple different levels of video enhancement, where different levels of video enhancement correspond to enhancement processing modes that produce different enhanced image quality.
Step S420: Display the multiple image processing algorithms corresponding to the target level.
Multiple image processing algorithms can be set for each level. When the target level selected from the different levels is received, the image processing algorithms corresponding to the target level can be displayed, and the user can choose among them. They may also be displayed by functional description.
Step S430: Receive a selection of any one or more of the multiple image processing algorithms.
Step S440: Take the selected image processing algorithms as the image processing algorithms included in the enhancement processing mode corresponding to the target level.
When the user selects one or more of the multiple image processing algorithms corresponding to the target level, the selected algorithms are taken as the algorithms included in the enhancement processing mode corresponding to the target level. That is, the image processing algorithms included in the target level's mode are the algorithms the user selected for the target level.
Step S450: Perform enhancement processing on the video using the obtained enhancement processing mode.
The video is processed with the enhancement processing mode corresponding to the target level; the specific algorithms applied to the video are those the user selected from the multiple image processing algorithms corresponding to the target level.
In this embodiment of the present application, after selecting a level, the user can further choose among the image processing algorithms corresponding to that target level, selecting the algorithms that fit the current video processing needs. For example, when the electronic device's battery is low, the user can select fewer algorithms to meet basic enhancement needs while reducing the power consumed by enhancement; for a low-resolution video, for instance, the user may select only the edge anti-aliasing algorithm.
An embodiment of the present application further provides a video processing apparatus. Referring to FIG. 12, the apparatus 500 includes: a level receiving module 510, configured to receive a target level selected from multiple different levels of video enhancement, where different levels of video enhancement correspond to enhancement processing modes that produce different enhanced image quality; a processing mode obtaining module 520, configured to obtain the enhancement processing mode corresponding to the target level; and a processing module 530, configured to perform enhancement processing on the video using the obtained enhancement processing mode, the enhancement processing improving the image quality of the video frames by adjusting image parameters of the video.
Optionally, the apparatus may further include a setting module, configured to display a plurality of image processing algorithms, receive a selection of any one or more of them, and take the selected algorithms as the image processing algorithms included in the enhancement processing mode corresponding to the level in the algorithm setting request.
Optionally, the processing mode obtaining module 520 may include: an algorithm display unit, configured to display the multiple image processing algorithms corresponding to the target level; a selection receiving unit, configured to receive a selection of any one or more of them; and a processing mode determining unit, configured to take the selected algorithms as the image processing algorithms included in the mode corresponding to the target level.
Optionally, the electronic device may store a correspondence table between level parameters and enhancement processing modes, different level parameters corresponding to different levels. The processing mode obtaining module 520 may include: a parameter lookup unit, configured to look up the level parameter corresponding to the target level in the correspondence table; and a mode determining unit, configured to determine the enhancement processing mode corresponding to the found level parameter as the mode corresponding to the target level.
Optionally, different video resolutions correspond to different correspondence tables, and in different tables the same level parameter corresponds to different enhancement processing modes. The processing mode obtaining module 520 may further include: a resolution obtaining unit, configured to obtain the resolution of the video; and a table determining unit, configured to determine the correspondence table corresponding to the resolution and take the determined table as the table in which the level parameter is looked up.
Optionally, in the embodiments of the present application, different levels of video enhancement correspond to enhancement processing modes including different image processing algorithms, such that the higher the level of video enhancement, the better the enhanced image quality produced by the corresponding mode.
Here, the first-level video enhancement may include the algorithms for increasing brightness, adjusting saturation, adjusting contrast, adjusting detail, deblocking, edge anti-aliasing, and debanding; the second-level video enhancement includes the brightness, saturation, contrast, and detail algorithms; and the third-level video enhancement includes the deblocking, edge anti-aliasing, and debanding algorithms, where the third level, second level, and first level ascend in that order.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the method embodiments above may refer to one another; for the specific working processes of the apparatus and modules described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in the present application, the coupling between modules may be electrical, mechanical, or of other forms.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to FIG. 13, a structural block diagram of an electronic device 600 provided by an embodiment of the present application is shown. The electronic device 600 may be a smartphone, a tablet computer, or another electronic device capable of video processing. The electronic device includes one or more processors 610 (only one is shown), a memory 620, and one or more programs, where the one or more programs are stored in the memory 620 and configured to be executed by the one or more processors 610, the one or more programs being configured to perform the methods described in the foregoing embodiments.
The processor 610 may include one or more processing cores. The processor 610 connects the parts of the electronic device 600 using various interfaces and lines, and performs the functions of the electronic device 600 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 620 and invoking data stored in the memory 620. Optionally, the processor 610 may be implemented in at least one of the hardware forms of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 610 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, user interface, applications, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 610 and may instead be implemented separately by a communication chip.
The memory 620 may include random access memory (RAM) or read-only memory (ROM). The memory 620 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 620 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function, instructions for implementing the method embodiments above, and so on. The data storage area may also store data created by the electronic device in use (such as a phone book, audio and video data, and chat records).
In addition, the electronic device 600 may further include a display screen for displaying the video to be displayed.
Referring to FIG. 14, a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application is shown. The computer-readable storage medium 700 stores program code that can be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 700 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. Optionally, the computer-readable storage medium 700 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 700 has storage space for program code 710 that performs any of the method steps above. The program code can be read from, or written into, one or more computer program products. The program code 710 may, for example, be compressed in an appropriate form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (20)

  1. A video processing method, wherein the method comprises:
    receiving a target level selected from a plurality of different levels of video enhancement, wherein different levels of video enhancement correspond to enhancement processing modes that produce different enhanced image quality for the video;
    obtaining the enhancement processing mode corresponding to the target level; and
    performing enhancement processing on the video using the obtained enhancement processing mode, the enhancement processing improving the image quality of the video frames by adjusting image parameters of the video.
  2. The method according to claim 1, wherein before the receiving a target level selected from a plurality of different levels of video enhancement, the method further comprises:
    receiving an algorithm setting request, the algorithm setting request comprising the level for which algorithms are to be set;
    displaying a plurality of image processing algorithms;
    receiving a selection of any one or more of the plurality of image processing algorithms; and
    taking the selected image processing algorithms as the image processing algorithms comprised in the enhancement processing mode corresponding to the level in the algorithm setting request.
  3. The method according to claim 2, wherein the method further comprises:
    displaying the processing effects corresponding to the image processing algorithms.
  4. The method according to any one of claims 1-3, wherein the receiving a target level selected from a plurality of different levels of video enhancement comprises:
    when the video is started, determining the level set by default by the application playing the video as the target level.
  5. The method according to any one of claims 1-3, wherein the receiving a target level selected from a plurality of different levels of video enhancement comprises:
    receiving a selection instruction from a user; and
    taking the video enhancement level corresponding to the selection instruction as the target level.
  6. The method according to any one of claims 1-3, wherein a target battery level is preset, and the receiving a target level selected from a plurality of different levels of video enhancement comprises:
    obtaining the battery level of the electronic device;
    determining whether the battery level of the electronic device is lower than the target battery level; and
    if it is lower, taking the level with the lowest power consumption during enhancement processing as the target level.
  7. The method according to any one of claims 1-6, wherein the obtaining the enhancement processing mode corresponding to the target level comprises:
    displaying a plurality of image processing algorithms corresponding to the target level;
    receiving a selection of any one or more of the plurality of image processing algorithms; and
    taking the selected image processing algorithms as the image processing algorithms comprised in the enhancement processing mode corresponding to the target level.
  8. The method according to claim 7, wherein the method further comprises:
    displaying the processing effects corresponding to the plurality of image processing algorithms corresponding to the target level.
  9. The method according to any one of claims 1-6, wherein a correspondence table between level parameters and enhancement processing modes is stored, different level parameters corresponding to different levels, and the obtaining the enhancement processing mode corresponding to the target level comprises:
    looking up, in the correspondence table, the level parameter corresponding to the target level; and
    determining the enhancement processing mode corresponding to the found level parameter as the enhancement processing mode corresponding to the target level.
  10. The method according to claim 9, wherein different resolutions of the video correspond to different correspondence tables, the same level parameter corresponding to different enhancement processing modes in different correspondence tables, and before the looking up, in the correspondence table, the level parameter corresponding to the target level, the method further comprises:
    obtaining the resolution of the video; and
    determining the correspondence table corresponding to the resolution, and taking the determined correspondence table as the correspondence table in which the level parameter is looked up.
  11. The method according to claim 10, wherein the obtaining the resolution of the video comprises:
    decoding the video to obtain decoded video data, the video data comprising the resolution of the video; and
    extracting the resolution of the video from the video data.
  12. The method according to any one of claims 1-11, wherein different levels of video enhancement corresponding to enhancement processing modes that produce different enhanced image quality comprises:
    different levels of video enhancement correspond to enhancement processing modes comprising different image processing algorithms, such that the higher the level of video enhancement, the better the enhanced image quality produced by the corresponding enhancement processing mode.
  13. The method according to claim 12, wherein the different levels of video enhancement corresponding to enhancement processing modes comprising different image processing algorithms comprises:
    the video enhancement of a first level comprises an image processing algorithm for increasing brightness, an image processing algorithm for adjusting saturation, an image processing algorithm for adjusting contrast, an image processing algorithm for adjusting detail, a deblocking image processing algorithm, an edge anti-aliasing image processing algorithm, and a debanding image processing algorithm;
    the video enhancement of a second level comprises the image processing algorithms for increasing brightness, adjusting saturation, adjusting contrast, and adjusting detail; and
    the video enhancement of a third level comprises the deblocking, edge anti-aliasing, and debanding image processing algorithms, wherein the third level, the second level, and the first level ascend in that order.
  14. The method according to any one of claims 1-13, wherein a mode parameter corresponding to each enhancement processing mode is preset, the mode parameter corresponding to one or more algorithm identity parameters, and the performing enhancement processing on the video using the obtained enhancement processing mode comprises:
    obtaining the mode parameter corresponding to the enhancement processing mode;
    obtaining the algorithm identity parameters corresponding to the mode parameter, wherein each algorithm identity parameter denotes one image processing algorithm; and
    performing enhancement processing on the video using the image processing algorithms denoted by the obtained algorithm identity parameters.
  15. A video processing apparatus, wherein the apparatus comprises:
    a level receiving module, configured to receive a target level selected from a plurality of different levels of video enhancement, wherein different levels of video enhancement correspond to enhancement processing modes that produce different enhanced image quality for the video;
    a processing mode obtaining module, configured to obtain the enhancement processing mode corresponding to the target level; and
    a processing module, configured to perform enhancement processing on the video using the obtained enhancement processing mode, the enhancement processing improving the image quality of the video frames by adjusting image parameters of the video.
  16. The apparatus according to claim 15, wherein the apparatus further comprises:
    a setting module, configured to receive an algorithm setting request, the algorithm setting request comprising the level for which algorithms are to be set; display a plurality of image processing algorithms; receive a selection of any one or more of the plurality of image processing algorithms; and take the selected image processing algorithms as the image processing algorithms comprised in the enhancement processing mode corresponding to the level in the algorithm setting request.
  17. The apparatus according to claim 15, wherein the processing mode obtaining module further comprises:
    a display unit, configured to display a plurality of image processing algorithms corresponding to the target level;
    a receiving unit, configured to receive a selection of any one or more of the plurality of image processing algorithms; and
    an obtaining unit, configured to take the selected image processing algorithms as the image processing algorithms comprised in the enhancement processing mode corresponding to the target level.
  18. The apparatus according to claim 15, wherein a correspondence table between level parameters and enhancement processing modes is stored, different level parameters corresponding to different levels, and the processing mode obtaining module further comprises:
    a lookup unit, configured to look up, in the correspondence table, the level parameter corresponding to the target level; and
    a determining unit, configured to determine the enhancement processing mode corresponding to the found level parameter as the enhancement processing mode corresponding to the target level.
  19. An electronic device, comprising:
    one or more processors;
    a memory; and
    one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method according to any one of claims 1-14.
  20. A computer-readable storage medium, wherein the computer-readable storage medium stores program code that can be invoked by a processor to perform the method according to any one of claims 1-14.
PCT/CN2019/109855 2018-11-27 2019-10-08 Video processing method and apparatus, electronic device, and storage medium WO2020108091A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811427973.8 2018-11-27
CN201811427973.8A CN109640167B (zh) 2018-11-27 2018-11-27 Video processing method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020108091A1 true WO2020108091A1 (zh) 2020-06-04

Family

ID=66069735

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/109855 WO2020108091A1 (zh) 2018-11-27 2019-10-08 Video processing method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN109640167B (zh)
WO (1) WO2020108091A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4216539A3 (en) * 2022-01-19 2023-11-15 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109640167B (zh) * 2018-11-27 2021-03-02 Oppo广东移动通信有限公司 Video processing method and apparatus, electronic device, and storage medium
CN112464691A (zh) * 2019-09-06 2021-03-09 北京字节跳动网络技术有限公司 Image processing method and apparatus
CN110662115B (zh) * 2019-09-30 2022-04-22 北京达佳互联信息技术有限公司 Video processing method and apparatus, electronic device, and storage medium
CN111954285A (zh) * 2020-08-05 2020-11-17 Oppo广东移动通信有限公司 Power saving control method and apparatus, terminal, and readable storage medium
CN113507643B (zh) * 2021-07-09 2023-07-07 Oppo广东移动通信有限公司 Video processing method and apparatus, terminal, and storage medium
CN114501139A (zh) * 2022-03-31 2022-05-13 深圳思谋信息科技有限公司 Video processing method and apparatus, computer device, and storage medium
CN114827723B (zh) * 2022-04-25 2024-04-09 阿里巴巴(中国)有限公司 Video processing method and apparatus, electronic device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080122985A1 (en) * 2006-11-29 2008-05-29 Ipera Technology, Inc. System and method for processing videos and images to a determined quality level
US20140169484A1 (en) * 2012-09-25 2014-06-19 Samsung Electronics Co., Ltd. Video decoding apparatus and method for enhancing video quality
CN104202604A (zh) * 2014-08-14 2014-12-10 腾讯科技(深圳)有限公司 Method and apparatus for video enhancement
CN105592322A (zh) * 2014-09-19 2016-05-18 青岛海尔电子有限公司 Media data optimization method and apparatus
CN105874782A (zh) * 2014-01-03 2016-08-17 汤姆逊许可公司 Method, apparatus, and computer program product for optimizing upscaling to ultra-high-definition resolution when rendering video content
CN108391139A (zh) * 2018-01-15 2018-08-10 上海掌门科技有限公司 Video enhancement method, medium, and device for live video streaming
CN109640167A (zh) * 2018-11-27 2019-04-16 Oppo广东移动通信有限公司 Video processing method and apparatus, electronic device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100259683A1 (en) * 2009-04-08 2010-10-14 Nokia Corporation Method, Apparatus, and Computer Program Product for Vector Video Retargeting
CN102724467B (zh) * 2012-05-18 2016-06-29 中兴通讯股份有限公司 Method and terminal device for improving video output definition
CN107277301B (zh) * 2016-04-06 2019-11-29 杭州海康威视数字技术股份有限公司 Image analysis method for surveillance video and system thereof
CN107659828B (zh) * 2017-10-30 2020-01-14 Oppo广东移动通信有限公司 Video quality adjustment method and apparatus, terminal device, and storage medium
CN108810649B (zh) * 2018-07-12 2021-12-21 深圳创维-Rgb电子有限公司 Image quality adjustment method, smart television, and storage medium

Also Published As

Publication number Publication date
CN109640167A (zh) 2019-04-16
CN109640167B (zh) 2021-03-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (ref document number: 19889058; country of ref document: EP; kind code of ref document: A1)
NENP Non-entry into the national phase (ref country code: DE)
122 Ep: pct application non-entry in european phase (ref document number: 19889058; country of ref document: EP; kind code of ref document: A1)