CN109640167B - Video processing method and device, electronic equipment and storage medium - Google Patents

Video processing method and device, electronic equipment and storage medium

Info

Publication number
CN109640167B
Authority
CN
China
Prior art keywords
video
enhancement
level
image processing
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811427973.8A
Other languages
Chinese (zh)
Other versions
CN109640167A (en)
Inventor
胡小朋
杨海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811427973.8A
Publication of CN109640167A
Priority to PCT/CN2019/109855 (WO2020108091A1)
Application granted
Publication of CN109640167B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44222Analytics of user selections, e.g. selection of programs or purchase activity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4854End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application discloses a video processing method and device, an electronic device and a storage medium, and relates to the technical field of electronic devices. The method includes: receiving a target level selected from a plurality of different levels corresponding to video enhancement, where different video enhancement levels correspond to different enhancement processing modes that differ in the image-quality enhancement they apply to the video; acquiring the enhancement processing mode corresponding to the target level; and performing enhancement processing on the video with the acquired enhancement processing mode. In this scheme, the enhancement processing modes of different levels differ and have different enhancement effects on the image quality of the video, so differentiated processing of the video is realized, a good processing effect is obtained, and super-definition visual effects of different levels are effectively achieved.

Description

Video processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of science and technology, electronic devices have become one of the most commonly used electronic products in people's daily life. Users often watch videos or play games on electronic devices, but at present the way an electronic device processes video data is fixed, so the processing effect is not ideal and the user experience is poor.
Disclosure of Invention
In view of the foregoing, the present application provides a video processing method, an apparatus, an electronic device and a storage medium to improve the foregoing problems.
In a first aspect, an embodiment of the present application provides a video processing method, the method including: receiving a target level selected from a plurality of different levels corresponding to video enhancement, where different video enhancement levels correspond to different enhancement processing modes that differ in the image-quality enhancement they apply to the video; acquiring the enhancement processing mode corresponding to the target level; and performing enhancement processing on the video with the acquired enhancement processing mode, where the enhancement processing improves the image quality of the video frames of the video by adjusting image parameters of the video.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including: a level receiving module, configured to receive a target level selected from a plurality of different levels corresponding to video enhancement, where different video enhancement levels correspond to different enhancement processing modes that differ in the image-quality enhancement they apply to the video; a processing mode acquisition module, configured to acquire the enhancement processing mode corresponding to the target level; and a processing module, configured to perform enhancement processing on the video with the acquired enhancement processing mode, where the enhancement processing improves the image quality of the video frames of the video by adjusting image parameters of the video.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a memory; one or more programs. Wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the methods described above.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, and the program code can be called by a processor to execute the above method.
According to the video processing method and device, the electronic device and the storage medium, the corresponding processing mode is determined according to the selected level, so the video enhancement processing modes of different levels differ and have different enhancement effects on the image quality of the video; differentiated processing of the video is thus realized, a good processing effect is obtained, and super-definition visual effects of different levels are effectively achieved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 shows a schematic flow chart of video playing provided by an embodiment of the present application.
Fig. 2 shows a flowchart of a video processing method according to an embodiment of the present application.
Fig. 3 shows a display interface for level selection provided by an embodiment of the present application.
Fig. 4 shows another display interface for level selection provided by an embodiment of the present application.
Fig. 5 shows yet another display interface for level selection provided by an embodiment of the present application.
Fig. 6 shows a flowchart of a video processing method according to another embodiment of the present application.
Fig. 7 shows a corresponding relationship table provided in the embodiment of the present application.
Fig. 8 shows another correspondence table provided in the embodiment of the present application.
Fig. 9 shows another correspondence table provided in the embodiment of the present application.
Fig. 10 shows a flowchart of a video processing method according to another embodiment of the present application.
Fig. 11 shows a flowchart of a video processing method according to still another embodiment of the present application.
Fig. 12 is a functional block diagram of a video processing apparatus according to an embodiment of the present application.
Fig. 13 shows a block diagram of an electronic device according to an embodiment of the present application.
Fig. 14 is a storage unit for storing or carrying program codes for implementing a video processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 shows a video playing process. Specifically, after the operating system obtains the data to be played, the next task is to parse the audio and video data. A typical video file is composed of a video stream and an audio stream, and different video formats package audio and video differently. The process of combining audio and video streams into a file is called muxing, while the process of separating audio and video streams from a media file is called demuxing. Playing a video file requires separating the audio stream and the video stream from the file stream and decoding them separately; the decoded video frames are rendered directly, the corresponding audio is sent to the buffer of the audio output device for playback, and the timestamps of video rendering and audio playback must be kept synchronized. Each video frame is one image of the video.
Specifically, video decoding may be hard decoding or soft decoding. In hardware decoding, part of the video processing that would otherwise be handled entirely by the Central Processing Unit (CPU) is handed over to the Graphics Processing Unit (GPU). Because the GPU's parallel computing capability is much higher than that of the CPU, the load on the CPU can be greatly reduced, and once the CPU occupancy drops, other programs can run at the same time. Of course, on a sufficiently powerful processor, such as an Intel Core i5-2320 or a quad-core AMD processor, both hard decoding and soft decoding are possible.
Specifically, as shown in fig. 1, the multimedia framework (Media Framework) obtains the video file to be played by the client through an API interface with the client and hands it to the video decoder (Video Decode). The Media Framework is the multimedia framework of the Android system, in which MediaPlayer, MediaPlayerService and StagefrightPlayer form the basic framework of Android multimedia. The multimedia framework adopts a client/server (C/S) structure: MediaPlayer acts as the client, while MediaPlayerService and StagefrightPlayer act as the server, which bears the responsibility of playing the multimedia file and completes and responds to the client's requests through StagefrightPlayer. Video Decode is a super decoder that integrates the most commonly used audio and video decoding and playback functions and is used to decode the video data.
In soft decoding, the CPU decodes the video through software. Hard decoding means that the video decoding task is completed independently by a dedicated daughter-card device without relying on the CPU.
Whether hard decoding or soft decoding is used, the decoded video data is sent to the layer-compositing module (SurfaceFlinger); as shown in fig. 1, hard-decoded video data is sent to SurfaceFlinger through the video driver. SurfaceFlinger renders and composites the decoded video data and then displays it on the display screen. SurfaceFlinger is an independent service that takes the Surfaces of all windows as input, computes the position of each Surface in the final composited image according to parameters such as Z-order, transparency, size and position, and then hands the result to HWComposer or OpenGL to generate the final display buffer, which is displayed on the specific display device.
As shown in fig. 1, in soft decoding the CPU decodes the video data and hands it to SurfaceFlinger for rendering and compositing, while in hard decoding the data decoded by the GPU is handed to SurfaceFlinger for rendering and compositing. SurfaceFlinger calls the GPU to render and composite the image, which is then displayed on the display screen.
To obtain a good display effect, enhancement processing can be performed on the video: the enhancement is applied after decoding, rendering and compositing are then performed on the enhanced frames, and the result is displayed on the screen. The enhancement processing improves the image quality of the video frames by adjusting image parameters of the frames in the video, improving the display effect of the video and giving a better viewing experience. The image quality of a video frame can involve parameters such as definition, sharpness, saturation, detail, lens distortion, color, resolution, color gamut and purity; by adjusting the various parameters related to image quality, the image is made to better suit the viewing preferences of the human eye, and the user's viewing experience improves. For example, the higher the definition of the video, the lower the noise, the clearer the details, and the higher the saturation, the better the image quality of the video and the better the viewing experience. Adjusting different combinations of the image-quality parameters corresponds to different enhancement processing modes of the video, and each enhancement processing mode includes corresponding image processing algorithms that perform image processing on the video frames to adjust their image parameters and improve their image quality.
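As an illustrative, non-authoritative sketch of where such an enhancement stage could sit in a playback pipeline of this kind, the Kotlin fragment below places it between decoding and rendering; the Decoder, Enhancer, Renderer and VideoFrame names are hypothetical and are not part of the patented implementation.

// Hypothetical types; a minimal sketch of placing enhancement between decoding and rendering.
data class VideoFrame(val pixels: IntArray, val width: Int, val height: Int)

interface Decoder { fun nextFrame(): VideoFrame? }                 // hard or soft decoder output
interface Enhancer { fun enhance(frame: VideoFrame): VideoFrame }  // adjusts image parameters
interface Renderer { fun composeAndDisplay(frame: VideoFrame) }    // e.g. hand off for compositing

fun play(decoder: Decoder, enhancer: Enhancer?, renderer: Renderer) {
    while (true) {
        val frame = decoder.nextFrame() ?: break      // decoded frame from the demuxed video stream
        val out = enhancer?.enhance(frame) ?: frame   // optional enhancement stage after decoding
        renderer.composeAndDisplay(out)               // render, composite and display
    }
}

Keeping the enhancement stage optional in this sketch mirrors the on/off and level choices discussed later: the same loop runs whether or not an enhancement mode is selected.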
However, the inventors found through research that display enhancement of a video usually offers only two options, yes and no: either the video is enhanced or it is not, and there is no differentiated enhancement configured according to the actual needs of the user. Without a choice among different enhancement effects, the user experience is poor.
The following describes in detail a video processing method, an apparatus, an electronic device, and a storage medium provided in embodiments of the present application with specific embodiments.
Referring to fig. 2, a video processing method according to an embodiment of the present application is shown. The video processing method acquires the enhancement processing mode corresponding to a target level when the target level selected from a plurality of different levels is received. Because the enhancement processing modes corresponding to different levels differ, the user can select the mode that matches actual needs, differentiated video enhancement processing is realized, and the user experience is improved. In a specific embodiment, the video processing method is applied to the video processing apparatus 500 shown in fig. 12 and to the electronic device 600 (fig. 13) equipped with the video processing apparatus 500. The following describes the specific flow of this embodiment taking an electronic device as an example; it should be understood that the electronic device in this embodiment may be any device capable of video processing, such as a smartphone, a tablet computer, a desktop computer, a wearable electronic device, an in-vehicle device, or a gateway, and is not specifically limited here. Specifically, the method includes the following steps:
step S110: a target level selected from a plurality of different levels corresponding to video enhancement is received. The video enhancement levels are different, and the corresponding enhancement processing modes are different for the enhancement image quality of the video.
The electronic device can display the video after decoding, enhancing and rendering the acquired video data. The electronic device may obtain the video data from the server, may obtain the video data locally, or may obtain the video data from other electronic devices.
Specifically, when the video data is obtained by the electronic device from the server, then the video data may be downloaded by the electronic device from the server, or obtained online by the electronic device from the server. For example, the video data may be downloaded by the electronic device through installed video playing software, or obtained online by the video playing software. The server may be a cloud server. When the video data is acquired from the local of the electronic device, the video data may be previously downloaded by the electronic device and stored in the local memory. When the video data is acquired by the electronic device from another electronic device, the video data may be transmitted to the electronic device by the other electronic device through a wireless communication protocol, for example, a WLAN protocol, a bluetooth protocol, a ZigBee protocol, a WiFi protocol, or the like, or may be transmitted to the electronic device by the other electronic device through a data network, for example, a 2G network, a 3G network, or a 4G network, and the like, which is not limited herein.
The electronic device acquires the video data and plays it on the display after decoding, rendering, compositing and other operations. If a control instruction related to video enhancement is received, the video data is enhanced and the enhanced video is played.
Video enhancement can be configured with a plurality of different levels, and the enhancement processing modes at different levels differ; correspondingly, they differ in the image-quality enhancement they apply to the video. The embodiments of the present application do not limit the number of levels: only two levels may be set, or three or more.
The user can select one level from the plurality of different levels to enhance the video; that is, by selecting a level the user chooses to have the video enhanced with the enhancement processing mode corresponding to that level, so as to achieve the image-quality enhancement effect of that level. In the embodiments of the present application, the level selected for enhancement processing is defined as the target level.
As an embodiment, the target level selected from the plurality of different levels corresponding to video enhancement may be determined by the electronic device when the video is opened. For example, if the default setting of the application playing the video is to turn on a certain level of video enhancement, that level is taken as the selected target level when the video is opened. As another example, if the application playing the video had a certain level turned on when it was last closed, then when a video in the application is opened again, video enhancement at that level is considered to have been received and that level is taken as the target level. Similarly, if the video itself had a certain level of video enhancement turned on when it was last closed, that level is taken as the selected target level when the video is opened again.
As an embodiment, the target level may also be a user selection received during video playback. Take the case where video enhancement has two different levels, a high level and a low level. As shown in fig. 3, a high-level video enhancement switch and a low-level video enhancement switch are provided, corresponding to high-level and low-level video enhancement respectively. With the video enhancement switches shown in fig. 3, when the low-level enhancement switch is turned on, as shown in fig. 4, the target level selected from the plurality of different levels corresponding to video enhancement is received as the low level. As shown in fig. 5, a video enhancement switch that can be toggled between the high level and the low level may instead be provided; if the user toggles the switch to the low-level enhancement shown in fig. 5, the target level selected from the plurality of different levels corresponding to video enhancement is received as the low level. During video playback, the video enhancement switch may be hidden. When a tap on the video is received, the video enhancement switch is displayed and made controllable; when the video receives no touch operation from the user for a period of time, the video enhancement switch is hidden again.
In the embodiments of the present application, the target level may also be selected in other ways. For example, the plurality of different levels may include a low-power-consumption level whose power consumption during enhancement processing is lower than that of the other levels. As an embodiment, when enhancement processing is to be performed on the video, it may be determined whether the battery level of the electronic device is below a target level of charge, since enhancement processing involves relatively high power consumption. If the battery level of the electronic device is below the target charge, the low-power-consumption level is taken as the selected target level. The embodiments of the present application do not limit the specific value of the target charge; it may be thirty percent, twenty percent, or the like of the total battery capacity of the electronic device. In addition, the target charge may be set by the user and stored in the electronic device.
Whether this behavior is enabled on the electronic device may be determined by user settings. Specifically, the user can set whether to switch to the low-power-consumption enhancement processing when the battery is low. If the user has enabled low-power enhancement at low battery, then whenever the video is to be enhanced, the electronic device can check whether its battery level is below the target charge; when it is, the low-power-consumption level is used as the selected target level.
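A minimal sketch of this low-battery fallback follows, assuming a hypothetical Level enum and a 20 percent default threshold; both are assumptions for illustration only, since the patent leaves the concrete values open.

enum class Level { FIRST, SECOND, THIRD, LOW_POWER }   // hypothetical level identifiers

fun chooseDefaultLevel(
    batteryPercent: Int,              // current charge of the electronic device
    lowPowerAtLowBattery: Boolean,    // the user setting described above
    normallySelected: Level,
    targetChargePercent: Int = 20     // assumed target charge; the embodiment does not fix a value
): Level {
    // Fall back to the low-power-consumption level only when the user enabled the
    // behavior and the battery is below the target charge.
    return if (lowPowerAtLowBattery && batteryPercent < targetChargePercent) Level.LOW_POWER
           else normallySelected
}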
Step S120: and acquiring an enhancement processing mode corresponding to the target level.
Different levels of video enhancement correspond to different enhancement processing modes. In the case of determining the target level, the enhancement processing manner corresponding to the target level may be acquired according to the correspondence between the level and the enhancement processing manner.
Step S130: and performing enhancement processing on the video through the acquired enhancement processing mode, wherein the enhancement processing improves the image quality of the video frame of the video by adjusting the image parameters of the video.
Each enhancement processing mode includes image processing algorithms that achieve its corresponding enhancement effect. Once the enhancement processing mode corresponding to the target level is determined, the video is enhanced by the image processing algorithms included in that mode: the algorithms adjust the image parameters of the video frames, thereby adjusting the parameters related to image quality and improving the image quality of the video.
Because a video is composed of successive video frames and different frames are different pictures, the various enhancement processing modes that enhance the video actually enhance the image quality of its frames. Specifically, enhancing the video with an enhancement processing mode means performing image processing on the video data of each frame with the image processing algorithms included in that mode.
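A sketch of this frame-by-frame application follows, assuming an enhancement processing mode is represented as an ordered list of per-frame image processing functions; the names are illustrative only.

data class Frame(val pixels: IntArray, val width: Int, val height: Int)  // one decoded video frame
typealias ImageAlgorithm = (Frame) -> Frame   // e.g. deblocking, sharpening, saturation adjustment

class EnhancementMode(val algorithms: List<ImageAlgorithm>)

// Apply every algorithm of the selected mode to the data of each frame in turn.
fun enhanceVideo(frames: Sequence<Frame>, mode: EnhancementMode): Sequence<Frame> =
    frames.map { frame -> mode.algorithms.fold(frame) { f, algorithm -> algorithm(f) } }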
In the embodiments of the present application, enhancement processing of the video can be configured with a plurality of different levels. When a certain level is selected as the target level, the video is enhanced with the enhancement processing mode corresponding to the target level. In this way, multiple enhancement processing modes are available to choose from, the level of video enhancement is selected according to need, differentiated video enhancement is realized, and the user experience of video enhancement is improved.
Another embodiment of the present application provides a video processing method, which selects an enhancement processing mode corresponding to a target level according to a correspondence between the level and the enhancement processing mode. Specifically, referring to fig. 6, the method includes:
step S210: a target level selected from a plurality of different levels corresponding to video enhancement is received. The video enhancement levels are different, and the corresponding enhancement processing modes are different for the enhancement image quality of the video.
This step can be referred to as step S110, and is not described herein again.
Step S220: searching for the level parameter corresponding to the target level in a correspondence table of the different levels and enhancement processing modes.
Step S230: determining the enhancement processing mode corresponding to the found level parameter as the enhancement processing mode corresponding to the target level.
In the embodiment of the application, the video enhancement of different levels has different processing effects and different video image quality. Specifically, each enhancement processing mode includes one or more image processing algorithms to achieve a corresponding processing effect.
Optionally, the different levels of video enhancement may be ordered from low to high, with higher levels giving better enhancement effects, so that by selecting a higher or lower level the user chooses between a stronger enhancement effect and an ordinary enhancement processing mode. Specifically, the enhancement processing modes corresponding to different video enhancement levels may include different image processing algorithms, such that the higher the video enhancement level, the better the image-quality enhancement of the corresponding mode. Better image quality means a better viewing experience and includes higher definition, lower noise, better preservation of detail, higher saturation, and so on.
In the embodiments of the present application, the higher the video enhancement level, the more kinds of image processing algorithms for different image processing purposes may be included. For example, the video enhancement levels may include a third level, a second level and a first level, in order of increasing level, with the number of kinds of image processing algorithms increasing from the third level to the first level. Commonly used image processing algorithms for enhancing video quality include, for example, an algorithm for improving brightness, an algorithm for adjusting saturation, an algorithm for adjusting contrast, an algorithm for adjusting detail, an algorithm for removing blocking artifacts, an algorithm for removing edge aliasing, and an algorithm for removing banding. The first level of video enhancement may include the algorithms for improving brightness, adjusting saturation, adjusting contrast, adjusting detail, removing blocking artifacts, removing edge aliasing and removing banding. The second level may include the algorithms for improving brightness, adjusting saturation, adjusting contrast and adjusting detail. The third level may include the deblocking, edge-antialiasing and debanding algorithms.
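The three-level split in this example could be represented as tiered algorithm sets, for instance as in the sketch below; the enum names are illustrative and the exact grouping is only the example just given.

enum class Algo {   // functional identities of the image processing algorithms named above
    BRIGHTNESS, SATURATION, CONTRAST, DETAIL, DEBLOCK, DEALIAS, DEBAND
}

// Higher levels bundle more kinds of algorithms, matching the example in the text.
val levelAlgorithms: Map<Int, Set<Algo>> = mapOf(
    1 to setOf(Algo.BRIGHTNESS, Algo.SATURATION, Algo.CONTRAST, Algo.DETAIL,
               Algo.DEBLOCK, Algo.DEALIAS, Algo.DEBAND),                      // first (highest) level
    2 to setOf(Algo.BRIGHTNESS, Algo.SATURATION, Algo.CONTRAST, Algo.DETAIL), // second level
    3 to setOf(Algo.DEBLOCK, Algo.DEALIAS, Algo.DEBAND)                       // third (lowest) level
)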
In addition, different levels of video enhancement may achieve better image quality at higher levels because the parameter settings of the image processing algorithms differ between the corresponding enhancement processing modes. Alternatively, the higher the enhancement level, the higher the precision of the included image processing algorithms.
In the embodiment of the present application, the enhancement processing manner corresponding to the target level may be obtained through a correspondence table between the level and the enhancement processing manner.
Specifically, each level may correspond to a level parameter; that is, different level parameters correspond to different levels, and each level parameter represents a video enhancement level. The electronic device may store a correspondence table between level parameters and enhancement processing modes. The correspondence table may be downloaded together with the video application, downloaded with a video-enhancement plug-in, downloaded or updated when the system of the electronic device is updated, pushed to the electronic device when the server has a new correspondence table, requested from the server by the electronic device at regular intervals, or obtained from the server when the electronic device needs to use the table, for example when the enhancement processing mode corresponding to the target level needs to be looked up. How and when the correspondence table is obtained is not limited in the embodiments of the present application.
In the correspondence table, each level parameter corresponds to an enhancement processing mode, namely the mode used for enhancement processing at the level represented by that parameter. The enhancement processing modes in the correspondence table may be represented by mode parameters.
When a level is determined to be the target level, the level parameter corresponding to the target level is obtained accordingly. That level parameter is then looked up in the correspondence table, and the enhancement processing mode corresponding to the found level parameter is determined to be the enhancement processing mode corresponding to the target level.
For example, suppose the video enhancement levels include a first level, a second level and a third level, and, as shown in fig. 7, the three level parameters A, B and C correspond to enhancement processing modes a, b and c respectively. The three level parameters represent the first, second and third levels respectively: level parameter A represents the first level, level parameter B the second level, and level parameter C the third level, while a, b and c represent different enhancement processing modes. If the received target level is the first level, the level parameter A can be looked up in the correspondence table. When level parameter A is found, the enhancement processing mode corresponding to A in the correspondence table is found to be a, so the enhancement processing mode corresponding to the selected target level is obtained as a.
After the enhancement processing mode corresponding to the target level is determined, the image processing algorithms corresponding to that mode can be determined. Optionally, each enhancement processing mode may correspond to a mode parameter, the mode parameter corresponds to one or more algorithm identity parameters, and each algorithm identity parameter represents one image processing algorithm. Once the enhancement processing mode is determined, its image processing algorithms can be invoked through the algorithm parameters corresponding to the mode parameter. Optionally, the image processing algorithms corresponding to each enhancement processing mode may instead be packaged as a program module, and after the enhancement processing mode is determined, the program module corresponding to that mode is called.
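A sketch of this lookup-and-dispatch follows, assuming the fig. 7 style correspondence (level parameters A/B/C mapping to mode parameters a/b/c) and hypothetical algorithm identifiers.

// Correspondence table: level parameter -> mode parameter (in the style of fig. 7).
val levelToMode = mapOf("A" to "a", "B" to "b", "C" to "c")

// Each mode parameter -> identifiers of the image processing algorithms it bundles (assumed sets).
val modeToAlgorithms = mapOf(
    "a" to listOf("brightness", "saturation", "contrast", "detail", "deblock", "dealias", "deband"),
    "b" to listOf("brightness", "saturation", "contrast", "detail"),
    "c" to listOf("deblock", "dealias", "deband")
)

fun algorithmsForTargetLevel(levelParameter: String): List<String> {
    val modeParameter = levelToMode[levelParameter]
        ?: error("unknown level parameter: $levelParameter")   // level not in the table
    return modeToAlgorithms[modeParameter] ?: emptyList()
}

// Usage: algorithmsForTargetLevel("A") yields the algorithms of enhancement processing mode a.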
Videos of different resolutions have different characteristics. For example, in standard-definition and 'smooth' (low-bitrate) video, the resolution is low, the noise is severe, the images in the video frames are blurred, the edges are unclear, and the images contain edge noise. High-resolution video such as high-definition video has low noise and clear images. Because the image characteristics of high-resolution and low-resolution video frames differ, if videos of different resolutions are enhanced at the same level with the same enhancement processing mode, the image quality of the processed videos may be completely different and the enhancement effect may not be ideal. For example, sharpening can make a high-definition image clearer, but sharpening and denoising work against each other, and in a low-resolution video sharpening may instead amplify edge noise. Therefore, videos of different resolutions require different image processing algorithms to reach the same or similar enhanced quality, and enhancement processing modes corresponding to the different resolutions are needed.
Optionally, in the embodiments of the present application, different correspondence tables may be set for different video resolutions according to their characteristics. In the different correspondence tables, the enhancement processing modes corresponding to the same level parameter are different; that is, for the same video enhancement level, different enhancement processing modes can be selected when the resolution of the video differs.
Specifically, the electronic device may obtain the resolution of the video from the video data. The resolution of the video is a parameter measuring the amount of data in a video frame and can be expressed in the form W × H, where W is the number of effective pixels in the horizontal direction of the video frame and H is the number of effective pixels in the vertical direction.
One way to obtain the resolution is for the electronic device to decode the video: the decoded video data contains a data portion in which the resolution is stored, which may be a segment of data. The data portion corresponding to the resolution can therefore be extracted from the decoded data of the video, and the resolution of the video obtained from it.
For example, H.264, which is also Part 10 of MPEG-4, is a highly compressed digital video codec standard proposed by the Joint Video Team (JVT) formed jointly by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). For a bitstream encoded with H.264, the stream information includes the resolution of the video and is stored in a special structure called the SPS (Sequence Parameter Set), which is the data portion corresponding to the resolution in the decoded data. According to the H.264 bitstream format, the start code is 0x00 0x00 0x01 or 0x00 0x00 0x00 0x01, so whether a unit is an SPS can be judged by checking whether the lower five bits of the first byte after the start code equal 7 (00111). Once the SPS is obtained, the resolution of the video can be parsed from it. The SPS contains two members, pic_width_in_mbs_minus1 and pic_height_in_map_units_minus1, which represent the width and height of the picture in units of 16 (16×16 macroblocks), each minus 1. The actual width is therefore (pic_width_in_mbs_minus1 + 1) × 16, corresponding to W in the resolution, and the height is (pic_height_in_map_units_minus1 + 1) × 16, corresponding to H in the resolution.
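A sketch of the computation just described follows, assuming the SPS has already been located and its two fields decoded; real SPS parsing additionally involves Exp-Golomb decoding and frame cropping offsets, which are omitted here.

// Width and height from the two SPS members, cropping offsets ignored.
fun resolutionFromSps(picWidthInMbsMinus1: Int, picHeightInMapUnitsMinus1: Int): Pair<Int, Int> {
    val width = (picWidthInMbsMinus1 + 1) * 16          // corresponds to W of the resolution
    val height = (picHeightInMapUnitsMinus1 + 1) * 16   // corresponds to H of the resolution
    return width to height
}

// SPS detection as described above: the lower five bits of the first byte after the
// start code (0x00 0x00 0x01 or 0x00 0x00 0x00 0x01) equal 7 for an SPS NAL unit.
fun isSpsNal(firstByteAfterStartCode: Int): Boolean = (firstByteAfterStartCode and 0x1F) == 7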
After the resolution of the video is obtained, the correspondence table corresponding to that resolution can be determined and used as the table in which the level parameter is looked up. That is, the level parameter corresponding to the target level is looked up in the correspondence table corresponding to the resolution of the video, and the enhancement processing mode corresponding to that level parameter is then obtained as the enhancement processing mode corresponding to the target level.
For example, suppose the video enhancement levels include a first level, a second level and a third level, and the resolutions include a first resolution and a second resolution. As shown in fig. 8, the correspondence table corresponding to the first resolution contains three level parameters A1, B1 and C1 corresponding to enhancement processing modes a1, b1 and c1 respectively. As shown in fig. 9, the correspondence table corresponding to the second resolution contains three level parameters A2, B2 and C2 corresponding to enhancement processing modes a2, b2 and c2 respectively. If the received target level is the first level and the resolution of the video is the first resolution, the enhancement processing mode corresponding to the target level is obtained as a1; if the received target level is the first level and the resolution of the video is the second resolution, the enhancement processing mode corresponding to the target level is obtained as a2.
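A sketch of selecting between the two tables in this example follows; the resolution classes and parameter strings are illustrative only.

enum class ResolutionClass { FIRST, SECOND }   // low-resolution vs high-resolution, per the embodiment

// One correspondence table per resolution class, in the style of fig. 8 and fig. 9.
val tablesByResolution: Map<ResolutionClass, Map<String, String>> = mapOf(
    ResolutionClass.FIRST to mapOf("A1" to "a1", "B1" to "b1", "C1" to "c1"),
    ResolutionClass.SECOND to mapOf("A2" to "a2", "B2" to "b2", "C2" to "c2")
)

fun modeFor(resolution: ResolutionClass, levelParameter: String): String? =
    tablesByResolution[resolution]?.get(levelParameter)

// modeFor(ResolutionClass.FIRST, "A1") == "a1"; modeFor(ResolutionClass.SECOND, "A2") == "a2"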
In the embodiment of the present application, in the different correspondence tables corresponding to different resolutions, an image processing algorithm included in the enhancement processing mode corresponding to each level is not limited.
For example, the first resolution may represent low resolutions and the second resolution high resolutions, where each may cover more than one resolution. The first resolution may be the resolutions corresponding to standard-definition and 'smooth' video, such as 240p, 360p and 480p, where 240p means the resolution is at least 480×240, 360p means at least 640×360, and 480p means at least 720×480. The second resolution may be the resolutions corresponding to high-definition and ultra-high-definition video, for example 720p and 1080p, where 720p means the resolution is at least 1280×720 and 1080p means at least 1920×1080.
Low-resolution video is noisy and displays blurrily. Specifically, low-resolution video has fewer effective pixels; when the picture is enlarged, the spacing between effective pixels grows and the electronic device fills the gaps by interpolation. The interpolated pixels are computed from the effective pixels above, below, to the left and to the right, and are not real video information, so the displayed picture contains more information that does not belong to the video, that is, more noise. In particular, at the edges of the image in a video frame, the computed interpolated pixels produce pixel blocks at the edges, the so-called blocking effect or mosaic, making the edges blurry and insufficiently sharp and forming edge noise. Therefore, in the correspondence table corresponding to the first resolution, the image processing algorithms included in the enhancement processing modes may be biased toward removing blocking artifacts, removing edge aliasing, removing banding, and the like.
In addition, optionally, because edge noise in the frames of low-resolution video is large, the enhancement processing modes in the correspondence table corresponding to the first resolution may further include an image processing algorithm that weakens detail and an image processing algorithm that reduces saturation, so as to reduce the edge noise.
For example, in the correspondence table corresponding to the first resolution, the enhancement processing mode corresponding to each level may include one or more of the deblocking, edge-antialiasing and debanding image processing algorithms. If higher video enhancement levels correspond to better enhanced image quality, then in the correspondence table corresponding to the first resolution, the enhancement processing mode corresponding to the first level may include the algorithms for improving brightness, reducing saturation, improving contrast and weakening detail, as well as the algorithms for removing blocking artifacts, removing edge aliasing and removing banding; the enhancement processing mode corresponding to the second level may include some of the algorithms for improving brightness, reducing saturation, improving contrast and weakening detail, and also the algorithms for removing blocking artifacts, removing edge aliasing and removing banding; and the enhancement processing mode corresponding to the third level may include only the algorithms for removing blocking artifacts, removing edge aliasing and removing banding.
For high-resolution video, there are more effective pixels; when the display screen of the electronic device displays it, more image information of the video is shown, the picture is clear and the noise is low. Therefore, in the correspondence table corresponding to the second resolution, the image processing algorithms included in the enhancement processing modes may be algorithms for improving brightness, improving saturation, improving contrast, enhancing detail, and the like. For example, in the correspondence table corresponding to the second resolution, the enhancement processing mode corresponding to each level may include one or more of the algorithms for improving brightness, improving saturation, improving contrast and enhancing detail. If higher video enhancement levels correspond to better enhanced image quality, then in the correspondence table corresponding to the second resolution, the enhancement processing mode corresponding to the first level may include the algorithms for improving brightness, improving saturation, improving contrast and enhancing detail, as well as the algorithms for removing blocking artifacts, removing edge aliasing and removing banding; the enhancement processing mode corresponding to the second level may include some of the deblocking, edge-antialiasing and debanding algorithms together with the algorithms for improving brightness, improving saturation, improving contrast and enhancing detail; and the enhancement processing mode corresponding to the third level may include only some or all of the algorithms for improving brightness, improving saturation, improving contrast and enhancing detail.
Step S240: and performing enhancement processing on the video through the acquired enhancement processing mode.
And processing the video by using an image processing algorithm included in the enhancement processing mode corresponding to the target level to obtain an enhancement processing effect corresponding to the target level.
In the embodiment of the application, the enhancement processing mode corresponding to the target grade is determined according to the corresponding relation between the grade and the enhancement processing mode. For videos with different resolutions, under the same level, the selected processing modes are different, so that the differential processing of the videos with different resolutions is realized, and a better video processing effect is obtained.
The present application also provides an embodiment in which the enhancement processing modes of the different levels can be set by the user. That is, the image processing algorithms corresponding to the different levels can be set by the user, so that the enhanced video more closely matches the differentiated needs of the user. Specifically, referring to fig. 10, the method provided in this embodiment of the present application includes:
step S310: receiving an algorithm setting request, wherein the algorithm setting request comprises a grade for performing algorithm setting.
For each video enhancement level, a settings entry can be provided for the user to set the enhancement processing mode, specifically the image processing algorithms corresponding to that level.
The user can enter the settings interface through the settings entry to configure the algorithms for each level. Correspondingly, algorithm setting requests submitted by the user for the various levels can be received. For example, the user submits an algorithm setting request for a certain level, and the request carries the level parameter of that level. When an algorithm setting request is received, the level for which the user wants to configure algorithms can be determined from the level parameter.
Step S320: a plurality of image processing algorithms are displayed.
A plurality of image processing algorithms selectable by the user are displayed on the display screen. The algorithms may be presented by their processing effect rather than by the algorithm names themselves, since users understand effects more readily. For example, blocking artifacts are removed by a loop deblocking filter, but if the displayed name were 'loop deblocking filter' the user might not understand its purpose; the algorithm can therefore be displayed with a functional description, such as 'remove blockiness' or 'remove mosaic'.
The specific display processing algorithms are not limited in the embodiments of the present application, and for example, an image processing algorithm for improving brightness, an image processing algorithm for adjusting saturation, an image processing algorithm for adjusting contrast, an image processing algorithm for adjusting details, an image processing algorithm for removing a block effect, an image processing algorithm for removing edge aliasing, an image processing algorithm for removing a stripe, and the like may be displayed.
The image processing algorithms displayed in response to the algorithm setting requests of the various levels may be the same. Optionally, to reflect the differences between levels, the image processing algorithms displayed for algorithm setting requests of different levels may also differ.
Step S330: any one or more selected from the plurality of image processing algorithms is received.
Step S340: and taking the selected image processing algorithm as an image processing algorithm included in the enhancement processing mode corresponding to the grade in the algorithm setting request.
The user may select from the displayed image processing algorithms according to actual processing needs; the number selected, one or more, is not limited. When the selection is complete, it can be submitted through a confirmation control such as a 'Done' or 'OK' button. When any one or more image processing algorithms selected by the user from the plurality of image processing algorithms are received, the selected algorithms are used as the image processing algorithms corresponding to the level being configured, so that the user can differentiate the configuration of each level according to personal preference.
For example, if the user wants to set the enhancement processing mode of the first level, the first level may be selected on the setting interface. And displaying an image processing algorithm for improving brightness, an image processing algorithm for adjusting saturation, an image processing algorithm for adjusting contrast, an image processing algorithm for adjusting details, an image processing algorithm for removing a blocking effect, an image processing algorithm for removing edge sawteeth and an image processing algorithm for removing stripes on a setting interface of the first level. If the user confirms that the selected image processing algorithm is the image processing algorithm for adjusting the saturation and the image processing algorithm for removing the blocking effect, the image processing algorithm included in the enhancement processing mode corresponding to the first level is set as the image processing algorithm for adjusting the saturation and the image processing algorithm for removing the blocking effect.
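A sketch of storing such per-level selections (steps S310 to S340) follows, with hypothetical storage and label strings.

// Per-level algorithm selections made by the user; names and storage are assumptions.
object LevelSettings {
    private val selections = mutableMapOf<Int, Set<String>>()

    // Called when the user submits an algorithm setting request for a level (steps S330-S340).
    fun applyAlgorithmSettingRequest(level: Int, selectedAlgorithms: Set<String>) {
        selections[level] = selectedAlgorithms   // becomes the mode's algorithm list for that level
    }

    fun algorithmsFor(level: Int): Set<String> = selections[level] ?: emptySet()
}

// The example from the text: the first level set to saturation adjustment plus deblocking.
fun configureFirstLevelExample() {
    LevelSettings.applyAlgorithmSettingRequest(1, setOf("adjust saturation", "remove blockiness"))
}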
Step S350: receiving a target level selected from a plurality of different levels corresponding to video enhancement, where different video enhancement levels correspond to different enhancement processing modes that differ in the image-quality enhancement they apply to the video.
Step S360: and acquiring an enhancement processing mode corresponding to the target level.
Step S370: and performing enhancement processing on the video through the acquired enhancement processing mode.
The enhancement processing mode obtained for the target level is the one set by the user for that level, so enhancing according to the target level achieves the enhancement effect the user wants and better satisfies the individual enhancement preferences of different users.
The application also provides an embodiment, and in the embodiment, the user can select the enhancement algorithm corresponding to the specific desired enhancement effect from the image processing algorithms corresponding to the target level. Specifically, referring to fig. 11, the method provided in this embodiment includes:
step S410: receiving a target level selected from a plurality of different levels corresponding to video enhancement, wherein the levels of video enhancement are different and the corresponding enhancement processing modes are different for the enhancement image quality of the video.
Step S420: and displaying a plurality of image processing algorithms corresponding to the target grade.
A variety of image processing algorithms may be set for each level. When a target level selected from the different levels is received, the image processing algorithms corresponding to the target level may be displayed for the user to choose from. The algorithms may also be displayed by their functional descriptions.
Step S430: a selection of any one or more of the plurality of image processing algorithms is received.
Step S440: using the selected image processing algorithm as the image processing algorithm included in the enhancement processing mode corresponding to the target level.
When the user selects and confirms one or more of the image processing algorithms corresponding to the target level, the selected algorithms are taken as the image processing algorithms included in the enhancement processing mode corresponding to the target level. That is, the enhancement processing mode for the target level consists of the image processing algorithms the user selected for that level.
Step S450: performing enhancement processing on the video using the acquired enhancement processing mode.
The video is processed in the enhancement processing mode corresponding to the target level, where the image processing algorithms applied are those the user selected from the plurality of image processing algorithms corresponding to the target level.
In this embodiment of the application, after selecting the level, the user can choose which of the target level's image processing algorithms to apply, so as to match the current video processing requirement. For example, if the remaining battery power of the electronic device is low, the user may select fewer image processing algorithms that still meet the basic video enhancement requirement, reducing power consumption during video enhancement.
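A minimal sketch of this per-session selection, assuming the same illustrative names as in the earlier sketches: only the algorithms the user ticks from the target level's list are kept for processing.

```kotlin
// Hypothetical sketch of steps S410-S450: filter the target level's algorithms by the user's selection.
enum class ImageAlgorithm { BRIGHTNESS, SATURATION, CONTRAST, DETAIL, DEBLOCKING, DEALIASING, DEBANDING }

fun algorithmsToApply(
    targetLevel: Int,
    modesByLevel: Map<Int, Set<ImageAlgorithm>>,
    userSelection: Set<ImageAlgorithm>
): Set<ImageAlgorithm> {
    // Step S420: these are the algorithms offered for the target level.
    val offered = modesByLevel[targetLevel] ?: emptySet()
    // Steps S430/S440: keep only what the user selected; e.g. on low battery the user
    // may pick a smaller subset to reduce power consumption during enhancement.
    return offered intersect userSelection
}
```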
An embodiment of the present application further provides a video processing apparatus; please refer to fig. 12. The apparatus 500 includes: a level receiving module 510, configured to receive a target level selected from a plurality of different levels of video enhancement, where different video enhancement levels correspond to different enhancement processing modes that produce different enhanced image quality; a processing mode obtaining module 520, configured to obtain the enhancement processing mode corresponding to the target level; and a processing module 530, configured to perform enhancement processing on the video in the acquired enhancement processing mode, where the enhancement processing improves the image quality of the video frames by adjusting image parameters of the video.
Optionally, the apparatus may further include a setting module configured to display a plurality of image processing algorithms; receive any one or more selected from the plurality of image processing algorithms; and use the selected image processing algorithm as the image processing algorithm included in the enhancement processing mode corresponding to the level in the algorithm setting request.
Optionally, the processing mode obtaining module 520 may include an algorithm display unit, configured to display a plurality of image processing algorithms corresponding to the target level; a selection receiving unit, configured to receive a selection of any one or more of the plurality of image processing algorithms; and a processing mode determining unit, configured to use the selected image processing algorithm as the image processing algorithm included in the enhancement processing mode corresponding to the target level.
Optionally, the electronic device may store a correspondence table between level parameters and enhancement processing modes, where different level parameters correspond to different levels. The processing mode obtaining module 520 may include a parameter searching unit, configured to look up the level parameter corresponding to the target level in the correspondence table, and a mode determining unit, configured to determine the enhancement processing mode corresponding to the found level parameter as the enhancement processing mode corresponding to the target level.
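A minimal sketch of such a correspondence table follows, assuming an illustrative EnhancementMode type and example table contents; the actual mapping would be whatever the device or the user configures.

```kotlin
// Hypothetical correspondence table: level parameter -> enhancement processing mode.
data class EnhancementMode(val name: String, val algorithms: List<String>)

val correspondenceTable: Map<Int, EnhancementMode> = mapOf(
    1 to EnhancementMode("first", listOf("brightness", "saturation", "contrast", "detail",
                                         "deblocking", "dealiasing", "debanding")),
    2 to EnhancementMode("second", listOf("brightness", "saturation", "contrast", "detail")),
    3 to EnhancementMode("third", listOf("deblocking", "dealiasing", "debanding"))
)

fun modeForTargetLevel(levelParameter: Int): EnhancementMode? =
    // Look up the level parameter, then take its mode as the mode for the target level.
    correspondenceTable[levelParameter]
```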
Optionally, different video resolutions correspond to different correspondence tables, and the enhancement processing modes corresponding to the same level parameter differ between tables. The processing mode obtaining module 520 may further include a resolution obtaining unit, configured to obtain the resolution of the video, and a relation table determining unit, configured to determine the correspondence table corresponding to that resolution and use it as the correspondence table in which the level parameter is looked up.
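One way this resolution dependence might look in code is a table of tables keyed by a resolution bucket: first pick the table for the video's resolution, then look up the level parameter in it. The buckets, thresholds, and algorithm lists below are illustrative assumptions only, not values from the patent.

```kotlin
// Hypothetical sketch: one correspondence table per resolution bucket.
data class EnhancementMode(val algorithms: List<String>)

enum class ResolutionBucket { LOW, HIGH }

fun bucketFor(width: Int, height: Int): ResolutionBucket =
    if (width * height <= 1280 * 720) ResolutionBucket.LOW else ResolutionBucket.HIGH

val tablesByResolution: Map<ResolutionBucket, Map<Int, EnhancementMode>> = mapOf(
    // Assumed example: lower-resolution video gets more artifact removal.
    ResolutionBucket.LOW to mapOf(
        1 to EnhancementMode(listOf("deblocking", "dealiasing", "debanding", "detail"))
    ),
    ResolutionBucket.HIGH to mapOf(
        1 to EnhancementMode(listOf("brightness", "saturation", "contrast"))
    )
)

fun modeFor(levelParameter: Int, width: Int, height: Int): EnhancementMode? {
    // Determine the correspondence table matching the video's resolution,
    // then look up the level parameter in that table.
    val table = tablesByResolution[bucketFor(width, height)] ?: return null
    return table[levelParameter]
}
```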
Optionally, in this embodiment of the application, different video enhancement levels correspond to enhancement processing modes that include different image processing algorithms, so that the higher the video enhancement level, the better the enhanced image quality produced by the corresponding enhancement processing mode.
The first level of video enhancement may include an image processing algorithm for improving brightness, an image processing algorithm for adjusting saturation, an image processing algorithm for adjusting contrast, an image processing algorithm for adjusting details, an image processing algorithm for removing blocking artifacts, an image processing algorithm for removing edge aliasing, and an image processing algorithm for removing banding; the second level of video enhancement includes the image processing algorithms for improving brightness, adjusting saturation, adjusting contrast, and adjusting details; the third level of video enhancement includes the image processing algorithms for removing blocking artifacts, removing edge aliasing, and removing banding, where the third level, the second level, and the first level are successively higher.
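Expressed as algorithm sets, the three preset levels listed above might look like the following sketch; the enum names stand in for the patent's algorithm descriptions and are not literal identifiers from it.

```kotlin
// A minimal sketch of the three preset levels as algorithm sets (names are assumptions).
enum class ImageAlgorithm { BRIGHTNESS, SATURATION, CONTRAST, DETAIL, DEBLOCKING, DEALIASING, DEBANDING }

// First level (highest): all seven algorithms; second level: parameter adjustments only;
// third level (lowest): artifact removal only.
val firstLevel: Set<ImageAlgorithm> = ImageAlgorithm.values().toSet()
val secondLevel: Set<ImageAlgorithm> = setOf(
    ImageAlgorithm.BRIGHTNESS, ImageAlgorithm.SATURATION,
    ImageAlgorithm.CONTRAST, ImageAlgorithm.DETAIL
)
val thirdLevel: Set<ImageAlgorithm> = setOf(
    ImageAlgorithm.DEBLOCKING, ImageAlgorithm.DEALIASING, ImageAlgorithm.DEBANDING
)
```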
It will be clear to those skilled in the art that, for convenience and brevity of description, the method embodiments described above may refer to one another; for the specific working processes of the devices and modules described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module.
Referring to fig. 13, a block diagram of an electronic device 600 according to an embodiment of the present disclosure is shown. The electronic device 600 may be a smartphone, a tablet computer, or other electronic device capable of video processing. The electronic device includes one or more processors 610 (only one shown), memory 620, and one or more programs. Wherein the one or more programs are stored in the memory 620 and configured to be executed by the one or more processors 610. The one or more programs are configured to perform the methods described in the foregoing embodiments.
The processor 610 may include one or more processing cores. Using various interfaces and lines, the processor 610 connects the components throughout the electronic device 600, and performs the functions of the electronic device 600 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 620 and invoking data stored in the memory 620. Optionally, the processor 610 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 610 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, user interface, and applications; the GPU renders and draws display content; and the modem handles wireless communication. The modem may also not be integrated into the processor 610 and instead be implemented by a separate communication chip.
The memory 620 may include Random Access Memory (RAM) or Read-Only Memory (ROM). The memory 620 may be used to store instructions, programs, code sets, or instruction sets. The memory 620 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function, instructions for implementing the method embodiments described above, and the like. The data storage area may store data created by the electronic device in use, such as a phone book, audio and video data, and chat records.
In addition, the electronic device 600 may further include a display screen for displaying the video to be displayed.
Referring to fig. 14, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 700 has stored therein program code that can be called by a processor to execute the methods described in the above-described method embodiments.
The computer-readable storage medium 700 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 700 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 700 has storage space for program code 710 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 710 may, for example, be compressed in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (8)

1. A method of video processing, the method comprising:
receiving an algorithm setting request, wherein the algorithm setting request comprises the level for which the algorithm is to be set; video enhancement has a plurality of different levels, the plurality of different levels comprising a first level, a second level and a third level, and different video enhancement levels correspond to different enhancement processing modes that differ in the enhanced image quality of the video; when an algorithm setting request is received, an algorithm setting request submitted by a user for the corresponding level is received, the algorithm setting request carrying the level parameter of that level, and the level for which the user wants to set the algorithm is determined according to the level parameter;
displaying a plurality of image processing algorithms;
receiving any one or more selected from the plurality of image processing algorithms;
using the selected image processing algorithm as the image processing algorithm included in the enhancement processing mode corresponding to the level in the algorithm setting request;
receiving a target level selected from a plurality of different levels corresponding to video enhancement;
after the target level is determined, acquiring the resolution of the current video, wherein the resolution of the video comprises a first resolution and a second resolution, and the first resolution is smaller than the second resolution;
acquiring the enhancement processing mode corresponding to the target level, wherein for the same determined target level, videos of different resolutions use different enhancement processing modes, comprising: if the target level is the first level, determining that the enhancement processing mode is a first enhancement processing mode when the resolution of the current video is the first resolution, and determining that the enhancement processing mode is a second enhancement processing mode when the resolution of the current video is the second resolution, wherein the image processing algorithms included in the second enhancement processing mode are different from the image processing algorithms included in the first enhancement processing mode;
and performing enhancement processing on the video using the acquired enhancement processing mode, wherein the enhancement processing improves the image quality of the video frames by adjusting the image parameters of the video.
2. The method according to claim 1, wherein the obtaining of the enhancement processing mode corresponding to the target level comprises:
displaying a plurality of image processing algorithms corresponding to the target level;
receiving a selection of any one or more of the plurality of image processing algorithms;
and using the selected image processing algorithm as the image processing algorithm included in the enhancement processing mode corresponding to the target level.
3. The method according to claim 1, wherein a correspondence table between level parameters and enhancement processing modes is stored, wherein different level parameters correspond to different levels, and acquiring the enhancement processing mode corresponding to the target level comprises:
looking up the level parameter corresponding to the target level in the correspondence table, comprising: different video resolutions correspond to different correspondence tables; determining the correspondence table corresponding to the resolution of the current video, and using the determined correspondence table as the correspondence table in which the level parameter is looked up; and, if the target level is the first level, looking up the level parameter corresponding to the first level in the correspondence table;
and determining the enhancement processing mode corresponding to the found level parameter as the enhancement processing mode corresponding to the target level.
4. The method of claim 1, wherein the different video enhancement levels corresponding to enhancement processing modes that differ in the enhanced image quality of the video comprises:
the different video enhancement levels correspond to enhancement processing modes that include different image processing algorithms, so that the higher the video enhancement level, the better the enhanced image quality produced by the corresponding enhancement processing mode.
5. The method of claim 4, wherein the different video enhancement levels corresponding to enhancement processing modes that include different image processing algorithms comprises:
the first level of video enhancement includes an image processing algorithm for improving brightness, an image processing algorithm for adjusting saturation, an image processing algorithm for adjusting contrast, an image processing algorithm for adjusting details, an image processing algorithm for removing blocking artifacts, an image processing algorithm for removing edge aliasing, and an image processing algorithm for removing banding;
the second level of video enhancement includes the image processing algorithms for improving brightness, adjusting saturation, adjusting contrast, and adjusting details;
the third level of video enhancement includes the image processing algorithms for removing blocking artifacts, removing edge aliasing, and removing banding, wherein the third level, the second level, and the first level are successively higher.
6. A video processing apparatus, characterized in that the apparatus comprises:
a setup module to:
receiving an algorithm setting request, wherein the algorithm setting request comprises the level for which the algorithm is to be set; video enhancement has a plurality of different levels, the plurality of different levels comprising a first level, a second level and a third level, and different video enhancement levels correspond to different enhancement processing modes that differ in the enhanced image quality of the video; when an algorithm setting request is received, an algorithm setting request submitted by a user for the corresponding level is received, the algorithm setting request carrying the level parameter of that level, and the level for which the user wants to set the algorithm is determined according to the level parameter;
displaying a plurality of image processing algorithms;
receiving any one or more selected from the plurality of image processing algorithms; and
use the selected image processing algorithm as the image processing algorithm included in the enhancement processing mode corresponding to the level in the algorithm setting request;
a level receiving module for receiving a target level selected from a plurality of different levels corresponding to video enhancement;
a processing mode obtaining module, configured to: after the target level is determined, acquire the resolution of the current video, wherein the resolution of the video comprises a first resolution and a second resolution, the first resolution being smaller than the second resolution; and acquire the enhancement processing mode corresponding to the target level, wherein for the same determined target level, videos of different resolutions use different enhancement processing modes, comprising: if the target level is the first level, determining that the enhancement processing mode is a first enhancement processing mode when the resolution of the current video is the first resolution, and determining that the enhancement processing mode is a second enhancement processing mode when the resolution of the current video is the second resolution, wherein the image processing algorithms included in the second enhancement processing mode are different from the image processing algorithms included in the first enhancement processing mode;
and a processing module, configured to perform enhancement processing on the video in the acquired enhancement processing mode, wherein the enhancement processing improves the image quality of the video frames by adjusting the image parameters of the video.
7. An electronic device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-5.
8. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 5.
CN201811427973.8A 2018-11-27 2018-11-27 Video processing method and device, electronic equipment and storage medium Active CN109640167B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811427973.8A CN109640167B (en) 2018-11-27 2018-11-27 Video processing method and device, electronic equipment and storage medium
PCT/CN2019/109855 WO2020108091A1 (en) 2018-11-27 2019-10-08 Video processing method and apparatus, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811427973.8A CN109640167B (en) 2018-11-27 2018-11-27 Video processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109640167A CN109640167A (en) 2019-04-16
CN109640167B true CN109640167B (en) 2021-03-02

Family

ID=66069735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811427973.8A Active CN109640167B (en) 2018-11-27 2018-11-27 Video processing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN109640167B (en)
WO (1) WO2020108091A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109640167B (en) * 2018-11-27 2021-03-02 Oppo广东移动通信有限公司 Video processing method and device, electronic equipment and storage medium
CN112464691A (en) * 2019-09-06 2021-03-09 北京字节跳动网络技术有限公司 Image processing method and device
CN110662115B (en) * 2019-09-30 2022-04-22 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium
CN111954285A (en) * 2020-08-05 2020-11-17 Oppo广东移动通信有限公司 Power saving control method and device, terminal and readable storage medium
CN113507643B (en) * 2021-07-09 2023-07-07 Oppo广东移动通信有限公司 Video processing method, device, terminal and storage medium
JP2023105766A (en) * 2022-01-19 2023-07-31 キヤノン株式会社 Image processing device, image processing method, and program
CN114501139A (en) * 2022-03-31 2022-05-13 深圳思谋信息科技有限公司 Video processing method and device, computer equipment and storage medium
CN114827723B (en) * 2022-04-25 2024-04-09 阿里巴巴(中国)有限公司 Video processing method, device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105592322A (en) * 2014-09-19 2016-05-18 青岛海尔电子有限公司 Method and device for optimizing media data
CN107659828A (en) * 2017-10-30 2018-02-02 广东欧珀移动通信有限公司 Video image quality adjustment method, device, terminal device and storage medium
CN108391139A (en) * 2018-01-15 2018-08-10 上海掌门科技有限公司 A kind of video enhancement method, medium and equipment in net cast

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7813425B2 (en) * 2006-11-29 2010-10-12 Ipera Technology, Inc. System and method for processing videos and images to a determined quality level
US20100259683A1 (en) * 2009-04-08 2010-10-14 Nokia Corporation Method, Apparatus, and Computer Program Product for Vector Video Retargeting
CN102724467B (en) * 2012-05-18 2016-06-29 中兴通讯股份有限公司 Promote method and the terminal unit of video frequency output definition
KR101974367B1 (en) * 2012-09-25 2019-05-02 삼성전자주식회사 Apparatus and method for video decoding for video quality enhancement of moving picture
KR20160103012A (en) * 2014-01-03 2016-08-31 톰슨 라이센싱 Method, apparatus, and computer program product for optimising the upscaling to ultrahigh definition resolution when rendering video content
CN104202604B (en) * 2014-08-14 2017-09-22 深圳市腾讯计算机系统有限公司 The method and apparatus of video source modeling
CN107277301B (en) * 2016-04-06 2019-11-29 杭州海康威视数字技术股份有限公司 The image analysis method and its system of monitor video
CN108810649B (en) * 2018-07-12 2021-12-21 深圳创维-Rgb电子有限公司 Image quality adjusting method, intelligent television and storage medium
CN109640167B (en) * 2018-11-27 2021-03-02 Oppo广东移动通信有限公司 Video processing method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105592322A (en) * 2014-09-19 2016-05-18 青岛海尔电子有限公司 Method and device for optimizing media data
CN107659828A (en) * 2017-10-30 2018-02-02 广东欧珀移动通信有限公司 Video image quality adjustment method, device, terminal device and storage medium
CN108391139A (en) * 2018-01-15 2018-08-10 上海掌门科技有限公司 A kind of video enhancement method, medium and equipment in net cast

Also Published As

Publication number Publication date
WO2020108091A1 (en) 2020-06-04
CN109640167A (en) 2019-04-16

Similar Documents

Publication Publication Date Title
CN109640167B (en) Video processing method and device, electronic equipment and storage medium
CN109983757B (en) View dependent operations during panoramic video playback
CN109729405B (en) Video processing method and device, electronic equipment and storage medium
US9992445B1 (en) Systems and methods for identifying a video aspect-ratio frame attribute
CN109660821B (en) Video processing method and device, electronic equipment and storage medium
CN109983500B (en) Flat panel projection of reprojected panoramic video pictures for rendering by an application
CN112204993B (en) Adaptive panoramic video streaming using overlapping partitioned segments
US20200351442A1 (en) Adaptive panoramic video streaming using composite pictures
US10720091B2 (en) Content mastering with an energy-preserving bloom operator during playback of high dynamic range video
CN109168065B (en) Video enhancement method and device, electronic equipment and storage medium
US20170013261A1 (en) Method and device for coding image, and method and device for decoding image
US8218622B2 (en) System and method for processing videos and images to a determined quality level
CN110868625A (en) Video playing method and device, electronic equipment and storage medium
WO2011014637A1 (en) System and method of compressing video content
WO2010100089A1 (en) Method and device for displaying a sequence of pictures
US9161030B1 (en) Graphics overlay system for multiple displays using compressed video
US9053752B1 (en) Architecture for multiple graphics planes
CN109587561B (en) Video processing method and device, electronic equipment and storage medium
CN109587555B (en) Video processing method and device, electronic equipment and storage medium
US8483389B1 (en) Graphics overlay system for multiple displays using compressed video
CN109379630B (en) Video processing method and device, electronic equipment and storage medium
CN109120979B (en) Video enhancement control method and device and electronic equipment
TW200939785A (en) Method and system for motion compensated picture rate up-conversion using information extracted from a compressed video stream
CN114827620A (en) Image processing method, apparatus, device and medium
US8526506B1 (en) System and method for transcoding with quality enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant