CN113507643A - Video processing method, device, terminal and storage medium - Google Patents


Info

Publication number: CN113507643A
Application number: CN202110778835.XA
Authority: CN (China)
Prior art keywords: resolution, sharpness, target, video frame, video
Legal status: Granted; currently Active (the legal status, the listed assignees and the priority date are assumptions, not legal conclusions; Google has not performed a legal analysis and makes no representation as to their accuracy)
Other languages: Chinese (zh)
Other versions: CN113507643B (en)
Inventor: 胡杰
Current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Legal events: application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd; priority to CN202110778835.XA; publication of CN113507643A; application granted; publication of CN113507643B; anticipated expiration.

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263: Reformatting operations involving altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N21/440227: Reformatting operations involving decomposing into layers, e.g. base layer and one or more enhancement layers
    • H04N21/441: Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • G06T5/73
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20172: Image enhancement details
    • G06T2207/20192: Edge enhancement; Edge preservation

Abstract

The embodiment of the application discloses a video processing method, a video processing device, a terminal and a storage medium, and belongs to the technical field of video processing. The method comprises the following steps: in response to a video processing instruction, determining a target sharpness corresponding to an original video frame based on the resolution of the original video frame; sharpening the original video frame based on the target sharpness to obtain a target video frame; and performing video processing on the target video frame based on the video processing mode indicated by the video processing instruction. In this way, problems such as white edges and mosaic artifacts caused by an excessively high sharpness, or the poor definition-improvement effect caused by an excessively low sharpness, are avoided while the definition of the original video frame is improved, and the video playing effect is adjusted intelligently.

Description

Video processing method, device, terminal and storage medium
Technical Field
The embodiment of the application relates to the technical field of video processing, in particular to a video processing method, a video processing device, a video processing terminal and a storage medium.
Background
With the development of display technology, users have increasingly high requirements for display effects; for example, in a video playing scene, users expect the video presented on the display screen to be clear, beautiful and pleasing to the eye.
In the related art, in order to improve the video display effect, the image to be displayed may be processed before display in dimensions such as color, contrast and definition, so as to improve the quality of the finally displayed video. For example, fixed adjustment parameters such as a fixed contrast, a fixed sharpness and a fixed saturation are set; after a layer is synthesized, image processing is performed on the layer according to these parameters, and display is then performed based on the processed layer.
However, because images differ from one another, using the same image processing parameters for all of them inevitably yields a poor processing effect for some images and reduces the video playing quality.
Disclosure of Invention
The embodiment of the application provides a video processing method, a video processing device, a video processing terminal and a storage medium. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a video processing method, where the method includes:
in response to a video processing instruction, determining a target sharpness corresponding to an original video frame based on the resolution of the original video frame;
sharpening the original video frame based on the target sharpness to obtain a target video frame;
and performing video processing on the target video frame based on the video processing mode indicated by the video processing instruction.
In another aspect, an embodiment of the present application provides a video processing apparatus, where the apparatus includes:
the determining module is used for responding to a video processing instruction and determining a target sharpness corresponding to an original video frame based on the resolution of the original video frame;
the sharpening processing module is used for sharpening the original video frame based on the target sharpness to obtain a target video frame;
and the video processing module is used for carrying out video processing on the target video frame based on the video processing mode indicated by the video processing instruction.
In another aspect, an embodiment of the present application provides a terminal, where the terminal includes a processor and a memory, where the memory stores at least one program, and the at least one program is loaded and executed by the processor to implement the video processing method according to the above aspect.
In another aspect, the present application provides a computer-readable storage medium, which stores at least one instruction for execution by a processor to implement the video processing method according to the above aspect.
According to another aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the terminal executes the video processing method provided in the above-mentioned alternative implementation.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
in a video processing scene, the resolution corresponding to an original video frame is obtained, and the original video frame is sharpened according to the target sharpness corresponding to the resolution, so that the display effect of the original video frame is improved; because the target sharpness used for sharpening the original video frame is determined by the resolution corresponding to the original video frame, problems such as white edges and mosaic artifacts caused by an excessively high sharpness, and the poor definition-improvement effect caused by an excessively low sharpness, are avoided while the definition of the original video frame is improved, and the video playing effect is adjusted intelligently.
Drawings
FIG. 1 illustrates a flow chart of a video processing method shown in an exemplary embodiment of the present application;
FIG. 2 illustrates a flow chart of a video processing method according to another exemplary embodiment of the present application;
FIG. 3 is a diagram illustrating a sharpening process for an original video frame according to an exemplary embodiment of the present application;
FIG. 4 illustrates a schematic view of a video display process shown in an exemplary embodiment of the present application;
FIG. 5 illustrates a video encoding diagram according to an exemplary embodiment of the present application;
FIG. 6 illustrates a flow chart of a video processing method according to another exemplary embodiment of the present application;
FIG. 7 is a process diagram illustrating a video processing method according to an exemplary embodiment of the present application;
FIG. 8 is a block diagram illustrating a video processing apparatus according to an exemplary embodiment of the present application;
FIG. 9 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Reference herein to "a plurality" means two or more. "And/or" describes the association relationship of the associated objects, meaning that there may be three relationships; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Referring to fig. 1, a flow chart of a video processing method according to an exemplary embodiment of the present application is shown. The embodiment takes the application of the method to the terminal as an example for explanation, and the method includes:
step 101, in response to a video processing instruction, determining a target sharpness corresponding to an original video frame based on the resolution of the original video frame.
In a video processing scene, in order to improve the video display effect, an original video frame usually needs to be post-processed, and video display is then performed on the basis of the post-processed video frame. Post-processing is generally performed in three dimensions: saturation, contrast and definition, where improving definition usually requires sharpening the original video frame. Current video sources come in various resolutions, such as 8K, 4K, 1080P, 720P and 540P, and the difference in resolution directly affects the display definition of the video; that is, during video post-processing, the resolution of the video itself also affects the post-processing effect. For example, if the resolution of the video is low and a high sharpness parameter is used for sharpening, the processing may fail to improve the definition of the video and may instead cause white edges and mosaic blocks in the image. Therefore, in order to improve the video definition more intelligently and appropriately, in one possible implementation, a resolution dimension is introduced into the video post-processing process so as to provide a reference for video post-processing from the resolution dimension.
It should be noted that the video processing method in this embodiment may be applied before video display is performed at a decoding end, that is, after a terminal receives a video stream pushed by another terminal or device, the video stream is decoded to generate an original video frame for display, and then the video processing method shown in this application is performed on the original video frame, where correspondingly, the original video frame is a video frame obtained by decoding, and a video processing instruction is a video display instruction; optionally, the video processing method in this embodiment of the present application may also be applied before the video stream is sent at the encoding end, that is, after the terminal acquires the original video frame, the video processing method shown in this application is executed on the original video frame, and then the processed video frame is encoded to obtain the video stream, which is transmitted to other terminals.
Optionally, in addition to video processing scenes, the video processing method may also be used to process any single image, so as to improve the image display effect in image capturing scenes.
In a possible implementation manner, after the terminal acquires an original video frame and receives a video processing instruction, the resolution corresponding to the original video frame is determined, and the target sharpness used for sharpening the original video frame is then determined according to the relationship between resolution and sharpness, so that the influence of the resolution can be considered in the sharpening process and the video processing effect is further improved.
Optionally, the terminal stores a corresponding relationship between the resolution and the sharpness in advance, so that when the resolution corresponding to the original video frame is determined, the target sharpness corresponding to the original video frame is determined based on the resolution and the corresponding relationship.
Optionally, the correspondence between resolution and sharpness may be obtained by developers through sharpening experiments, that is, for images with the same resolution, the display effects after sharpening with different sharpness values are compared, and the sharpness giving the best display effect is determined as the sharpness corresponding to that resolution, so that original video frames of that resolution are sharpened based on this value during application.
Step 102, sharpening the original video frame based on the target sharpness to obtain a target video frame.
Sharpening mainly works by adding high-contrast black and white "isolation zones" on both sides of the edges in a video frame (image), so that the edges look more prominent and sharp; if the sharpness is too high, obvious white and black fringes appear along the edges. Therefore, in order to avoid a poor processing effect caused by an excessively high sharpness, in one possible implementation, an appropriate target sharpness is selected based on the resolution corresponding to the original video frame, and the original video frame is then sharpened according to the target sharpness to obtain a target video frame, thereby improving the definition of the original video frame.
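The patent does not specify a particular sharpening algorithm, so the sketch below uses unsharp masking, a common way of driving edge enhancement with a single scalar strength, to illustrate how a target sharpness value could be applied to a frame. The function name, the Gaussian sigma and the interpretation of sharpness as a blend weight are assumptions for illustration only.

```python
import cv2
import numpy as np

def sharpen_frame(frame: np.ndarray, sharpness: float) -> np.ndarray:
    """Unsharp masking: add back the difference between the frame and a
    blurred copy, scaled by the target sharpness, so that edges become
    more prominent. A larger sharpness value gives stronger edge contrast."""
    blurred = cv2.GaussianBlur(frame, (0, 0), sigmaX=2.0)
    # out = frame * (1 + sharpness) - blurred * sharpness, saturated to [0, 255]
    return cv2.addWeighted(frame, 1.0 + sharpness, blurred, -sharpness, 0)
```

With a parameterization like this, a higher-resolution source would simply be sharpened with a larger strength, consistent with the positive correlation described later in this document.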
Step 103, performing video processing on the target video frame based on the video processing mode indicated by the video processing instruction.
In one possible implementation, after the sharpening process is completed on the original video frame, the subsequent video processing may be performed on the target video frame based on the video processing instruction. Schematically, if the video processing instruction is a video display instruction, correspondingly, the target video frame can be transmitted to the display screen based on the video display instruction, so that the display screen can display the target video frame, and the video definition effect can be improved because the original video is sharpened before the video display; optionally, if the video processing instruction is a video encoding instruction, correspondingly, the target video frame may be encoded based on the video encoding instruction, and after a transmittable video stream is generated, the video stream is transmitted to other devices or terminals, so as to indirectly improve the video display effect displayed by the other terminals or devices.
Optionally, the video processing instruction may also be a video sending instruction, and correspondingly, after the video sending instruction is received, the acquired original video is sharpened to obtain a target video frame, and then the target video frame is sent to other terminals or devices.
Optionally, in order to improve the video display effect, the original video frame not only needs to be sharpened but also needs to be processed in dimensions such as color and contrast. Therefore, in other possible embodiments, the video processing mode indicated by the video processing instruction may be executed after contrast adjustment, sharpness adjustment and saturation adjustment are performed on the original video frame. Schematically, after contrast adjustment, sharpness adjustment and saturation adjustment are performed on the original video frame, a target video frame is obtained and then displayed, so that the image quality of the played video is improved in the three dimensions of color, contrast and definition.
In summary, in the embodiment of the present application, in a video processing scene, the resolution corresponding to an original video frame is acquired and the original video frame is sharpened according to the target sharpness corresponding to that resolution, so that the display effect of the original video frame is improved; because the target sharpness used for sharpening the original video frame is determined by its resolution, problems such as white edges and mosaic artifacts caused by an excessively high sharpness, and the poor definition-improvement effect caused by an excessively low sharpness, are avoided while the definition of the original video frame is improved, and the video playing effect is adjusted intelligently.
In the sharpening process, the sharpening effect is not only related to the resolution of the original video frame but may also be related to the image content contained in the original video frame. Illustratively, consider two video frames with the same resolution, where the first frame contains a human face and the second frame contains grassland. Enhancing the sharpness makes textures clearer; if the two frames are processed with the same strong sharpness, the grassland texture becomes clearer after the frame containing grassland is sharpened, whereas the human face shows obvious wrinkles after the frame containing the face is processed with the stronger sharpness, which may not match the video adjustment effect expected by the user. Therefore, in the sharpening process, the influence of both the resolution and the image content on the sharpness parameter needs to be considered.
Referring to fig. 2, a flow chart of a video processing method according to another exemplary embodiment of the present application is shown. The embodiment takes the application of the method to the terminal as an example for explanation, and the method includes:
step 201, in response to a video processing instruction, performing image recognition on an original video frame, and determining image content corresponding to the original video frame.
In order to accurately judge the image content contained in the original video frame so as to determine which sharpness parameter to use based on the image content, in a possible implementation manner, after the terminal receives a video processing instruction, the terminal performs image recognition on the original video frame so as to identify the image content contained in the original video frame, and further determines which sharpness parameter to use according to the image content.
Optionally, in a video processing scene, the image contents contained in consecutive original video frames are generally similar; therefore, in order to reduce the power consumption of the terminal, in one possible implementation, image recognition may be performed every preset time interval or every preset number of frames to determine the image content contained in the original video frames. Illustratively, the preset time interval may be 1 s, and the preset number of frames may be 10 frames.
Optionally, the image content contained in the original video frame may be identified by using an image recognition model: the original video frame is input into the image recognition model, the prediction probability, output by the model, that the original video frame contains each preset image content is obtained, and the image content contained in the original video frame is determined according to the prediction probabilities. Illustratively, the preset image content may include 5 types: (1) sky, (2) portrait, (3) grassland, (4) building, (5) food. The original video frame is input into the image recognition model, and the output prediction probabilities are: P1 = 0.2, P2 = 0.05, P3 = 0.04, P4 = 0.7, P5 = 0.01; correspondingly, the image content contained in the original video frame can be determined as a building based on the prediction probabilities.
Optionally, the image content corresponding to the original video frame may include a single image content or at least two image contents. Illustratively, if the prediction probabilities are: P1 = 0.45, P2 = 0.05, P3 = 0.04, P4 = 0.45, P5 = 0.01, the original video frame correspondingly includes the following image contents: sky and building.
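As a rough illustration of how the prediction probabilities above could be turned into one or more content labels, the snippet below keeps every preset content whose probability clears a threshold and otherwise falls back to the most likely one. The label order, the 0.3 threshold and the function name are assumptions; the patent does not define a selection rule beyond the two examples.

```python
import numpy as np

# Preset contents in the order of the example: (1) sky, (2) portrait,
# (3) grassland, (4) building, (5) food (order assumed for illustration).
LABELS = ["sky", "portrait", "grassland", "building", "food"]

def contents_from_probs(probs, threshold=0.3):
    """Return every preset content whose predicted probability reaches the
    threshold; fall back to the single most likely content otherwise."""
    picked = [label for label, p in zip(LABELS, probs) if p >= threshold]
    return picked or [LABELS[int(np.argmax(probs))]]

# [0.2, 0.05, 0.04, 0.7, 0.01]   -> ["building"]
# [0.45, 0.05, 0.04, 0.45, 0.01] -> ["sky", "building"]
```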
Step 202, determining the target sharpness corresponding to the original video frame based on the image content and the resolution, wherein different image contents correspond to different sharpness values.
In order to make the sharpening effect meet both the requirement of the image content contained in the original video frame and the requirement of its resolution, that is, to determine the sharpness from the two factors of image content and resolution, in one possible implementation the correspondence of sharpness to resolution and the correspondence of sharpness to image content need to be set in the terminal, so that during the sharpening process the target sharpness corresponding to the original video frame can be determined based on the image content and the resolution corresponding to the original video frame.
Optionally, similar to determining the correspondence between resolution and sharpness, the relationship between image content and sharpness may also be obtained by developers through sharpening experiments on different image contents, so as to obtain a sharpness parameter suited to each image content.
Optionally, when storing the corresponding relationship between the sharpness and the resolution and between the sharpness and the image content, multiple sets of parameters may be divided according to different resolution ranges, and then each set of parameters is divided according to different image contents, that is, each set of parameters is the sharpness corresponding to different image contents in the same resolution range; optionally, a plurality of sets of parameters may be divided according to different image contents, and then each set of parameters is divided according to different resolution ranges, that is, each set of parameters is sharpness corresponding to different resolution ranges in the same image content.
Illustratively, the correspondence between sharpness and resolution and image content may be as shown in table one (resolution first, image content second) and table two (image content first, resolution second).
Table 1
(The contents of Table 1 are provided as an image in the original publication; it lists, for each resolution range, the sharpness corresponding to each image content.)
Table 2
(The contents of Table 2 are provided as an image in the original publication; it lists, for each image content, the sharpness corresponding to each resolution range.)
Optionally, when the corresponding relationship between the resolution and the sharpness is set, different sharpness may be set for different resolutions, or different sharpness may be set for different resolution ranges.
Based on the correspondence between resolution, image content and sharpness shown in Table 1 and Table 2 above, and in order to facilitate searching for the required target sharpness from the correspondence table during application, the parameter may be determined according to the storage form. For example, for Table 1, the target group of parameters corresponding to the resolution may first be selected from the multiple groups of sharpness based on the resolution, and the target sharpness corresponding to the image content may then be selected from the target group based on the image content. In an exemplary example, step 202 may include step 202A and step 202B.
In step 202A, at least one first sharpness corresponding to the resolution is determined based on the resolution.
Based on the storage form of the table one, in order to facilitate searching the target sharpness corresponding to the resolution and the image content from the relationship table, in a possible implementation, the correspondence table may be traversed according to the resolution, and at least one first sharpness corresponding to the resolution (the first sharpness corresponds to the same resolution) may be determined from the correspondence table.
Illustratively, if the resolution corresponding to the original video frame is 1080P and the image content is a portrait, a group of first sharpness values corresponding to the "not less than 1080P" range may be determined from Table 1 according to the 1080P resolution: grass (sharpness 1), sky (sharpness 2), portrait (sharpness 3), food (sharpness 4), building (sharpness 5).
Step 202B, determining the target sharpness from the first sharpness based on the image content.
After the at least one first sharpness corresponding to the resolution is determined, the target sharpness matching the image content is determined from the plurality of first sharpness values based on the image content.
Illustratively, if the at least one first sharpness is: grass (sharpness 1), sky (sharpness 2), portrait (sharpness 3), food (sharpness 4), building (sharpness 5), and the image content is a portrait, the target sharpness determined from the first sharpness is: sharpness 3.
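A minimal sketch of the resolution-first lookup in steps 202A and 202B, assuming Table 1 is stored as groups of per-content sharpness values keyed by a minimum resolution (in vertical lines). Only the values of the "not less than 1080P" group are quoted in the text; the 720P group below uses placeholder values except for the portrait entry (sharpness 8), and all names are illustrative.

```python
# Assumed storage form of Table 1: (minimum vertical resolution, {content: sharpness}).
SHARPNESS_BY_RESOLUTION = [
    (1080, {"grass": 1, "sky": 2, "portrait": 3, "food": 4, "building": 5}),
    (720,  {"grass": 6, "sky": 7, "portrait": 8, "food": 9, "building": 10}),  # placeholders except portrait
]

def target_sharpness_by_resolution(resolution: int, content: str) -> int:
    # Step 202A: select the group of first sharpness values for this resolution.
    for min_resolution, group in SHARPNESS_BY_RESOLUTION:
        if resolution >= min_resolution:
            # Step 202B: select the target sharpness matching the image content.
            return group[content]
    raise KeyError(f"no sharpness group covers resolution {resolution}")

# target_sharpness_by_resolution(1080, "portrait") -> 3, as in the example above
```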
Optionally, if the original video frame includes at least two image contents, at least two target sharpness values may correspondingly be determined. For this case, the target sharpness may be selected according to a content priority, or different target sharpness values may be adopted for different image contents. In an exemplary example, step 202B may further include the following steps.
Firstly, determining a first sharpness corresponding to the image content with the highest content priority as a target sharpness.
The content priority may be preset by developers, set by the user, or dynamically adjusted by the terminal based on the user's habits. For example, if it is learned from the user's habits that the user pays more attention to the portrait display effect, the priority of the portrait image content is correspondingly set as the highest. The priority may also be dynamically adjusted based on the video type; for example, if the video is a food video, food is correspondingly set as the image content with the highest priority, and if the video is a life and emotion video, the portrait is correspondingly set as the image content with the highest priority. The manner of determining the content priority is not limited in the embodiments of the present application.
In a possible implementation manner, by comparing the content priorities of at least two image contents, the first sharpness corresponding to the image content with the highest image content priority is determined as the target sharpness, that is, the video display effect of the image content with the highest image content priority is preferentially ensured.
Illustratively, if the priority order of the image contents is: portrait > food > building > sky > grassland, and the image contents contained in the original video frame are food and building, the sharpness corresponding to food is correspondingly determined as the target sharpness.
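For this first option, a small sketch under the assumption that the priority order is the one quoted in the example; the helper name and data types are made up for illustration.

```python
# Assumed priority order from the example: portrait > food > building > sky > grassland.
CONTENT_PRIORITY = ["portrait", "food", "building", "sky", "grassland"]

def sharpness_by_priority(first_sharpness: dict, contents: list) -> int:
    """Keep the recognised content with the highest priority and return its
    first sharpness as the target sharpness."""
    best = min(contents, key=CONTENT_PRIORITY.index)
    return first_sharpness[best]

# sharpness_by_priority({"food": 4, "building": 5}, ["food", "building"]) -> 4
```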
And secondly, determining at least two target sharpness from the first sharpness based on at least two image contents, wherein different image contents are sharpened by adopting different target sharpness.
In order to enable various image contents in the original video frame to achieve a better display effect, in other possible embodiments, the target sharpness corresponding to the various image contents may also be determined based on at least two image contents, and then different target sharpness is adopted for sharpening different image contents.
Illustratively, if an original video frame contains two image contents, namely food and a building, and the resolution corresponding to the original video frame is 1080P, it is correspondingly determined that the target sharpness corresponding to the food is 4 and the target sharpness corresponding to the building is 5; the region where the food is located in the original video frame is then sharpened with sharpness 4, and the region where the building is located is sharpened with sharpness 5.
Fig. 3 is a schematic diagram illustrating a sharpening process performed on an original video frame according to an exemplary embodiment of the present application. The original video frame 301 contains image contents of both a portrait and a building, and determining the target sharpness based on the image contents and the resolution (1080P) includes: sharpness 3 (portrait) and sharpness 5 (building), and correspondingly, an image area 302 in the original video frame 301 is sharpened with sharpness 3, and an image area 303 is sharpened with sharpness 5.
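For this second option, the sketch below applies a different sharpness to each recognised region, reusing the sharpen_frame helper sketched earlier. How the bounding boxes are obtained (for example from the recognition model) is not specified in the text, so the region format here is an assumption.

```python
def sharpen_regions(frame, regions):
    """Sharpen each recognised region with its own target sharpness.
    `regions` maps an (x, y, w, h) bounding box to a sharpness value."""
    out = frame.copy()
    for (x, y, w, h), sharpness in regions.items():
        out[y:y + h, x:x + w] = sharpen_frame(out[y:y + h, x:x + w], sharpness)
    return out

# e.g. a portrait box sharpened with sharpness 3 and a building box with sharpness 5:
# result = sharpen_regions(frame, {(40, 60, 200, 320): 3, (300, 10, 380, 460): 5})
```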
In another possible embodiment, based on the storage form of the correspondence between resolution, image content and sharpness in Table 2, the target group of parameters corresponding to the image content may first be selected from the multiple groups of sharpness based on the image content, and the target sharpness corresponding to the resolution may then be selected from the target group based on the resolution. In another exemplary example, step 202 may further include step 202C and step 202D.
Step 202C, determining at least one second sharpness corresponding to the image content based on the image content.
Based on the storage form of the second table, in order to facilitate searching the target sharpness corresponding to the resolution and the image content from the relationship table, in a possible implementation manner, the correspondence table may be traversed according to the image content, and at least one second sharpness corresponding to the image content (the second sharpness corresponds to the same image content) may be determined from the correspondence table.
Illustratively, if the image content corresponding to the original video frame is a portrait and the resolution is 720P, at least one second sharpness corresponding to the image content may be determined from Table 2 according to the portrait: not less than 1080P (sharpness 3), 720P-1080P (sharpness 8), 540P-720P (sharpness 13) and less than 540P (sharpness 18).
Step 202D, determining the target sharpness from the second sharpness based on the resolution.
After the at least one second sharpness corresponding to the image content is determined, the target sharpness matching the resolution is determined from the plurality of second sharpness values based on the resolution.
Illustratively, if the at least one second sharpness is: not less than 1080P (sharpness 3), 720P-1080P (sharpness 8), 540P-720P (sharpness 13) and less than 540P (sharpness 18), and the resolution of the original video frame is 720P, then 720P belongs to the 720P-1080P resolution range, and sharpness 8 is correspondingly determined as the target sharpness.
Optionally, if the original video frame includes at least two image contents, the image content with the highest content priority may correspondingly be determined first, and the target sharpness is then determined based on that image content and the resolution.
Optionally, different target sharpness values may be adopted for processing different image contents. Illustratively, if an original video frame includes the two image contents of a portrait and a sky, two groups of second sharpness values are determined according to the portrait and the sky respectively, two target sharpness values are then determined based on the resolution, and the different image contents are further sharpened with their respective target sharpness.
It should be noted that, when the terminal determines the target sharpness based on the image content and the resolution, it may execute the method of steps 202A and 202B, selecting first by resolution and then by image content, or execute the method of steps 202C and 202D, selecting first by image content and then by resolution.
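The content-first lookup of steps 202C and 202D can be sketched symmetrically to the earlier resolution-first sketch, assuming Table 2 is stored per content as a list of resolution ranges. Only the portrait row is quoted in the text; the dictionary and function names are illustrative.

```python
# Assumed storage form of Table 2: per image content, a list of
# (low inclusive, high exclusive, sharpness) resolution ranges.
SHARPNESS_BY_CONTENT = {
    "portrait": [
        (1080, None, 3),   # not less than 1080P
        (720, 1080, 8),    # 720P - 1080P
        (540, 720, 13),    # 540P - 720P
        (0, 540, 18),      # less than 540P
    ],
}

def target_sharpness_by_content(content: str, resolution: int) -> int:
    # Step 202C: the group of second sharpness values for this content.
    for low, high, sharpness in SHARPNESS_BY_CONTENT[content]:
        # Step 202D: the entry whose resolution range contains the frame's resolution.
        if resolution >= low and (high is None or resolution < high):
            return sharpness
    raise KeyError(f"no range covers resolution {resolution}")

# target_sharpness_by_content("portrait", 720) -> 8, as in the example above
```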
Step 203, sharpening the original video frame based on the target sharpness to obtain the target video frame.
The implementation manner of step 203 may refer to the above embodiments, which are not described herein.
Step 204, performing video processing on the target video frame based on the video processing mode indicated by the video processing instruction.
Video processing instructions can be divided into video display instructions and video encoding instructions. If the video processing instruction is a video display instruction, the video processing method may be executed by the video decoding end (the device where the video decoding end is located); if it is a video encoding instruction, the video processing method may be executed by the video encoding end (the device where the video encoding end is located). In one illustrative example, step 204 may include step 204A and step 204B.
Step 204A, displaying the target video frame based on the video display instruction.
In a possible application scene, after an original video frame is obtained by decoding at a video decoding end, before the original video frame is displayed, the original video frame is sharpened by a display component so as to improve the video display effect.
Schematically, as shown in fig. 4, it shows a schematic view of a video display process according to an exemplary embodiment of the present application. In a video playing scene, after the terminal 400 receives a video stream pushed by another device or terminal, the video decoder 401 performs video decoding on the video stream to obtain each frame of original video frame, determines a resolution corresponding to each frame of original video frame, transmits the original video frame and the resolution to the display hardware 402, determines a target sharpness based on the resolution by the display hardware 402, sharpens the original video frame to obtain a target video frame, and transmits the target video frame to the display screen 403 for video display to improve the video playing image quality of the terminal.
Step 204B, performing video encoding on the target video frame based on the video encoding instruction.
In another possible application scenario, some decoding ends may not have the function of sharpening the original video frame, or in order to improve the video display efficiency of the decoding ends, the original video frame may be sharpened at a video encoding end, and the sharpened target video frame may be encoded and transmitted to the decoding end, so that the decoding end may directly perform video display after decoding.
Schematically, as shown in fig. 5, a video encoding diagram according to an exemplary embodiment of the present application is shown. In a video playing scene, the display hardware 511 in the terminal 510 determines the target sharpness according to the resolution, sharpens the original video frame based on the target sharpness to obtain a target video frame, transmits the target video frame to the video encoder 512, performs video encoding on the target video frame by the video encoder 512 to generate a video stream, and pushes the video stream to the terminal 520, and after the terminal 520 receives the video stream, performs video decoding on the video stream by the video decoder 521 to obtain a target video frame, and transmits the target video frame to the display screen 523 by the display hardware 522 for video display, so that the video playing image quality can be improved.
It should be noted that, when the video sharpening process is performed at the video encoding end, it is necessary to ensure that the resolutions of the video encoding end and the video decoding end are consistent.
In the embodiment, the target sharpness corresponding to the original video frame is determined through the resolution and the image content corresponding to the original video frame, so that the sharpening processing effect of the original video frame meets the requirements of the resolution and the image content, and the video playing image quality is further improved; in addition, after video decoding is carried out at a video decoding end, sharpening processing is carried out on an original video frame, so that the video playing image quality can be improved; by sharpening the original video frame before the video encoding end encodes, the effect of improving the video playing image quality can be realized under the condition that the video decoding end does not have the sharpening processing function, and the video display efficiency can be improved.
In a possible implementation manner, a developer stores, in advance, correspondence between different resolution ranges and sharpness in a terminal, so that during the sharpening process performed by the terminal, a target sharpness that should be used for sharpening an original video frame can be determined according to a resolution range in which the resolution of the original video frame is located.
Referring to fig. 6, a flow chart of a video processing method according to another exemplary embodiment of the present application is shown. The embodiment takes the application of the method to the terminal as an example for explanation, and the method includes:
step 601, storing the candidate resolution range and the candidate sharpness corresponding to the candidate resolution range in a corresponding relation table in an associated manner.
In a possible implementation manner, different resolutions that can achieve the same or similar sharpening effect with the same sharpness are combined to form a candidate resolution range, and the candidate resolution range and its corresponding candidate sharpness are stored in an associated manner to generate the correspondence table.
Optionally, the correspondence table may be updated as the resolution types increase, so that when a new resolution appears, a suitable target sharpness may be selected based on the correspondence table.
Illustratively, the correspondence table between the candidate resolution ranges and the corresponding candidate sharpness may be as shown in table three.
Table 3
(The contents of Table 3 are provided as an image in the original publication; it lists the candidate sharpness corresponding to each candidate resolution range.)
As can be seen from Table 3, if the resolution of the original video frame is high, a relatively strong sharpness can appropriately be used for the sharpening process; if the resolution of the original video frame is low, the sharpness used in the sharpening process should be appropriately reduced. That is, sharpness and resolution are positively correlated.
Step 602, in response to the video processing instruction, determining a target resolution range corresponding to the resolution based on the correspondence table and the resolution.
In a possible implementation manner, after receiving the video processing instruction, the mapping table may be traversed based on the resolution corresponding to the original video frame, a candidate resolution range where the resolution is located in the mapping table is determined, the candidate resolution range is determined as a target resolution range, and then the candidate sharpness corresponding to the target resolution range is determined as the target sharpness.
Optionally, if a candidate resolution range containing the resolution exists in the correspondence table, the sharpness may be determined directly according to that candidate resolution range; if no candidate resolution range containing the resolution exists in the correspondence table, the target sharpness needs to be estimated based on the existing candidate resolution ranges and their corresponding candidate sharpness. In an illustrative example, step 602 may include step 602A, or step 602 may include step 602B and step 602C.
Step 602A, in response to the candidate resolution range containing the resolution existing in the correspondence table, determines the candidate resolution range as the target resolution range.
In a possible implementation manner, if a candidate resolution range including the resolution exists in the correspondence table, the candidate resolution range may be directly determined as a target resolution range, and further determined as a target sharpness according to a candidate sharpness corresponding to the target resolution range.
Illustratively, if the resolution is 1080P, the candidate resolution range corresponding to 1080P is 1080P to 4K, and correspondingly, the candidate resolution range 1080P to 4K is determined as the target resolution range.
Step 602B, in response to that no candidate resolution range containing resolution exists in the correspondence table, obtains the maximum resolution corresponding to each candidate resolution range.
If there is no candidate resolution range containing resolution in the correspondence table, in order to perform sharpening processing on the original video frame, it is necessary to determine a candidate resolution range closer to the resolution from the correspondence table, and further determine the candidate resolution range as the target resolution range. In order to determine which candidate resolution range is closer to the resolution, correspondingly, the maximum resolution corresponding to each candidate resolution range may be obtained first, and then the determination may be performed based on the resolution difference between the maximum resolution and the resolution.
Illustratively, if the candidate resolution ranges included in the correspondence table are: 1080P-4K, 720P-1080P, 540P-720P and <540P, wherein the maximum resolution respectively is as follows: 4K, 1080P, 720P and 540P.
Step 602C, in response to that the maximum resolution is smaller than the resolution and the resolution difference between the maximum resolution and the resolution is minimum, determining the candidate resolution range corresponding to the maximum resolution as the target resolution range.
Because, in the relationship between resolution and sharpness, a higher resolution corresponds to a higher sharpness, it is necessary, when screening the candidate resolution ranges based on the maximum resolution, to ensure that the maximum resolution is smaller than the resolution and that the resolution difference between the maximum resolution and the resolution is the minimum. That is, when the maximum resolution corresponding to a candidate resolution range is smaller than the resolution and the resolution difference between them is the minimum, that candidate resolution range is determined as the target resolution range.
Illustratively, if the resolution is 8K, based on the relationship between the resolution and each maximum resolution, 1080P to 4K may be determined as a target resolution range, and then the candidate sharpness corresponding to the target resolution range may be determined as the target sharpness.
Step 603, determining the candidate sharpness corresponding to the target resolution range as the target sharpness.
After the target resolution range corresponding to the resolution is determined, the target sharpness corresponding to the original video frame can be determined based on the relation between the target resolution range and the candidate sharpness.
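A sketch of the correspondence table of steps 601 to 603, including the fallback of steps 602B and 602C. The table below expresses resolutions in vertical lines (with 4K treated as 2160), and the candidate sharpness values are placeholders chosen only to respect the stated positive correlation between resolution and sharpness.

```python
# Candidate resolution ranges stored with their candidate sharpness (step 601).
CANDIDATE_RANGES = [
    # (low, high, candidate_sharpness), resolutions in vertical lines
    (1080, 2160, 5),   # 1080P - 4K
    (720, 1080, 4),    # 720P - 1080P
    (540, 720, 3),     # 540P - 720P
    (0, 540, 2),       # < 540P
]

def target_sharpness_from_ranges(resolution: int) -> int:
    # Step 602A: a candidate range containing the resolution is the target range.
    for low, high, sharpness in CANDIDATE_RANGES:
        if low <= resolution <= high:
            return sharpness
    # Steps 602B/602C: otherwise pick the range whose maximum resolution is
    # below the given resolution and closest to it.
    below = [(high, sharpness) for low, high, sharpness in CANDIDATE_RANGES if high < resolution]
    return max(below)[1]  # step 603: its candidate sharpness becomes the target sharpness

# target_sharpness_from_ranges(1080) -> 5 (1080P falls in the 1080P-4K range, as in the text)
# target_sharpness_from_ranges(4320) -> 5 (8K falls back to the 1080P-4K range, as in the text)
```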
Step 604, sharpening the original video frame based on the target sharpness to obtain the target video frame.
Step 605, performing video processing on the target video frame based on the video processing mode indicated by the video processing instruction.
The implementation manner of step 604 and step 605 may refer to the above embodiments, which are not described herein.
In the embodiment, the corresponding relation between the candidate resolution range and the candidate sharpness is stored in advance, so that the target sharpness corresponding to the resolution can be determined based on the target resolution range corresponding to the resolution in the video processing process; in addition, aiming at the situation that a candidate resolution range containing the resolution does not exist in the corresponding relation table, candidate sharpness corresponding to a similar candidate resolution range can be adopted to sharpen the original video frame, and the video processing effect is further improved.
In an exemplary example, please refer to fig. 7, which shows a process diagram of a video processing method according to an exemplary embodiment of the present application. For original video frames containing the same image content, different sharpness values are set for different resolutions (image resolutions) to ensure that the definition of video sources with different resolutions can be appropriately improved. Illustratively, when the original video frame corresponding to image resolution 1 is obtained from the encoding/decoding end, display effect processing (executed by the effect processing module) uses sharpness 1 for sharpening, and target video frame 1 is obtained after contrast, sharpness 1 and saturation processing. If the original video frame corresponds to image resolution 2, sharpness 2 is used for sharpening during display effect processing, and target video frame 2 is obtained after contrast, sharpness 2 and saturation processing; by analogy, target video frame 3 and target video frame 4 are obtained for image resolution 3 and image resolution 4, respectively.
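Tying the pieces together, the sketch below mirrors the processing order just described (contrast, resolution-dependent sharpness, saturation). The concrete contrast and saturation formulas are generic placeholders and do not represent the effect processing module's actual operations.

```python
import cv2
import numpy as np

def display_effect_processing(frame, sharpness, contrast=1.0, saturation=1.0):
    """Apply contrast, resolution-dependent sharpening and saturation in turn,
    mirroring the three adjustment dimensions described above (placeholder math)."""
    # Contrast: scale pixel values around mid-grey.
    out = cv2.convertScaleAbs(frame, alpha=contrast, beta=128 * (1 - contrast))
    # Sharpness: unsharp masking with the resolution-dependent target sharpness.
    blurred = cv2.GaussianBlur(out, (0, 0), sigmaX=2.0)
    out = cv2.addWeighted(out, 1.0 + sharpness, blurred, -sharpness, 0)
    # Saturation: scale the S channel in HSV space.
    hsv = cv2.cvtColor(out, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * saturation, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```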
Referring to fig. 8, a block diagram of a video processing apparatus according to an exemplary embodiment of the present application is shown. The apparatus may be implemented as all or a portion of the terminal in software, hardware, or a combination of both. The device includes:
a determining module 801, configured to determine, in response to a video processing instruction, a target sharpness corresponding to an original video frame based on a resolution of the original video frame;
a sharpening module 802, configured to sharpen the original video frame based on the target sharpness to obtain a target video frame;
and the video processing module 803 is configured to perform video processing on the target video frame based on the video processing mode indicated by the video processing instruction.
Optionally, the determining module 801 includes:
the first determining unit is used for carrying out image recognition on the original video frame and determining the image content corresponding to the original video frame;
and the second determining unit is used for determining the target sharpness corresponding to the original video frame based on the image content and the resolution, and different image contents correspond to different sharpness.
Optionally, the second determining unit is further configured to:
determining at least one first sharpness corresponding to the resolution based on the resolution;
determining the target sharpness from the first sharpness based on the image content.
Optionally, the original video frame includes at least two image contents;
the second determining unit is further configured to:
determining the first sharpness corresponding to the image content with the highest content priority as the target sharpness;
or,
and determining at least two target sharpness from the first sharpness based on at least two image contents, wherein different image contents are sharpened by adopting different target sharpness.
Optionally, the second determining unit is further configured to:
determining at least one second sharpness corresponding to the image content based on the image content;
determining the target sharpness from the second sharpness based on the resolution.
Optionally, the video processing instruction is a video display instruction;
the video processing module comprises:
and the display unit is used for displaying the target video frame based on the video display instruction.
Optionally, the video processing instruction is a video encoding instruction;
the video processing module comprises:
and the video coding unit is used for carrying out video coding on the target video frame based on the video coding instruction.
Optionally, the apparatus further comprises:
the storage module is used for storing the candidate resolution range and the candidate sharpness corresponding to the candidate resolution range in a corresponding relation table in a related manner;
the determining module 801 includes:
a third determining unit, configured to determine, based on the correspondence table and the resolution, a target resolution range corresponding to the resolution;
a fourth determining unit, configured to determine the candidate sharpness corresponding to the target resolution range as the target sharpness.
Optionally, the third determining unit is further configured to:
determining a candidate resolution range as the target resolution range in response to a candidate resolution range including the resolution existing in the correspondence table;
or,
responding to the situation that no candidate resolution range containing the resolution exists in the corresponding relation table, and acquiring the maximum resolution corresponding to each candidate resolution range;
and in response to the maximum resolution being smaller than the resolution and the resolution difference between the maximum resolution and the resolution being minimum, determining a candidate resolution range corresponding to the maximum resolution as the target resolution range.
In the embodiment of the application, in a video processing scene, the resolution corresponding to an original video frame is obtained, and the original video frame is sharpened according to the target sharpness corresponding to the resolution, so that the display effect of the original video frame is improved; because the target sharpness used for sharpening the original video frame is determined by the resolution corresponding to the original video frame, problems such as white edges and mosaic artifacts caused by an excessively high sharpness, and the poor definition-improvement effect caused by an excessively low sharpness, are avoided while the definition of the original video frame is improved, and the video playing effect is adjusted intelligently.
Referring to fig. 9, a block diagram of a terminal 900 according to an exemplary embodiment of the present application is shown. The terminal 900 may be an electronic device in which an application is installed and run, such as a smart phone, a tablet computer, an electronic book, a portable personal computer, and the like. Terminal 900 in the present application may include one or more of the following components: a processor 902, a memory 901, and a screen 903.
The processor 902 may include one or more processing cores. The processor 902 interfaces with various parts throughout the terminal 900 using various interfaces and lines, and performs the various functions of the terminal 900 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 901 and by invoking data stored in the memory 901. Optionally, the processor 902 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA) and a Programmable Logic Array (PLA). The processor 902 may integrate one or a combination of a CPU, a GPU, a modem and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing the content to be displayed by the screen 903; and the modem is used to handle wireless communication. It is to be understood that the modem may not be integrated into the processor 902 and may instead be implemented by a separate communication chip.
The memory 901 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 901 includes a non-transitory computer-readable medium. The memory 901 may be used to store an instruction, a program, code, a code set, or an instruction set. The memory 901 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, and an image playing function), instructions for implementing the above method embodiments, and the like; the operating system may be an Android system (including a system developed in depth on the basis of the Android system), an iOS system developed by Apple Inc. (including a system developed in depth on the basis of the iOS system), or another system. The data storage area may also store data created by the terminal 900 during use (such as a phone book, audio and video data, and chat log data), and the like.
The screen 903 is used to receive touch operations performed by a user on or near it with any suitable object such as a finger or a stylus, and to display the user interface of each application. The touch display screen is generally provided on the front panel of the terminal 900. The touch display screen may be designed as a full screen, a curved screen, or a special-shaped screen. The touch display screen may also be designed as a combination of a full screen and a curved screen, or a combination of a special-shaped screen and a curved screen, which is not limited in the embodiments of the present application.
In addition, those skilled in the art will appreciate that the structure of the terminal 900 shown in the above figure does not constitute a limitation on the terminal 900, and the terminal may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components. For example, the terminal 900 further includes components such as a radio frequency circuit, a camera component, a sensor, an audio circuit, a Wireless Fidelity (WiFi) component, a power supply, and a Bluetooth component, which are not described herein again.
The embodiment of the present application further provides a computer-readable storage medium storing at least one instruction, where the at least one instruction is executed by a processor to implement the video processing method according to the above embodiments.
According to another aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the terminal executes the video processing method provided in the above-mentioned alternative implementation.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A method of video processing, the method comprising:
in response to a video processing instruction, determining a target sharpness corresponding to an original video frame based on a resolution of the original video frame;
sharpening the original video frame based on the target sharpness to obtain a target video frame;
and performing video processing on the target video frame based on a video processing mode indicated by the video processing instruction.
2. The method of claim 1, wherein the determining the target sharpness corresponding to the original video frame based on the resolution of the original video frame comprises:
carrying out image recognition on the original video frame, and determining image content corresponding to the original video frame;
and determining the target sharpness corresponding to the original video frame based on the image content and the resolution, wherein different image contents correspond to different sharpness.
3. The method of claim 2, wherein said determining the target sharpness for the original video frame based on the image content and the resolution comprises:
determining at least one first sharpness corresponding to the resolution based on the resolution;
determining the target sharpness from the first sharpness based on the image content.
4. The method of claim 3, wherein the original video frame comprises at least two image contents;
the determining the target sharpness from the first sharpness based on the image content comprises:
determining the first sharpness corresponding to the image content with the highest content priority as the target sharpness;
or,
determining at least two target sharpnesses from the first sharpnesses based on the at least two image contents, wherein different image contents are sharpened using different target sharpnesses.
5. The method of claim 2, wherein said determining the target sharpness for the original video frame based on the image content and the resolution comprises:
determining at least one second sharpness corresponding to the image content based on the image content;
determining the target sharpness from the second sharpness based on the resolution.
6. The method according to any one of claims 1 to 5, wherein the video processing instruction is a video display instruction;
the video processing of the target video frame based on the video processing mode indicated by the video processing instruction includes:
and displaying the target video frame based on the video display instruction.
7. The method according to any one of claims 1 to 5, wherein the video processing instruction is a video encoding instruction;
the video processing of the target video frame based on the video processing mode indicated by the video processing instruction includes:
and performing video coding on the target video frame based on the video coding instruction.
8. The method of any of claims 1 to 5, wherein before determining the target sharpness corresponding to an original video frame based on a resolution of the original video frame in response to a video processing instruction, the method further comprises:
storing, in a correspondence table, a candidate resolution range in association with the candidate sharpness corresponding to the candidate resolution range;
the determining the sharpness of the target corresponding to the original video frame based on the resolution of the original video frame comprises:
determining a target resolution range corresponding to the resolution based on the correspondence table and the resolution;
and determining the candidate sharpness corresponding to the target resolution range as the target sharpness.
9. The method of claim 8, wherein determining the target resolution range corresponding to the resolution based on the correspondence table and the resolution comprises:
determining a candidate resolution range as the target resolution range in response to the correspondence table containing a candidate resolution range that includes the resolution;
or,
in response to no candidate resolution range containing the resolution existing in the correspondence table, acquiring the maximum resolution corresponding to each candidate resolution range;
and in response to a maximum resolution being smaller than the resolution and having the smallest resolution difference from the resolution, determining the candidate resolution range corresponding to that maximum resolution as the target resolution range.
10. A video processing apparatus, characterized in that the apparatus comprises:
the determining module is used for determining, in response to a video processing instruction, a target sharpness corresponding to an original video frame based on a resolution of the original video frame;
the sharpening processing module is used for sharpening the original video frame based on the target sharpness to obtain a target video frame;
and the video processing module is used for carrying out video processing on the target video frame based on the video processing mode indicated by the video processing instruction.
11. A terminal, characterized in that it comprises a processor and a memory, in which at least one program is stored, which is loaded and executed by the processor to implement the video processing method according to any one of claims 1 to 9.
12. A computer-readable storage medium having stored thereon at least one instruction for execution by a processor to implement the video processing method of any of claims 1 to 9.
CN202110778835.XA 2021-07-09 2021-07-09 Video processing method, device, terminal and storage medium Active CN113507643B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110778835.XA CN113507643B (en) 2021-07-09 2021-07-09 Video processing method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110778835.XA CN113507643B (en) 2021-07-09 2021-07-09 Video processing method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN113507643A true CN113507643A (en) 2021-10-15
CN113507643B CN113507643B (en) 2023-07-07

Family

ID=78012065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110778835.XA Active CN113507643B (en) 2021-07-09 2021-07-09 Video processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113507643B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013206175A (en) * 2012-03-28 2013-10-07 Fujitsu Ltd Image determination device, image determination method and computer program for image determination
US20140085507A1 (en) * 2012-09-21 2014-03-27 Bruce Harold Pillman Controlling the sharpness of a digital image
CN108024103A (en) * 2017-12-01 2018-05-11 重庆贝奥新视野医疗设备有限公司 Image sharpening method and device
CN108683826A (en) * 2018-05-15 2018-10-19 腾讯科技(深圳)有限公司 Video data handling procedure, device, computer equipment and storage medium
CN109640167A (en) * 2018-11-27 2019-04-16 Oppo广东移动通信有限公司 Method for processing video frequency, device, electronic equipment and storage medium
CN111402165A (en) * 2020-03-18 2020-07-10 展讯通信(上海)有限公司 Image processing method, device, equipment and storage medium
CN111970510A (en) * 2020-07-14 2020-11-20 浙江大华技术股份有限公司 Video processing method, storage medium and computing device
CN112950491A (en) * 2021-01-26 2021-06-11 上海视龙软件有限公司 Video processing method and device
CN113079319A (en) * 2021-04-07 2021-07-06 杭州涂鸦信息技术有限公司 Image adjusting method and related equipment thereof

Also Published As

Publication number Publication date
CN113507643B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
US10536730B2 (en) Method for processing video frames, video processing chip, and motion estimation/motion compensation MEMC chip
CN109729405B (en) Video processing method and device, electronic equipment and storage medium
US9538171B2 (en) Techniques for streaming video quality analysis
US20210281718A1 (en) Video Processing Method, Electronic Device and Storage Medium
WO2020108018A1 (en) Game scene processing method and apparatus, electronic device, and storage medium
CN109688465B (en) Video enhancement control method and device and electronic equipment
JP7295950B2 (en) Video enhancement control method, device, electronic device and storage medium
CN109120988B (en) Decoding method, decoding device, electronic device and storage medium
US11153525B2 (en) Method and device for video enhancement, and electronic device using the same
CN115089966A (en) Video rendering method and system applied to cloud game and related equipment
WO2020108010A1 (en) Video processing method and apparatus, electronic device and storage medium
CN110858388B (en) Method and device for enhancing video image quality
US11051080B2 (en) Method for improving video resolution and video quality, encoder, and decoder
CN109120979B (en) Video enhancement control method and device and electronic equipment
CN113507643A (en) Video processing method, device, terminal and storage medium
WO2023016191A1 (en) Image display method and apparatus, computer device, and storage medium
CN111383289A (en) Image processing method, image processing device, terminal equipment and computer readable storage medium
CN114390307A (en) Image quality enhancement method, device, terminal and readable storage medium
CN116962742A (en) Live video image data transmission method, device and live video system
CN109218803B (en) Video enhancement control method and device and electronic equipment
CN114079823A (en) Video rendering method, device, equipment and medium based on Flutter
EP3806026A1 (en) Method and apparatus for enhancing video image quality
CN110677728A (en) Method, device and equipment for playing video and storage medium
CN112399196B (en) Image processing method and device
CN109257636B (en) Switching method and device for video enhancement, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant