CN110662115A - Video processing method and device, electronic equipment and storage medium - Google Patents
Video processing method and device, electronic equipment and storage medium
- Publication number: CN110662115A
- Application number: CN201910944758.3A
- Authority
- CN
- China
- Prior art keywords
- video
- video stream
- processing
- video processing
- target
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/4402—Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440263—Processing of video elementary streams involving reformatting operations by altering the spatial resolution, e.g. for displaying on a connected PDA
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
Abstract
The disclosure relates to a video processing method, a video processing apparatus, an electronic device, and a storage medium. The method includes: receiving a video stream transmitted over a network; enhancing the video stream through a preset video processing algorithm to obtain a target video stream; and rendering and playing the target video stream. When the video stream is enhanced through the video processing algorithm, the values of the video processing parameters in the video processing algorithm are set according to the video stream. By adaptively enhancing the video stream before rendering and playing, the quality of the video stream and the visual effect of the video picture are improved.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
With the rise of the mobile internet and the development of the video industry, video service scenarios on mobile terminals are increasing. The subjective quality of video on a mobile terminal directly affects the viewing experience of the user. Subjective quality refers to the user's subjective perception of the viewed video, including definition, brightness, saturation, contrast, whether the playback stutters, whether blocking artifacts appear, whether snow appears, whether the picture is continuous, and the like.
The factors that affect the viewing experience of a mobile user can be summarized as follows. 1. The quality of the original video: the better the quality of the original video, the clearer the video seen by the user. 2. Network conditions: when the network condition is good, the video seen by the user is close to the original video; when the network condition is poor, stuttering, blocking artifacts, snow, or partial loss of the picture may occur. 3. The player: even when the original video quality and the network conditions are the same, the player itself also affects the viewing experience, since the pictures rendered by different players may differ in definition, brightness, saturation, contrast, and so on.
However, in the related art, a mobile player generally adopts fixed, pre-tuned picture rendering parameters and performs no other enhancement processing on the video. Because the quality of different video data also differs, this may result in a poor visual effect of the finally rendered video picture.
Disclosure of Invention
The present disclosure provides a video processing method, an apparatus, an electronic device, and a storage medium, so as to at least solve the problem in the related art that the visual effect of a video picture is poor because enhancement processing cannot be flexibly performed on video data. The technical solution of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a video processing method, including:
receiving a video stream transmitted by a network;
enhancing the video stream through a preset video processing algorithm to obtain a target video stream;
rendering and playing the target video stream;
and when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is obtained by setting according to the video stream.
Optionally, the step of performing enhancement processing on the video stream through a preset video processing algorithm to obtain a target video stream includes:
performing content detection on the video stream, acquiring a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determining a value of a video processing parameter of the target video processing method; and/or,
acquiring a target video processing method for processing the video stream from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining the value of a video processing parameter of the target video processing method; and,
and performing enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method to obtain a target video stream.
Optionally, the step of obtaining a target video processing method for processing the video stream from the video processing algorithm according to the content detection result includes:
acquiring a target video processing method for uniformly processing the video stream from the video processing algorithm according to all content detection results corresponding to the video stream;
and, for each video frame in the video stream, acquiring a target video processing method for processing the video frame from the video processing algorithm according to the content detection result of the video frame.
Optionally, the step of rendering and playing the target video stream includes:
when a definition adjusting instruction input by a user is received in the playing process of the target video stream, performing enhancement processing on the target video stream;
rendering and playing the target video stream after the enhancement processing.
Optionally, when a definition adjustment instruction input by a user is received in the playing process of the target video stream, the step of performing enhancement processing on the target video stream includes:
receiving gesture operation of a user in the playing process of the target video stream;
and executing a video processing algorithm to perform enhancement processing on the target video stream based on the gesture operation.
Optionally, when a definition adjustment instruction input by a user is received in the playing process of the target video stream, the step of performing enhancement processing on the target video stream includes:
calling and displaying a control panel in the playing process of the target video stream;
receiving the operation of a user on the control panel, and acquiring definition adjustment parameters input by the user;
and executing a video processing algorithm according to the definition adjusting parameter, and performing enhancement processing on the target video stream.
According to a second aspect of the embodiments of the present disclosure, there is provided a video processing method, including:
receiving a video stream transmitted by a network;
when a definition adjusting instruction input by a user is received in the playing process of the video stream, performing enhancement processing on the video stream;
rendering and playing the video stream after the enhancement processing.
Optionally, when a definition adjustment instruction input by a user is received in a playing process of the video stream, the step of performing enhancement processing on the video stream includes:
receiving gesture operation of a user in the playing process of the video stream;
and executing a video processing algorithm to perform enhancement processing on the video stream based on the gesture operation.
Optionally, when a definition adjustment instruction input by a user is received in a playing process of the video stream, the step of performing enhancement processing on the video stream includes:
calling and displaying a control panel in the playing process of the video stream;
receiving the operation of a user on the control panel, and acquiring definition adjustment parameters input by the user;
and executing a video processing algorithm according to the definition adjusting parameter, and performing enhancement processing on the video stream.
Optionally, before the step of rendering and playing the enhanced video stream, the method further includes:
enhancing the video stream through a preset video processing algorithm; and when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is obtained by setting according to the video stream.
Optionally, the step of performing enhancement processing on the video stream through a preset video processing algorithm includes:
performing content detection on the video stream, acquiring a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determining a value of a video processing parameter of the target video processing method; and/or,
acquiring a target video processing method for processing the video stream from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining the value of a video processing parameter of the target video processing method; and,
and performing enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method.
Optionally, the step of obtaining a target video processing method for processing the video stream from the video processing algorithm according to the content detection result includes:
acquiring a target video processing method for uniformly processing the video stream from the video processing algorithm according to all content detection results corresponding to the video stream;
and, for each video frame in the video stream, acquiring a target video processing method for processing the video frame from the video processing algorithm according to the content detection result of the video frame.
According to a third aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
a video stream acquisition module configured to perform receiving a video stream transmitted by a network;
the first enhancement processing module is configured to execute enhancement processing on the video stream through a preset video processing algorithm to obtain a target video stream;
a rendering and playing module configured to render and play the target video stream;
and when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is obtained by setting according to the video stream.
Optionally, the first enhancement processing module includes:
the first algorithm screening submodule is configured to perform content detection on the video stream, acquire a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determine a value of a video processing parameter of the target video processing method; and/or,
the second algorithm screening submodule is configured to acquire a target video processing method for processing the video stream from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and determine the value of a video processing parameter of the target video processing method; and,
and the first enhancement processing submodule is configured to perform enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method to obtain a target video stream.
Optionally, the first algorithm screening submodule includes:
a first algorithm obtaining unit configured to acquire, from the video processing algorithm, a target video processing method for performing unified processing on the video stream according to all content detection results corresponding to the video stream;
a second algorithm obtaining unit configured to perform, for each video frame in the video stream, obtaining a target video processing method for processing the video frame from the video processing algorithm according to a content detection result of the video frame.
Optionally, the rendering and playing module includes:
the picture enhancement submodule is configured to perform enhancement processing on the target video stream when receiving a definition adjustment instruction input by a user in the playing process of the target video stream;
and the rendering and playing submodule is configured to render and play the target video stream after the enhancement processing.
Optionally, the picture enhancement sub-module includes:
a gesture operation receiving unit configured to receive a gesture operation of a user during the playing of the target video stream;
and the first picture enhancement unit is configured to execute a video processing algorithm to perform enhancement processing on the target video stream based on the gesture operation.
Optionally, the picture enhancement sub-module includes:
the control panel calling unit is configured to call and display a control panel in the playing process of the target video stream;
the adjustment parameter receiving unit is configured to execute the operation of receiving the user on the control panel and acquire definition adjustment parameters input by the user;
and the second picture enhancement unit is configured to execute a video processing algorithm according to the definition adjusting parameter and perform enhancement processing on the target video stream.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
a video stream acquisition module configured to perform receiving a video stream transmitted by a network;
the second enhancement processing module is configured to perform enhancement processing on the video stream when a definition adjusting instruction input by a user is received in the playing process of the video stream;
and the rendering and playing module is configured to perform rendering and play the video stream after the enhancement processing.
Optionally, the second enhanced processing module includes:
the gesture operation receiving submodule is configured to receive gesture operation of a user in the playing process of the video stream;
and the first picture enhancement submodule is configured to execute a video processing algorithm to perform enhancement processing on the video stream based on the gesture operation.
Optionally, the second enhanced processing module includes:
the control panel calling submodule is configured to call and display a control panel in the playing process of the video stream;
the adjustment parameter receiving submodule is configured to execute the operation of receiving the user on the control panel and acquire definition adjustment parameters input by the user;
and the second picture enhancement submodule is configured to execute a video processing algorithm according to the definition adjusting parameter and perform enhancement processing on the video stream.
Optionally, the video processing apparatus further includes:
the first enhancement processing module is configured to execute enhancement processing on the video stream through a preset video processing algorithm; and when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is obtained by setting according to the video stream.
Optionally, the first enhancement processing module includes:
the first algorithm screening submodule is configured to perform content detection on the video stream, acquire a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determine a value of a video processing parameter of the target video processing method; and/or,
the second algorithm screening submodule is configured to acquire a target video processing method for processing the video stream from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and determine the value of a video processing parameter of the target video processing method; and,
a first enhancement processing sub-module configured to perform enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method.
Optionally, the first algorithm screening submodule includes:
a first algorithm obtaining unit configured to acquire, from the video processing algorithm, a target video processing method for performing unified processing on the video stream according to all content detection results corresponding to the video stream;
a second algorithm obtaining unit configured to perform, for each video frame in the video stream, obtaining a target video processing method for processing the video frame from the video processing algorithm according to a content detection result of the video frame.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement any one of the video processing methods as described above.
According to a sixth aspect of embodiments of the present disclosure, there is provided a storage medium, wherein instructions that, when executed by a processor of an electronic device, enable the electronic device to perform any one of the video processing methods as described above.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product, which, when executed by a processor of an electronic device, enables the electronic device to perform any one of the video processing methods as described above.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects. In the embodiments of the present disclosure, a video stream transmitted by a network is received; the video stream is enhanced through a preset video processing algorithm to obtain a target video stream; and the target video stream is rendered and played. When the video stream is enhanced through the video processing algorithm, the values of the video processing parameters in the video processing algorithm are set according to the video stream. By adaptively enhancing the video stream before rendering and playing, the quality of the video stream and the visual effect of the video picture are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a first flowchart illustrating a video processing method according to an example embodiment.
Fig. 2 is a second flowchart illustrating a video processing method according to an exemplary embodiment.
Fig. 2A illustrates one of the schematic diagrams for setting the sharpness adjustment parameter according to an exemplary embodiment.
Fig. 2B shows a second schematic diagram of setting the sharpness adjustment parameter according to an exemplary embodiment.
Fig. 3 is a third flowchart illustrating a video processing method according to an example embodiment.
Fig. 4 is a fourth flowchart illustrating a method of video processing according to an example embodiment.
Fig. 5 is a fifth flowchart illustrating a method of video processing according to an exemplary embodiment.
Fig. 6 is a first block diagram of a video processing apparatus according to an example embodiment.
Fig. 7 is a second block diagram of a video processing apparatus according to an example embodiment.
Fig. 8 is a third block diagram of a video processing apparatus according to an example embodiment.
Fig. 9 is a fourth block diagram of a video processing apparatus according to an example embodiment.
Fig. 10 is a fifth block diagram of a video processing apparatus according to an example embodiment.
FIG. 11 is a block diagram illustrating an apparatus in accordance with an example embodiment.
FIG. 12 is a block diagram illustrating an apparatus in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a video processing method according to an exemplary embodiment. The video processing method may be used in an electronic device such as a mobile phone or a notebook computer. As shown in Fig. 1, the method includes the following steps.
In step S11, a video stream transmitted by the network is received.
In the embodiment of the present disclosure, a playback architecture is provided in which video post-processing is added to the video player of the user terminal, so that the video stream is enhanced by a preset video processing algorithm before it is rendered and played. This improves the visual effect of the finally rendered and played video stream and improves the user's viewing experience. First, the video stream to be rendered and played, transmitted by a network, needs to be received. In the embodiment of the present disclosure, the video stream may be obtained in any available manner, which is not limited by the embodiments of the present disclosure.
For example, the video stream may be a video stream acquired by a camera in real time, or a video stream stored in a video server, and accordingly the video stream sent by the camera or the video server may be received. Moreover, if the camera, the video server, or the like encoded the initially acquired video stream before transmitting it to the user terminal for rendering and playing, the user terminal receives the encoded video stream corresponding to the original video stream; it can then decode the encoded video stream with a corresponding decoder, and the decoded video stream serves as the video stream received by the user terminal, i.e., the video stream that currently needs to be enhanced. If the video stream received by the user terminal is not encoded, no decoding of the received video stream is required.
In addition, in the embodiment of the present disclosure, some simple preprocessing may be performed on the original video stream before it is encoded. However, such preprocessing cannot anticipate the conditions under which the video stream will actually be displayed later, so it cannot make targeted adjustments; moreover, the encoding and decoding process may have some negative effects on the video stream, for example, the video data may be degraded. Therefore, in the embodiment of the present disclosure, the currently received video stream may be further enhanced and adjusted before being rendered and played.
In step S12, the video stream is enhanced through a preset video processing algorithm to obtain a target video stream; when the video stream is enhanced through the video processing algorithm, the values of the video processing parameters in the video processing algorithm are set according to the video stream.
In order to improve the visual effect of the video stream during display, the video stream can be enhanced through a preset video processing algorithm before video rendering, so that a target video stream is obtained.
For example, the process of playing video at a user terminal such as a mobile phone, a computer, etc. may include the following steps: after being preprocessed and coded, the video stream is transmitted to a user terminal through a network, and the user terminal decodes the received coded video stream to obtain the video stream to be rendered and played in a player of the user terminal.
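Purely as an illustration of the ordering described above (decode, then enhance, then render), a toy end-to-end sketch is given below; the byte-unpacking "decoder", the fixed brightness offset, and the list that stands in for the renderer are placeholders chosen for this example, not the actual codec or rendering path:

```python
import numpy as np

# Toy stand-ins: "decoding" just converts bytes to a small float frame and
# "rendering" just records the frame; these are placeholders, not real codec
# or display calls.

def decode(packet: bytes, h=4, w=4):
    return np.frombuffer(packet, dtype=np.uint8)[: h * w].reshape(h, w) / 255.0

def enhance(frame):
    return np.clip(frame + 0.05, 0.0, 1.0)      # step S12: preset enhancement

rendered = []
packets = [bytes(range(16)) for _ in range(3)]   # "video stream transmitted by a network"
for packet in packets:
    frame = decode(packet)                       # decode the received encoded stream
    frame = enhance(frame)                       # enhance before rendering
    rendered.append(frame)                       # step S13: rendering and playing
```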
In order to improve the visual effect of the rendered and played video, the video stream may be enhanced by a preset video processing algorithm before being rendered and played. The video processing algorithm may include at least one available video processing algorithm and may be preset according to requirements, which is not limited by the embodiments of the present disclosure.
For example, the enhancement processing may include, but is not limited to, sharpening, contrast adjustment, saturation adjustment, brightness adjustment, resizing/scaling, denoising, flaw removal or compensation, and the like. Accordingly, the preset video processing algorithm may include, but is not limited to, a sharpening algorithm, a contrast adjustment algorithm, a saturation adjustment algorithm, a brightness adjustment algorithm, a resizing/scaling algorithm, a denoising algorithm, a flaw removal or compensation algorithm, and the like.
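As a purely illustrative sketch (not part of the claimed method), such preset processing operations could be kept in a simple registry and looked up by name; the operation names, the scipy-based implementations, and the assumption that frames are single-channel float arrays in [0.0, 1.0] are choices made only for this example:

```python
import numpy as np
from scipy import ndimage

# Frames are assumed to be single-channel float arrays in [0.0, 1.0].

def adjust_brightness(frame, offset=0.0):
    return np.clip(frame + offset, 0.0, 1.0)

def denoise(frame, size=3):
    # A simple mean filter stands in for a real denoising algorithm.
    return ndimage.uniform_filter(frame, size=size)

def scale_nearest(frame, scale=1.0):
    # Nearest-neighbour resizing as a stand-in for a resizing/scaling algorithm.
    ys = (np.arange(int(frame.shape[0] * scale)) / scale).astype(int)
    xs = (np.arange(int(frame.shape[1] * scale)) / scale).astype(int)
    return frame[ys][:, xs]

# One possible way to keep the preset video processing algorithms by name.
VIDEO_PROCESSING_ALGORITHMS = {
    "brightness": adjust_brightness,
    "denoise": denoise,
    "resize": scale_nearest,
}
```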
In addition, specific values of video processing parameters in different video processing algorithms can be set according to video streams, so that different degrees of video processing can be performed on different video streams based on the quality of the video streams, and different enhancement processing can be realized.
The video processing parameters in different video processing algorithms are not exactly the same. The video processing parameters included in each video processing algorithm may be specified when the video processing algorithm itself is set, which is not limited by the embodiments of the present disclosure.
For example, the video processing parameters of the contrast adjustment algorithm may include a target contrast, a contrast parameter, and the like; the video processing parameters of the saturation adjustment algorithm may include a target saturation, and the like; the video processing parameters of the brightness adjustment algorithm may include a target brightness, and the like; and the video processing parameters of the resizing/scaling algorithm may include a target cropping size, a magnification scale, and the like.
For example, for the sharpening algorithm, each original video frame in the video stream may be low-pass filtered, the filtered result may be subtracted from the original video frame to obtain a residual, and the residual may be weighted and added back to the original video frame. Assuming that the original input video frame is I, the low-pass filter function is f_low-pass, and the weighting coefficient is a, the sharpened video frame is I_sharp = I + a·(I − f_low-pass(I)). The low-pass filter function and the weighting coefficient affect the final sharpening effect, that is, the target sharpening effect. Therefore, in the embodiment of the present disclosure, the low-pass filter function and the weighting coefficient may be used as video processing parameters of the sharpening algorithm and may be set according to the video stream.
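A minimal sketch of this sharpening step, using a Gaussian blur from scipy as the low-pass filter f_low-pass; the choice of Gaussian blur and the parameter values below are assumptions made for illustration only:

```python
import numpy as np
from scipy import ndimage

def sharpen(frame, a=0.5, sigma=1.0):
    """Unsharp masking: I_sharp = I + a * (I - f_low_pass(I))."""
    low_pass = ndimage.gaussian_filter(frame, sigma=sigma)   # f_low_pass(I)
    residual = frame - low_pass                               # high-frequency residual
    return np.clip(frame + a * residual, 0.0, 1.0)            # weighted and added back

# Example: sharpen a synthetic frame with values in [0.0, 1.0].
frame = np.random.rand(720, 1280).astype(np.float32)
sharp = sharpen(frame, a=0.8)
```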
For the contrast adjustment algorithm, assume the contrast adjustment function is:
I_contrast = clamp((I − 0.5) × β + 0.5, 0.0, 1.0), where the pixel values of the input video frame are assumed to lie in [0.0, 1.0]; if the pixel values of the input video frame lie in [0, 255], each pixel value may be divided by 255 before being used as input to the above formula. The function clamp(x, A, B) sets values of the input x that are larger than B to B and values smaller than A to A, and β is the contrast parameter. The parameter β and the like affect the finally obtained contrast effect, that is, the target contrast effect. Therefore, in the embodiment of the present disclosure, the parameter β and the like may be used as video processing parameters of the contrast adjustment algorithm and may be set according to the video stream.
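A short sketch of this contrast adjustment with clamp implemented via np.clip; the handling of [0, 255] inputs follows the text above, and the value of β is chosen only for illustration:

```python
import numpy as np

def adjust_contrast(frame, beta):
    """I_contrast = clamp((I - 0.5) * beta + 0.5, 0.0, 1.0)."""
    if frame.dtype == np.uint8:              # pixel values in [0, 255]
        frame = frame.astype(np.float32) / 255.0
    return np.clip((frame - 0.5) * beta + 0.5, 0.0, 1.0)

# beta > 1.0 stretches contrast, beta < 1.0 reduces it.
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
stretched = adjust_contrast(frame, beta=1.3)
```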
Of course, in the embodiment of the present disclosure, only some of the video processing parameters may be set according to the video stream. For example, some video processing parameters may be set as required or empirically, while other video processing parameters are set according to the video stream, and so on.
For example, for the sharpening algorithm, a fixed low-pass filter function may be set empirically, and weighting coefficients may be set according to the video stream, and so on. For the contrast adjustment algorithm described above, the parameter β and the like may be set according to the video stream.
In step S13, the target video stream is rendered and played.
After the enhanced target video stream is obtained, the target video stream may be rendered and displayed. In the embodiment of the present disclosure, the target video stream may be rendered and presented in any available manner, and the embodiment of the present disclosure is not limited thereto.
For example, the target video stream may be rendered and presented by a video player of the user terminal, or a video player inside an application program, or the like.
In the embodiment of the present disclosure, a video stream transmitted by a network is received; the video stream is enhanced through a preset video processing algorithm to obtain a target video stream; and the target video stream is rendered and played. When the video stream is enhanced through the video processing algorithm, the values of the video processing parameters in the video processing algorithm are set according to the video stream. By adaptively enhancing the video stream before rendering and playing, the quality of the video stream and the visual effect of the video picture are improved.
Referring to fig. 2, in an embodiment of the present disclosure, the step S12 may further include:
step S121, performing content detection on the video stream, obtaining a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determining a value of a video processing parameter of the target video processing method.
In addition, in the embodiment of the present disclosure, different adaptive enhancement adjustments may be performed on different video frames in the video stream by adaptively adjusting the post-processing algorithm and its parameters according to the conditions of the individual video frames. For example, if a video frame is dark, its brightness can be increased slightly; if a video frame is blurry, sharpening of a certain intensity can be applied to it. Through such adaptive adjustment, different videos can all achieve a good subjective effect when played.
At this time, content detection needs to be performed on the video stream, and then according to the content detection result, a target video processing method for processing the corresponding video stream is obtained from each preset video processing algorithm, and a specific value of a video processing parameter in each target video processing method is determined. Content detection is understood to mean detecting any specific content parameter in the video stream, such as brightness value, sharpness, contrast, size, noise value, etc., related to video quality, video visual effect, etc. Moreover, in the embodiments of the present disclosure, content detection may be performed on a video stream in any available manner, and the embodiments of the present disclosure are not limited thereto.
Moreover, when content detection is performed, content detection can be performed in units of video frames, and at this time, a content detection result of each video frame in the video stream can be obtained, so that a target video processing method for processing each video frame in the video stream can be obtained from a preset video processing algorithm according to the content detection result of each video frame, and a value of a video processing parameter of the target video processing method is determined.
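One possible (assumed) way to turn a per-frame content detection result into target processing methods and parameter values is sketched below; the use of mean intensity for brightness, Laplacian variance as a blur measure, and the thresholds are illustrative choices, not something fixed by this disclosure:

```python
import numpy as np
from scipy import ndimage

def detect_content(frame):
    """Per-frame content detection: brightness and a simple blur measure."""
    return {
        "mean_brightness": float(frame.mean()),
        "laplacian_var": float(ndimage.laplace(frame).var()),  # low value -> blurry
    }

def select_processing(detection, dark_thresh=0.25, blur_thresh=0.002):
    """Map a detection result to (method name, parameter values) pairs."""
    methods = []
    if detection["mean_brightness"] < dark_thresh:
        methods.append(("brightness", {"offset": dark_thresh - detection["mean_brightness"]}))
    if detection["laplacian_var"] < blur_thresh:
        methods.append(("sharpen", {"a": 0.8}))
    return methods

frame = np.random.rand(360, 640).astype(np.float32) * 0.2   # a dark frame
print(select_processing(detect_content(frame)))
```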
And/or step S122, obtaining a target video processing method for processing the video stream from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining a value of a video processing parameter of the target video processing method;
in addition, in practical applications, the same video may be displayed and played in different environments, and in some special scenes, the display effect of the video may be affected. For example, video presentation may be affected in environments where the light is too dark or too bright. Therefore, in the embodiment of the present disclosure, a target video processing method for processing a video stream may be further obtained from a preset video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and a value of a video processing parameter of the corresponding target video processing method is determined, so as to adjust the video stream by automatic enhancement, and further adjust a picture of a rendered video, thereby achieving an optimal viewing experience.
The target video processing methods corresponding to different light intensities and the values of the video processing parameters in the target video processing methods can be preset according to requirements or experience, and the embodiment of the disclosure is not limited.
In addition, in the embodiment of the present disclosure, if both of the manners described above are used to obtain the target video processing methods and the values of their video processing parameters, and the same target video processing method is obtained more than once for the same video frame, the values of the video processing parameters obtained each time may be considered together and the repeated instances of the target video processing method may be merged into a single one. The specific merging manner can be set as required, and the embodiments of the present disclosure are not limited thereto. Of course, in the embodiment of the present disclosure, repeated instances of the same target video processing method may also be left unmerged, and the embodiments of the present disclosure are not limited thereto.
For example, for a video frame A in the video stream, assume that the target video processing methods obtained according to its content detection result include a brightness adjustment algorithm whose video processing parameters include a target brightness value of 100, and that the target video processing methods obtained according to the light intensity of the environment where the video player corresponding to the video stream is located also include a brightness adjustment algorithm whose video processing parameters include a target brightness value of 80. In this case, the two brightness enhancement passes for video frame A may be merged into one, and the target brightness value in the video processing parameters of the merged brightness adjustment algorithm may be 100, or 180, and so on. For a video frame B in the video stream, assume that the target video processing methods obtained according to its content detection result do not include a brightness adjustment algorithm; then the video processing parameters of the brightness adjustment algorithm applied to video frame B include the target brightness value of 80, and so on.
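A small sketch of one possible merging strategy for the situation described above, where the same target video processing method is obtained twice for the same frame; the disclosure leaves the merging rule open, so the `combine` rule below is only an example:

```python
def merge_methods(content_based, light_based, combine=max):
    """Merge two {method: params} dicts; duplicated methods are combined once.

    combine decides how a duplicated numeric parameter is resolved
    (max, min, averaging, summing, ... -- the rule is left open above).
    """
    merged = dict(light_based)
    for method, params in content_based.items():
        if method in merged:
            merged[method] = {k: combine(v, merged[method].get(k, v))
                              for k, v in params.items()}
        else:
            merged[method] = params
    return merged

# Video frame A from the example above: both selections include brightness adjustment.
frame_a = merge_methods({"brightness": {"target": 100}}, {"brightness": {"target": 80}})
# Video frame B: only the ambient-light-based selection applies.
frame_b = merge_methods({}, {"brightness": {"target": 80}})
print(frame_a, frame_b)   # {'brightness': {'target': 100}} {'brightness': {'target': 80}}
```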
And step S123, performing enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method to obtain a target video stream.
After the target video processing methods for adaptively adjusting the video stream are determined, the video stream can be enhanced accordingly based on the target video processing methods and their video processing parameters to obtain the target video stream. Specifically, the enhancement processing may be performed in units of video frames, that is, for each video frame in the video stream, so as to obtain the target video stream.
For example, for the video frame a and the video frame B described above, the video frame a may be subjected to luminance enhancement processing with a target luminance value of 100, while the video frame B may be subjected to luminance enhancement processing with a target luminance value of 80, and so on.
Optionally, in an embodiment of the present disclosure, the step S121 further includes:
step A1, according to the detection result of all the contents corresponding to the video stream, obtaining a target video processing method for uniformly processing the video stream from the video processing algorithm, and determining the value of the video processing parameter of the target video processing method.
Step A2, for each video frame in the video stream, according to the content detection result of the video frame, obtaining a target video processing method for processing the video frame from the video processing algorithm, and determining the value of the video processing parameter of the target video processing method.
After the content detection result of the video stream is obtained, a target video processing method of each video frame can be respectively obtained by taking the video frame as a unit according to the content detection result of each video frame, and the value of the video processing parameter of the target video processing method is determined. Or counting the overall quality index of the video stream according to the detection result of all the contents corresponding to the video stream, further acquiring a target video processing method for uniformly processing the video stream from a preset video processing algorithm, and determining the value of the video processing parameter of the target video processing method.
For example, if it is found from all the content detection results corresponding to the video stream that the overall brightness of the video stream is too dark or too bright, brightness enhancement processing may be performed uniformly on each video frame in the video stream, and the corresponding brightness adjustment parameter is determined; or, when it is detected from all the content detection results corresponding to the video stream that the noise in the decoded video stream is heavy, a unified denoising process is performed on the video stream, and corresponding denoising parameters are set, and so on.
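A rough sketch of this stream-level alternative: the per-frame detection results are aggregated first, and one set of methods is then applied uniformly to every frame; the thresholds, the `noise_level` field, and the parameter values are assumptions for this example only:

```python
import numpy as np

def select_uniform_processing(detections, dark_thresh=0.25, bright_thresh=0.85,
                              noise_thresh=0.02):
    """detections: list of per-frame content detection results for the whole stream."""
    mean_brightness = np.mean([d["mean_brightness"] for d in detections])
    mean_noise = np.mean([d.get("noise_level", 0.0) for d in detections])

    methods = []
    if mean_brightness < dark_thresh:                 # overall too dark
        methods.append(("brightness", {"offset": +0.1}))
    elif mean_brightness > bright_thresh:             # overall too bright
        methods.append(("brightness", {"offset": -0.1}))
    if mean_noise > noise_thresh:                     # heavy noise after decoding
        methods.append(("denoise", {"size": 3}))
    return methods

detections = [{"mean_brightness": 0.18, "noise_level": 0.03} for _ in range(100)]
print(select_uniform_processing(detections))   # brightness + denoise for every frame
```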
Referring to fig. 2, in an embodiment of the present disclosure, the step S13 may further include:
step S131, when a definition adjusting instruction input by a user is received in the playing process of the target video stream, performing enhancement processing on the target video stream;
and step S132, rendering and playing the enhanced target video stream.
In addition, while a user is watching a video, the visual experiences and visual requirements of different users are not entirely consistent, so different users may have different requirements for the picture rendered from the same target video. Therefore, in the embodiment of the present disclosure, in order to further improve the visual experience of the user watching the video and to come closer to the user's visual requirements, while the video player is playing the target video stream, the video data in the target video stream that has not yet been rendered and played may be further enhanced according to a definition adjustment instruction received from the user. The user may input the definition adjustment instruction in any available manner, which is not limited by the embodiments of the present disclosure. For example, the user may input different definition adjustment instructions through different preset gestures, or through different spoken content.
In addition, in the embodiment of the present disclosure, when the video stream is enhanced, the video stream is already being displayed and part of the video data has already been rendered and shown. Unless the user looks back, enhancing the video data that has already been rendered and played does not improve the user's visual experience; therefore, only the video data in the video stream that has not yet been rendered and played may be enhanced. Of course, if the user may review the video, the enhancement processing may be performed on the entire video stream, and the embodiments of the present disclosure are not limited thereto.
Therefore, enhancement processing can be always performed on the video data which is not rendered and played currently in the target video stream according to the latest received definition adjusting instruction, and then the video stream after the enhancement processing is rendered and played. Therefore, when the subsequent video data are continuously displayed according to the time sequence in the display process of the target video stream, the display effect of the subsequent video data can meet the current visual demand of the user.
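For illustration only, a player holding decoded frames in a buffer could apply the latest definition adjustment instruction to the frames that have not yet been rendered, as sketched below; the buffer layout and the simple brightness-style adjustment are assumptions:

```python
import numpy as np

class PlaybackBuffer:
    """Decoded frames waiting to be rendered; play_index marks rendered frames."""

    def __init__(self, frames):
        self.frames = list(frames)
        self.play_index = 0          # frames[:play_index] have already been rendered

    def render_next(self):
        frame = self.frames[self.play_index]
        self.play_index += 1
        return frame                 # hand off to the actual renderer

    def apply_adjustment(self, adjust):
        # Enhance only the video data that has not been rendered and played yet.
        for i in range(self.play_index, len(self.frames)):
            self.frames[i] = adjust(self.frames[i])

buf = PlaybackBuffer(np.random.rand(30, 4, 4).astype(np.float32))
buf.render_next()                    # one frame already shown
buf.apply_adjustment(lambda f: np.clip(f + 0.1, 0.0, 1.0))   # latest user instruction
```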
In the embodiment of the present disclosure, the sharpness adjustment instruction may be input in any available manner, and the embodiment of the present disclosure is not limited thereto. Moreover, the primary sharpness adjustment instruction may control enhancement processing of the video stream by the at least one preset video processing algorithm, and may specifically perform custom setting according to requirements, which is not limited in the embodiment of the present disclosure.
Optionally, in an embodiment of the present disclosure, the step S131 further includes:
step B1, receiving gesture operation of a user in the playing process of the target video stream;
and step B2, executing a video processing algorithm to perform enhancement processing on the target video stream based on the gesture operation.
In the embodiment of the present disclosure, the user can control the execution of a video processing algorithm through a gesture operation to enhance the target video stream. The user may input the gesture operation in any available manner, which is not limited by the embodiments of the present disclosure.
For example, while watching the video, the user can control a video processing algorithm to enhance the target video stream by making a specific gesture on the screen of the user terminal; or the user's current body movement can be captured by the camera and used as the gesture operation; and so on. Alternatively, the gesture operation may combine one or more of the above manners, which is not limited by the embodiments of the present disclosure. The correspondence between gesture actions and video processing algorithms may be preset according to requirements, and the embodiments of the present disclosure are not limited thereto.
For example, while watching a video, the user may control the execution of the sharpness adjustment algorithm among the video processing algorithms by drawing clockwise circles on the screen to enhance the target video stream, and the more circles drawn in succession, the higher the sharpness, as shown in Fig. 2A. Alternatively, the user may control the execution of the brightness adjustment algorithm by keeping a finger on the screen and sliding up and down, as shown in Fig. 2B, and so on. Or, when the camera captures the user's hand moving from bottom to top, the contrast can be increased accordingly; and so on. In this case, different definition adjustment parameters can be set directly in the background through the gesture operation, and a video processing algorithm is then executed according to the definition adjustment parameters to enhance the target video stream.
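A toy sketch of setting definition adjustment parameters in the background from recognized gestures; the gesture names, step sizes, and parameter names are all assumptions, and the gesture recognition itself is outside the scope of this example:

```python
def gesture_to_adjustment(gesture, current):
    """Translate a recognized gesture into updated definition adjustment parameters."""
    params = dict(current)
    if gesture["type"] == "clockwise_circle":
        # More consecutive circles -> higher sharpness, as in Fig. 2A.
        params["sharpness"] = min(1.0, params.get("sharpness", 0.0) + 0.1 * gesture["turns"])
    elif gesture["type"] == "vertical_slide":
        # Sliding up/down adjusts brightness, as in Fig. 2B.
        params["brightness"] = params.get("brightness", 0.0) + 0.05 * gesture["direction"]
    elif gesture["type"] == "hand_up":           # hand movement captured by the camera
        params["contrast"] = params.get("contrast", 1.0) + 0.1
    return params

params = gesture_to_adjustment({"type": "clockwise_circle", "turns": 3}, {})
print(params)   # sharpness raised to roughly 0.3
```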
In addition, in the embodiment of the present disclosure, a control panel may be popped up in response to the user's gesture operation, so that the user can adjust the definition adjustment parameters through the control panel. For example, when a preset gesture operation is received, the corresponding user terminal may be controlled to pop up the control panel, and so on.
Optionally, in an embodiment of the present disclosure, the step S131 further includes:
step C1, in the process of playing the target video stream, calling and displaying a control panel;
step C2, receiving the operation of the user on the control panel, and acquiring definition adjustment parameters input by the user;
and step C3, executing a video processing algorithm according to the definition adjusting parameter, and performing enhancement processing on the target video stream.
In the embodiment of the disclosure, the definition adjustment parameter input by the user can be received through a preset control panel. Specifically, a control panel may be called and displayed in the playing process of the target video stream, so as to receive an operation of a user on the control panel, to obtain a definition adjustment parameter input by the user, and then execute a video processing algorithm according to the definition adjustment parameter to perform enhancement processing on the target video stream.
The control panel may be preset according to requirements. The control panel may be built into the video player that displays the target video stream in the user terminal, or may be invoked in other manners, which is not limited by the embodiments of the present disclosure. For example, the user may pop up the corresponding control panel on the user terminal through an operation, and then input the definition adjustment parameters by adjusting the value of each definition adjustment parameter provided on the control panel, thereby achieving a better viewing experience. The definition adjustment parameters may include any parameters related to video quality, picture quality, and the like, such as the aforementioned contrast, brightness, noise, and so on. Obtaining the definition adjustment parameters input by the user is equivalent to determining their specific values.
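As one possible arrangement (an assumption, not the disclosed implementation), the control panel values can be collected into a single parameter object and handed to the video processing algorithms when the user confirms the adjustment:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class DefinitionAdjustmentParams:
    """Values a user might set on the control panel; the fields are illustrative."""
    brightness: float = 0.0     # additive offset
    contrast: float = 1.0       # beta in the contrast formula above
    sharpness: float = 0.0      # weighting coefficient a in the sharpening formula

def apply_adjustment(frame, p: DefinitionAdjustmentParams):
    frame = np.clip(frame + p.brightness, 0.0, 1.0)
    frame = np.clip((frame - 0.5) * p.contrast + 0.5, 0.0, 1.0)
    return frame   # sharpening would be applied here as well, omitted for brevity

frame = np.random.rand(480, 640).astype(np.float32)
out = apply_adjustment(frame, DefinitionAdjustmentParams(brightness=0.05, contrast=1.2))
```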
In the embodiment of the present disclosure, content detection may be performed on the video stream, a target video processing method for processing the video stream may be obtained from the video processing algorithm according to the content detection result, and the value of the video processing parameter of the target video processing method may be determined; and/or a target video processing method for processing the video stream may be obtained from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and the value of the video processing parameter of the target video processing method may be determined; and the video stream is enhanced according to the target video processing method and its video processing parameters to obtain the target video stream. In this way, different target video processing methods, with parameter values set accordingly, are matched to the video stream according to the content detection result of the video stream and/or the light intensity of the environment where the corresponding video player is located, which further improves the enhancement effect on the video stream and improves the video quality of the obtained target video stream and the visual effect of the video picture.
Moreover, in the embodiment of the present disclosure, a target video processing method for performing unified processing on the video stream may also be obtained from the video processing algorithm according to all content detection results corresponding to the video stream; and aiming at each video frame in the video stream, acquiring a target video processing method for processing the video frame from the video processing algorithm according to the content detection result of the video frame. Therefore, different target video processing algorithms are matched for each frame in the video stream to obtain values according to the video processing parameters, the enhancement processing effect of each frame in the video stream is further improved, and the video quality of the obtained target video stream and the visual effect of the video picture are improved.
In addition, in the embodiment of the present disclosure, when a definition adjustment instruction input by a user is received in the playing process of the target video stream, enhancement processing may be performed on the target video stream; and rendering and playing the enhanced target video stream. Therefore, the video stream can be enhanced according to the real-time visual requirements of different users in the display process of the target video stream, and the video quality and the fitting degree of the video display picture and the visual requirements of the users can be further improved.
A gesture operation of the user may be received in the playing process of the target video stream, and a video processing algorithm may be executed to perform enhancement processing on the target video stream based on the gesture operation. Alternatively, a control panel may be called and displayed in the playing process of the target video stream; the operation of the user on the control panel is received, the definition adjustment parameter input by the user is acquired, and a video processing algorithm is then executed according to the definition adjustment parameter to perform enhancement processing on the target video stream. In this way, the user can conveniently perform real-time enhancement processing on the target video stream at any time, which provides a more diversified video watching experience.
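As a small illustration of the gesture-driven path, the sketch below maps a recognized gesture to a set of definition adjustment parameters and hands them to the player. The gesture names, the mapping, and the player.enqueue_enhancement method are hypothetical and are introduced only for this sketch.

GESTURE_TO_PARAMETERS = {
    "double_tap":       {"contrast": 1.2},     # a quick "make it clearer"
    "swipe_up":         {"brightness": 15.0},
    "swipe_down":       {"brightness": -15.0},
    "two_finger_press": {"denoise": 0.4},
}

def on_gesture(gesture, player):
    params = GESTURE_TO_PARAMETERS.get(gesture)
    if params is not None:
        # Hypothetical player API: queue the parameters so that only frames
        # not yet rendered are re-processed with them.
        player.enqueue_enhancement(params)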
Fig. 3 is a flow diagram illustrating a video processing method according to an example embodiment. As shown in fig. 3, the method may include the following steps.
And step S21, receiving the video stream transmitted by the network.
Step S22, when receiving a definition adjustment instruction input by a user during the playing process of the video stream, performing enhancement processing on the video stream.
And step S23, rendering and playing the video stream after the enhancement processing.
In the embodiment of the present disclosure, since the visual experiences and visual requirements of different users are not completely consistent, different users may have different requirements for the picture obtained by rendering the same target video. Therefore, in order to improve the visual experience of the user when watching the video and to meet the visual requirements of the user, the video stream is enhanced when a definition adjustment instruction input by the user is received in the playing process of the video stream. The definition adjustment parameters can be set and adjusted during display by the viewing user of the corresponding target video stream according to that user's own visual perception and requirements. Furthermore, the definition adjustment parameter may be received in any available manner, which is not limited in the embodiments of the present disclosure.
In addition, in the embodiment of the present disclosure, when the video stream is subjected to enhancement processing, the video stream is already being displayed and part of the video data has already been rendered and displayed. Unless the user looks back, enhancing the video data that has already been rendered and played does not improve the visual experience of the user; therefore, enhancement processing may be performed only on the video data in the video stream that has not yet been rendered and played. Of course, if review by the user is to be considered, the enhancement processing may be performed on the entire video stream, and the embodiment of the present disclosure is not limited thereto.
Then, in the process of rendering and playing the video stream, rendering and display can be carried out according to the currently enhanced video stream, so that the picture display effect of the video stream can be adjusted in real time according to the definition adjustment parameter set in real time by the viewing user while the video stream is displayed.
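A minimal sketch of this idea follows, assuming a simple pull-based player: decoded frames wait in a queue, the current definition adjustment parameters can change at any moment, and each frame is enhanced with the parameters in force when it is taken out for rendering, so frames that were already rendered are never reprocessed. The enhance_fn argument stands for any per-frame enhancement routine, such as the enhance_frame helper sketched earlier; the class and method names are assumptions of this illustration.

from collections import deque

class PlaybackPipeline:
    # Enhances only the video data that has not yet been rendered and played.
    def __init__(self, enhance_fn):
        self.enhance_fn = enhance_fn   # per-frame enhancement routine
        self.pending = deque()         # decoded but not yet rendered frames
        self.settings = {}             # current definition adjustment parameters

    def on_frame_decoded(self, frame):
        self.pending.append(frame)

    def on_user_adjustment(self, new_settings):
        # Takes effect only for frames still waiting in the queue; frames that
        # were already rendered stay as they were unless rewinding is supported.
        self.settings.update(new_settings)

    def next_frame_for_rendering(self):
        if not self.pending:
            return None
        frame = self.pending.popleft()
        return self.enhance_fn(frame, **self.settings) if self.settings else frame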
In the embodiment of the present disclosure, a video stream transmitted through a network is received; when a definition adjustment instruction input by a user is received in the playing process of the video stream, enhancement processing is performed on the video stream; and the video stream after the enhancement processing is rendered and played. Therefore, the video stream is enhanced according to the definition adjustment parameter set by the user, and the degree to which the video picture of the video stream fits the visual requirement of the user can be improved.
Referring to fig. 4, in an embodiment of the present disclosure, the step S22 may further include:
step M1, receiving gesture operation of a user in the playing process of the video stream;
and step M2, executing a video processing algorithm to perform enhancement processing on the video stream based on the gesture operation.
Referring to fig. 5, in an embodiment of the present disclosure, the step S22 may further include:
step N1, calling and displaying a control panel in the playing process of the video stream;
step N2, receiving the operation of the user on the control panel, and acquiring definition adjustment parameters input by the user;
and step N3, executing a video processing algorithm according to the definition adjusting parameter, and performing enhancement processing on the video stream.
Referring to fig. 4, in the embodiment of the present disclosure, before the step S23, the method further includes:
step S24, enhancing the video stream through a preset video processing algorithm; and when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is obtained by setting according to the video stream.
In the embodiment of the present disclosure, the enhancement processing may be performed on the video stream through a preset video processing algorithm. In addition, at this time, for different video streams, video processing parameters in the video processing algorithm may be preset according to requirements, and for different video streams, values of the video processing parameters in the same video processing algorithm may be the same or may not be the same, which is not limited in this embodiment of the present disclosure.
In addition, in an actual application process, the video qualities of different video streams may not be completely the same, so that in order to perform adaptive enhancement processing on different video streams to improve the video qualities of different video streams as much as possible, when the video streams are subjected to enhancement processing by the video processing algorithm, values of video processing parameters in the corresponding video processing algorithm may be set for the video streams to be subjected to enhancement processing. The specific correspondence between the values of the video stream and the video processing parameters may be preset according to requirements or experience, and the embodiment of the present disclosure is not limited thereto.
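One hedged way to read "the value of the video processing parameter is set according to the video stream" is sketched below: a few easily measured properties of the incoming stream are estimated from sample frames and mapped to parameter values through a preset correspondence. The statistics, thresholds, and resulting values are illustrative assumptions, not a correspondence prescribed by the disclosure.

import numpy as np

def profile_stream(sample_frames):
    # Estimate simple statistics from a few decoded sample frames (at least two).
    stack = np.stack([f.astype(np.float32) for f in sample_frames])
    return {
        "mean_luma": float(stack.mean()),
        "frame_difference": float(np.abs(np.diff(stack, axis=0)).mean()),
    }

def parameters_for_stream(profile):
    params = {"contrast": 1.0, "brightness": 0.0, "denoise": 0.0}
    if profile["mean_luma"] < 60.0:          # dark footage: lift the brightness
        params["brightness"] = 20.0
    if profile["frame_difference"] > 8.0:    # flickery or noisy footage: denoise
        params["denoise"] = 0.4
    return params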
In addition, at this time, all video data in the video stream may be enhanced, or only video data that is not currently rendered and played may be enhanced, which is also not limited in the embodiment of the present disclosure.
Optionally, in an embodiment of the present disclosure, the step S24 may further include:
step S241, performing content detection on the video stream, obtaining a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determining a value of a video processing parameter of the target video processing method; and/or,
step S242, according to the light intensity of the environment where the video player corresponding to the video stream is located, obtaining a target video processing method for processing the video stream from the video processing algorithm, and determining a value of a video processing parameter of the target video processing method; and,
and step S243, performing enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method.
Optionally, in an embodiment of the present disclosure, the step S241 may further include:
step S2411, acquiring a target video processing method for uniformly processing the video stream from the video processing algorithm according to all content detection results corresponding to the video stream;
step S2412, aiming at each video frame in the video stream, according to the content detection result of the video frame, acquiring a target video processing method for processing the video frame from the video processing algorithm.
In the embodiment of the disclosure, in the playing process of the video stream, receiving gesture operation of a user; and executing a video processing algorithm to perform enhancement processing on the video stream based on the gesture operation. Or, in the playing process of the video stream, calling and displaying a control panel; receiving the operation of a user on the control panel, and acquiring definition adjustment parameters input by the user; and executing a video processing algorithm according to the definition adjusting parameter, and performing enhancement processing on the video stream. Therefore, the video stream can be enhanced according to the real-time visual requirements of different users in the display process of the target video stream, and the video quality and the fitting degree of the video display picture and the visual requirements of the users can be further improved.
Moreover, in the embodiment of the present disclosure, the enhancement processing may be performed on the video stream through a preset video processing algorithm. And when the video stream is enhanced by the video processing algorithm, the value of the video processing parameter in the video processing algorithm is set according to the video stream. Performing content detection on the video stream, acquiring a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determining a value of a video processing parameter of the target video processing method; and/or acquiring a target video processing method for processing the video stream from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining the value of the video processing parameter of the target video processing method; and performing enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method. Acquiring a target video processing method for uniformly processing the video stream from the video processing algorithm according to all content detection results corresponding to the video stream; and aiming at each video frame in the video stream, acquiring a target video processing method for processing the video frame from the video processing algorithm according to the content detection result of the video frame. Therefore, different target video processing algorithms are matched for each frame in the video stream to obtain values according to the video processing parameters, the enhancement processing effect of each frame in the video stream is further improved, and the video quality of the obtained target video stream and the visual effect of the video picture are improved.
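To contrast the two options described in steps S2411 and S2412, the sketch below picks one target video processing method for the whole stream from all content detection results, or picks a method per frame from that frame's own detection result. The detect_content and apply_method callables, and the reuse of the select_by_content helper sketched earlier, are assumptions of this illustration rather than elements fixed by the disclosure.

from collections import Counter

def select_for_whole_stream(detection_results):
    # Step S2411 style: one target method for the whole stream, chosen here
    # from the most frequent detected content category.
    dominant = Counter(detection_results).most_common(1)[0][0]
    return select_by_content(dominant)

def enhance_per_frame(frames, detect_content, apply_method):
    # Step S2412 style: each frame gets the method matching its own content.
    for frame in frames:
        method, params = select_by_content(detect_content(frame))
        yield apply_method(frame, method, params)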
Fig. 6 is a block diagram illustrating a video processing device according to an example embodiment. Referring to fig. 6, the apparatus includes a video stream acquisition module 31, a first enhancement processing module 32, and a render play module 33.
A video stream acquisition module 31 configured to perform receiving a video stream transmitted by a network.
A first enhancement processing module 32, configured to perform enhancement processing on the video stream through a preset video processing algorithm to obtain a target video stream; and when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is obtained by setting according to the video stream.
And a rendering and playing module 33 configured to perform rendering and playing of the target video stream.
In the embodiment of the present disclosure, a video stream transmitted through a network is received; the video stream is enhanced through a preset video processing algorithm to obtain a target video stream; and the target video stream is rendered and played. When the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is set according to the video stream. By performing self-adaptive enhancement processing on the video stream before rendering and playing, the quality of the video stream and the visual effect of the video picture are improved.
Referring to fig. 7, in the embodiment of the present disclosure, the first enhancement processing module 32 may further include:
a first algorithm screening submodule 321 configured to perform content detection on the video stream, acquire a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determine a value of a video processing parameter of the target video processing method; and/or,
a second algorithm screening submodule 322, configured to execute, according to the light intensity of the environment where the video player corresponding to the video stream is located, acquiring, from the video processing algorithm, a target video processing method for processing the video stream, and determining a value of a video processing parameter of the target video processing method; and,
a first enhancement sub-module 323 configured to perform enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method to obtain a target video stream.
Optionally, in this embodiment of the disclosure, the first algorithm screening submodule 321 may further include:
a first algorithm obtaining unit configured to perform acquiring, from the video processing algorithm, a target video processing method for performing unified processing on the video stream according to all content detection results corresponding to the video stream;
a second algorithm obtaining unit configured to perform, for each video frame in the video stream, obtaining a target video processing method for processing the video frame from the video processing algorithm according to a content detection result of the video frame.
Referring to fig. 7, in the embodiment of the present disclosure, the rendering and playing module 33 may further include:
the picture enhancer module 331 is configured to perform enhancement processing on the target video stream when a definition adjustment instruction input by a user is received in the playing process of the target video stream;
and a rendering and playing sub-module 332 configured to perform rendering and play the enhanced video stream.
Optionally, in an embodiment of the present disclosure, the picture enhancement sub-module 331 further includes:
a gesture operation receiving unit configured to receive a gesture operation of a user during the playing of the target video stream;
and the first picture enhancement unit is configured to execute a video processing algorithm to perform enhancement processing on the target video stream based on the gesture operation.
Optionally, in an embodiment of the present disclosure, the picture enhancement sub-module 331 further includes:
the control panel calling unit is configured to call and display a control panel in the playing process of the target video stream;
the adjustment parameter receiving unit is configured to execute the operation of receiving the user on the control panel and acquire definition adjustment parameters input by the user;
and the second picture enhancement unit is configured to execute a video processing algorithm according to the definition adjusting parameter and perform enhancement processing on the target video stream.
In the embodiment of the present disclosure, content detection may be performed on the video stream, and according to a content detection result, a target video processing method for processing the video stream is obtained from the video processing algorithm, and a value of a video processing parameter of the target video processing method is determined; and/or acquiring a target video processing method for processing the video stream from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining the value of the video processing parameter of the target video processing method; and performing enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method to obtain a target video stream. Therefore, different target video processing algorithms are matched for the video stream according to the content detection result of the video stream and/or the light intensity of the environment where the video player corresponding to the video stream is located, the values are taken according to the video processing parameters, the enhancement processing effect of the video stream is further improved, and the video quality of the obtained target video stream and the visual effect of the video picture are improved.
Moreover, in the embodiment of the present disclosure, a target video processing method for performing unified processing on the video stream may also be obtained from the video processing algorithm according to all content detection results corresponding to the video stream; and aiming at each video frame in the video stream, acquiring a target video processing method for processing the video frame from the video processing algorithm according to the content detection result of the video frame. Therefore, different target video processing algorithms are matched for each frame in the video stream to obtain values according to the video processing parameters, the enhancement processing effect of each frame in the video stream is further improved, and the video quality of the obtained target video stream and the visual effect of the video picture are improved.
In addition, in the embodiment of the present disclosure, when a definition adjustment instruction input by a user is received in the playing process of the target video stream, enhancement processing may be performed on the target video stream; rendering and playing the video stream after the enhancement processing. Therefore, the video stream can be enhanced according to the real-time visual requirements of different users in the display process of the target video stream, and the video quality and the fitting degree of the video display picture and the visual requirements of the users can be further improved.
Receiving gesture operation of a user in the playing process of the target video stream; and executing a video processing algorithm to perform enhancement processing on the target video stream based on the gesture operation. Or, in the playing process of the target video stream, calling and displaying a control panel; receiving the operation of a user on the control panel, and acquiring definition adjustment parameters input by the user; and executing a video processing algorithm according to the definition adjusting parameter, and performing enhancement processing on the target video stream. The method and the device can facilitate the user to perform real-time enhancement processing on the target video stream at any time, and provide more diversified video watching experience for the user.
Fig. 8 is a block diagram illustrating another video processing device according to an example embodiment. Referring to fig. 8, the apparatus includes a video stream acquisition module 41, a second enhancement processing module 42, and a render play module 43.
A video stream acquisition module 41 configured to perform receiving a video stream transmitted by a network.
And the second enhancement processing module 42 is configured to perform enhancement processing on the video stream when a definition adjustment instruction input by a user is received in the playing process of the video stream.
And a rendering and playing module 43 configured to perform rendering and play the enhanced video stream.
In the embodiment of the present disclosure, a video stream transmitted through a network is received; when a definition adjustment instruction input by a user is received in the playing process of the video stream, enhancement processing is performed on the video stream; and the video stream after the enhancement processing is rendered and played. Therefore, the video stream is enhanced according to the definition adjustment parameter set by the user, and the degree to which the video picture of the video stream fits the visual requirement of the user can be improved.
Referring to fig. 9, in the embodiment of the present disclosure, the second enhancement processing module 42 may further include:
the gesture operation receiving submodule 421 is configured to receive a gesture operation of a user in the playing process of the video stream;
a first frame enhancement module 422 configured to perform enhancement processing on the video stream by executing a video processing algorithm based on the gesture operation.
Referring to fig. 10, in the embodiment of the present disclosure, the second enhancement processing module 42 may further include:
the control panel calling submodule 423 is configured to call and display a control panel in the playing process of the video stream;
an adjustment parameter receiving submodule 424, configured to perform receiving operation of a user on the control panel, and obtain a definition adjustment parameter input by the user;
and a second picture enhancement sub-module 425 configured to perform a video processing algorithm according to the sharpness adjustment parameter to perform enhancement processing on the video stream.
The second picture enhancement sub-module 425 may specifically perform the enhancement processing on the video stream by a video processing method corresponding to the definition adjustment parameter.
Referring to fig. 9, in an embodiment of the present disclosure, the video processing apparatus may further include:
and the first enhancement processing module 44 is configured to execute enhancement processing on the video stream through a preset video processing algorithm.
Optionally, in this embodiment of the present disclosure, the first enhancement processing module 44 may further include:
the first algorithm screening submodule is configured to perform content detection on the video stream, acquire a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determine a value of a video processing parameter of the target video processing method; and/or,
the second algorithm screening submodule is configured to perform acquiring, from the video processing algorithm, a target video processing method for processing the video stream according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining the value of a video processing parameter of the target video processing method; and,
a first enhancement processing sub-module configured to perform enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method.
Optionally, in this embodiment of the disclosure, the first algorithm screening sub-module may further include:
a first algorithm obtaining unit configured to perform acquiring, from the video processing algorithm, a target video processing method for performing unified processing on the video stream according to all content detection results corresponding to the video stream;
a second algorithm obtaining unit configured to perform, for each video frame in the video stream, obtaining a target video processing method for processing the video frame from the video processing algorithm according to a content detection result of the video frame.
In the embodiment of the disclosure, in the playing process of the video stream, receiving gesture operation of a user; and executing a video processing algorithm to perform enhancement processing on the video stream based on the gesture operation. Or, in the playing process of the video stream, calling and displaying a control panel; receiving the operation of a user on the control panel, and acquiring definition adjustment parameters input by the user; and executing a video processing algorithm according to the definition adjusting parameter, and performing enhancement processing on the video stream. Therefore, the video stream can be enhanced according to the real-time visual requirements of different users in the display process of the target video stream, and the video quality and the fitting degree of the video display picture and the visual requirements of the users can be further improved.
Moreover, in the embodiment of the present disclosure, the enhancement processing may be performed on the video stream through a preset video processing algorithm. And when the video stream is enhanced by the video processing algorithm, the value of the video processing parameter in the video processing algorithm is set according to the video stream. Performing content detection on the video stream, acquiring a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determining a value of a video processing parameter of the target video processing method; and/or acquiring a target video processing method for processing the video stream from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining the value of the video processing parameter of the target video processing method; and performing enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method. Acquiring a target video processing method for uniformly processing the video stream from the video processing algorithm according to all content detection results corresponding to the video stream; and aiming at each video frame in the video stream, acquiring a target video processing method for processing the video frame from the video processing algorithm according to the content detection result of the video frame. Therefore, different target video processing algorithms are matched for each frame in the video stream to obtain values according to the video processing parameters, the enhancement processing effect of each frame in the video stream is further improved, and the video quality of the obtained target video stream and the visual effect of the video picture are improved.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement any one of the video processing methods as described above.
The embodiments of the present disclosure also provide a storage medium, and when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute any one of the video processing methods as described above.
Embodiments of the present disclosure also provide a computer program product, which when executed by a processor of an electronic device, enables the electronic device to perform any one of the video processing methods as described above.
Fig. 11 is a block diagram illustrating an electronic device 500 for video processing in accordance with an exemplary embodiment. For example, the electronic device 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, a video camera, a camera, and so forth.
Referring to fig. 11, electronic device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the electronic device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation at the device 500. Examples of such data include instructions for any application or method operating on the electronic device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 506 provides power to the various components of the electronic device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 500.
The multimedia component 508 includes a screen that provides an output interface between the electronic device 500 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing various aspects of status assessment for the electronic device 500. For example, the sensor assembly 514 may detect an open/closed state of the device 500, the relative positioning of components, such as a display and keypad of the electronic device 500, the sensor assembly 514 may detect a change in the position of the electronic device 500 or a component of the electronic device 500, the presence or absence of user contact with the electronic device 500, orientation or acceleration/deceleration of the electronic device 500, and a change in the temperature of the electronic device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the electronic device 500 and other devices. The electronic device 500 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a storage medium comprising instructions, such as the memory 504 comprising instructions, executable by the processor 520 of the electronic device 500 to perform the above-described method is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 12 is a block diagram illustrating an electronic device 600 for video processing in accordance with an example embodiment. For example, the electronic device 600 may be provided as a server. Referring to fig. 12, electronic device 600 includes a processing component 622 that further includes one or more processors, and memory resources, represented by memory 632, for storing instructions, such as applications, that are executable by processing component 622. The application programs stored in memory 632 may include one or more modules that each correspond to a set of instructions. Further, the processing component 622 is configured to execute instructions to perform any of the video processing methods described above.
The electronic device 600 may also include a power component 626 configured to perform power management for the electronic device 600, a wired or wireless network interface 650 configured to connect the electronic device 600 to a network, and an input/output (I/O) interface 658. The electronic device 600 may operate based on an operating system stored in memory 632, such as Windows Server, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, and so forth.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
According to a first aspect of the embodiments of the present disclosure, A1, a video processing method is disclosed, including:
receiving a video stream transmitted by a network;
enhancing the video stream through a preset video processing algorithm to obtain a target video stream;
rendering and playing the target video stream;
and when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is obtained by setting according to the video stream.
A2, the method as recited in A1, wherein the step of performing enhancement processing on the video stream by a preset video processing algorithm to obtain a target video stream includes:
performing content detection on the video stream, acquiring a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determining a value of a video processing parameter of the target video processing method; and/or,
acquiring a target video processing method for processing the video stream from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining the value of a video processing parameter of the target video processing method; and,
and performing enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method to obtain a target video stream.
A3, the method of A2, wherein the step of obtaining a target video processing method for processing the video stream from the video processing algorithm according to the content detection result comprises:
acquiring a target video processing method for uniformly processing the video stream from the video processing algorithm according to all content detection results corresponding to the video stream;
and aiming at each video frame in the video stream, acquiring a target video processing method for processing the video frame from the video processing algorithm according to the content detection result of the video frame.
A4, the method according to any one of A1-A3, wherein the step of rendering and playing the target video stream comprises:
when a definition adjusting instruction input by a user is received in the playing process of the target video stream, performing enhancement processing on the target video stream;
and rendering and playing the enhanced target video stream.
A5, the method as in A4, wherein the step of performing enhancement processing on the target video stream when a definition adjustment instruction input by a user is received during playing of the target video stream includes:
receiving gesture operation of a user in the playing process of the target video stream;
and executing a video processing algorithm to perform enhancement processing on the target video stream based on the gesture operation.
A6, the method as in A4, wherein the step of performing enhancement processing on the target video stream when receiving a definition adjustment instruction input by a user during playing of the target video stream includes:
calling and displaying a control panel in the playing process of the target video stream;
receiving the operation of a user on the control panel, and acquiring definition adjustment parameters input by the user;
and executing a video processing algorithm according to the definition adjusting parameter, and performing enhancement processing on the target video stream.
According to a second aspect of the embodiments of the present disclosure, B7, a video processing method is disclosed, including:
receiving a video stream transmitted by a network;
when a definition adjusting instruction input by a user is received in the playing process of the video stream, performing enhancement processing on the video stream;
rendering and playing the video stream after the enhancement processing.
B8, the method according to B7, wherein the step of performing enhancement processing on the video stream when receiving the definition adjustment instruction input by the user during the playing process of the video stream comprises:
receiving gesture operation of a user in the playing process of the video stream;
and executing a video processing algorithm to perform enhancement processing on the video stream based on the gesture operation.
B9, the method according to B7, wherein the step of performing enhancement processing on the video stream when receiving the definition adjustment instruction input by the user during the playing process of the video stream comprises:
calling and displaying a control panel in the playing process of the video stream;
receiving the operation of a user on the control panel, and acquiring definition adjustment parameters input by the user;
and executing a video processing algorithm according to the definition adjusting parameter, and performing enhancement processing on the video stream.
B10, the method according to any of B7-B9, further comprising, before the step of rendering and playing the enhanced video stream:
enhancing the video stream through a preset video processing algorithm; and when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is obtained by setting according to the video stream.
B11, the method according to B10, wherein the step of performing enhancement processing on the video stream by a preset video processing algorithm comprises:
performing content detection on the video stream, acquiring a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determining a value of a video processing parameter of the target video processing method; and/or,
acquiring a target video processing method for processing the video stream from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining the value of a video processing parameter of the target video processing method; and,
and performing enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method.
B12, the method of B11, wherein the step of obtaining a target video processing method for processing the video stream from the video processing algorithm according to the content detection result comprises:
acquiring a target video processing method for uniformly processing the video stream from the video processing algorithm according to all content detection results corresponding to the video stream;
and aiming at each video frame in the video stream, acquiring a target video processing method for processing the video frame from the video processing algorithm according to the content detection result of the video frame.
According to a third aspect of the embodiments of the present disclosure, C13, a video processing apparatus is disclosed, including:
a video stream acquisition module configured to perform receiving a video stream transmitted by a network;
the first enhancement processing module is configured to execute enhancement processing on the video stream through a preset video processing algorithm to obtain a target video stream;
a rendering and playing module configured to render and play the target video stream;
and when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is obtained by setting according to the video stream.
C14, the apparatus of C13, the first enhanced processing module comprising:
the first algorithm screening submodule is configured to perform content detection on the video stream, acquire a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determine a value of a video processing parameter of the target video processing method; and/or,
the second algorithm screening submodule is configured to perform acquiring, from the video processing algorithm, a target video processing method for processing the video stream according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining the value of a video processing parameter of the target video processing method; and,
and the first enhancement processing submodule is configured to perform enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method to obtain a target video stream.
C15, the apparatus of C14, the first algorithm filter submodule comprising:
a first algorithm obtaining unit configured to perform acquiring, from the video processing algorithm, a target video processing method for performing unified processing on the video stream according to all content detection results corresponding to the video stream;
a second algorithm obtaining unit configured to perform, for each video frame in the video stream, obtaining a target video processing method for processing the video frame from the video processing algorithm according to a content detection result of the video frame.
C16, the apparatus as claimed in any one of C13-C15, the render play module comprising:
the picture enhancement submodule is configured to perform enhancement processing on the target video stream when receiving a definition adjustment instruction input by a user in the playing process of the target video stream;
and the rendering and playing submodule is configured to perform rendering and play the video stream after the enhancement processing.
C17, the apparatus of C16, the picture enhancement submodule comprising:
a gesture operation receiving unit configured to receive a gesture operation of a user during the playing of the target video stream;
and the first picture enhancement unit is configured to execute a video processing algorithm to perform enhancement processing on the target video stream based on the gesture operation.
C18, the apparatus of C16, the picture enhancement submodule comprising:
the control panel calling unit is configured to call and display a control panel in the playing process of the target video stream;
the adjustment parameter receiving unit is configured to execute the operation of receiving the user on the control panel and acquire definition adjustment parameters input by the user;
and the second picture enhancement unit is configured to execute a video processing algorithm according to the definition adjusting parameter and perform enhancement processing on the target video stream.
According to a fourth aspect of the embodiments of the present disclosure, a D19, a video processing apparatus is disclosed, comprising:
a video stream acquisition module configured to perform receiving a video stream transmitted by a network;
the second enhancement processing module is configured to perform enhancement processing on the video stream when a definition adjusting instruction input by a user is received in the playing process of the video stream;
and the rendering and playing module is configured to perform rendering and play the video stream after the enhancement processing.
D20, the apparatus as in D19, the second enhancement processing module comprising:
the gesture operation receiving submodule is configured to receive gesture operation of a user in the playing process of the video stream;
and the first picture enhancement submodule is configured to execute a video processing algorithm to perform enhancement processing on the video stream based on the gesture operation.
D21, the apparatus as in D19, the second enhancement processing module comprising:
the control panel calling submodule is configured to call and display a control panel in the playing process of the video stream;
the adjustment parameter receiving submodule is configured to execute the operation of receiving the user on the control panel and acquire definition adjustment parameters input by the user;
and the second picture enhancement submodule is configured to execute a video processing algorithm according to the definition adjusting parameter and perform enhancement processing on the video stream.
D22, the device of any one of D19-D21, the video processing device further comprising:
the first enhancement processing module is configured to execute enhancement processing on the video stream through a preset video processing algorithm; and when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is obtained by setting according to the video stream.
D23, the apparatus of D22, the first enhancement processing module comprising:
the first algorithm screening submodule is configured to perform content detection on the video stream, acquire a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determine a value of a video processing parameter of the target video processing method; and/or,
the second algorithm screening submodule is configured to perform acquiring, from the video processing algorithm, a target video processing method for processing the video stream according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining the value of a video processing parameter of the target video processing method; and,
a first enhancement processing sub-module configured to perform enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method.
D24, the apparatus of D23, the first algorithm filter submodule comprising:
a first algorithm obtaining unit configured to perform acquiring, from the video processing algorithm, a target video processing method for performing unified processing on the video stream according to all content detection results corresponding to the video stream;
a second algorithm obtaining unit configured to perform, for each video frame in the video stream, obtaining a target video processing method for processing the video frame from the video processing algorithm according to a content detection result of the video frame.
According to a fifth aspect of the embodiments of the present disclosure, there is disclosed an E25, an electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video processing method of any one of A1 to A6 and B7 to B12.
According to a sixth aspect of the embodiments of the present disclosure, F26, a storage medium is disclosed, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method of any one of A1 to A6 and B7 to B12.
Claims (10)
1. A video processing method, comprising:
receiving a video stream transmitted by a network;
enhancing the video stream through a preset video processing algorithm to obtain a target video stream;
rendering and playing the target video stream;
and when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is obtained by setting according to the video stream.
2. The method according to claim 1, wherein the step of performing enhancement processing on the video stream by using a preset video processing algorithm to obtain a target video stream comprises:
performing content detection on the video stream, acquiring a target video processing method for processing the video stream from the video processing algorithm according to a content detection result, and determining a value of a video processing parameter of the target video processing method; and/or,
acquiring a target video processing method for processing the video stream from the video processing algorithm according to the light intensity of the environment where the video player corresponding to the video stream is located, and determining the value of a video processing parameter of the target video processing method; and,
and performing enhancement processing on the video stream according to the target video processing algorithm and the video processing parameters of the target video processing method to obtain a target video stream.
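A rough sketch of the ambient-light branch of claim 2 follows; the lux thresholds, method names and parameter values are assumptions introduced solely for illustration:

```python
# Illustrative sketch of choosing a target processing method and its parameter
# value from the ambient light intensity at the video player; thresholds assumed.
def target_method_for_light(lux: float) -> tuple:
    """Return (method_name, parameters) for the measured ambient light in lux."""
    if lux < 50:                       # dim room: lift shadows more aggressively
        return "brightness_boost", {"gamma": 0.7}
    if lux > 10000:                    # bright sunlight: raise contrast instead
        return "contrast_boost", {"contrast": 1.3}
    return "sharpen", {"amount": 0.5}  # typical indoor lighting


if __name__ == "__main__":
    for lux in (10, 300, 20000):
        print(lux, target_method_for_light(lux))
```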
3. The method according to claim 2, wherein the step of obtaining a target video processing method for processing the video stream from the video processing algorithm according to the content detection result comprises:
acquiring a target video processing method for uniformly processing the video stream from the video processing algorithm according to all content detection results corresponding to the video stream;
and, for each video frame in the video stream, acquiring a target video processing method for processing the video frame from the video processing algorithm according to the content detection result of the video frame.
4. The method according to any one of claims 1-3, wherein the step of rendering and playing the target video stream comprises:
when a definition adjusting instruction input by a user is received in the playing process of the target video stream, performing enhancement processing on the target video stream;
and rendering and playing the enhanced target video stream.
5. The method according to claim 4, wherein the step of performing enhancement processing on the target video stream when a definition adjusting instruction input by a user is received in the playing process of the target video stream comprises:
receiving gesture operation of a user in the playing process of the target video stream;
and executing a video processing algorithm to perform enhancement processing on the target video stream based on the gesture operation.
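As a toy illustration of claims 4 and 5 (the gesture name, the event loop and the extra enhancement step are hypothetical), a sketch in which a gesture received during playback is treated as a definition adjusting instruction and triggers further enhancement of the remaining frames:

```python
# Hypothetical playback loop for claims 4-5; the gesture name and the
# enhancement step are illustrative assumptions only.
def enhance_further(frame: dict) -> dict:
    """Extra enhancement pass applied once a definition-adjust instruction arrives."""
    return {**frame, "sharpness": frame.get("sharpness", 0) + 1}


def play(frames, gesture_events):
    """gesture_events maps a frame index to a gesture received at that point."""
    enhance_rest = False
    for i, frame in enumerate(frames):
        if gesture_events.get(i) == "double_tap":  # treated as a definition adjusting instruction
            enhance_rest = True
        if enhance_rest:
            frame = enhance_further(frame)
        print(f"play frame {i}: {frame}")


if __name__ == "__main__":
    play([{"id": 0}, {"id": 1}, {"id": 2}], {1: "double_tap"})
```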
6. A video processing method, comprising:
receiving a video stream transmitted by a network;
when a definition adjusting instruction input by a user is received in the playing process of the video stream, performing enhancement processing on the video stream;
rendering and playing the video stream after the enhancement processing.
7. A video processing apparatus, comprising:
a video stream acquisition module configured to perform receiving a video stream transmitted by a network;
a first enhancement processing module configured to perform enhancement processing on the video stream through a preset video processing algorithm to obtain a target video stream;
a rendering and playing module configured to render and play the target video stream;
wherein, when the video stream is subjected to enhancement processing through the video processing algorithm, the value of the video processing parameter in the video processing algorithm is set according to the video stream.
8. A video processing apparatus, comprising:
a video stream acquisition module configured to perform receiving a video stream transmitted by a network;
a second enhancement processing module configured to perform enhancement processing on the video stream when a definition adjusting instruction input by a user is received in the playing process of the video stream;
and a rendering and playing module configured to render and play the video stream after the enhancement processing.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video processing method of any of claims 1 to 6.
10. A storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method of any of claims 1 to 6.
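To close the claims, a schematic composition of the modules named in apparatus claims 7 and 8; the claims define the modules functionally only, so every interface below is an invented assumption:

```python
# Assumed composition of the modules named in claims 7 and 8; all interfaces
# are invented for illustration and are not part of the claims.
class VideoStreamAcquisitionModule:
    def receive(self):
        """Stand-in for receiving a network-transmitted video stream."""
        return [{"mean_luma": 60}, {"mean_luma": 62}]


class EnhancementModule:
    def enhance(self, stream):
        gain = 1.5  # in the claims, derived from the stream or a user instruction
        return [{"mean_luma": f["mean_luma"] * gain} for f in stream]


class RenderingAndPlayingModule:
    def render_and_play(self, stream):
        for f in stream:
            print("render:", f)


if __name__ == "__main__":
    stream = VideoStreamAcquisitionModule().receive()
    target = EnhancementModule().enhance(stream)
    RenderingAndPlayingModule().render_and_play(target)
```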
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910944758.3A CN110662115B (en) | 2019-09-30 | 2019-09-30 | Video processing method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910944758.3A CN110662115B (en) | 2019-09-30 | 2019-09-30 | Video processing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110662115A true CN110662115A (en) | 2020-01-07 |
CN110662115B CN110662115B (en) | 2022-04-22 |
Family
ID=69038460
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910944758.3A Active CN110662115B (en) | 2019-09-30 | 2019-09-30 | Video processing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110662115B (en) |
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060268180A1 (en) * | 2005-05-31 | 2006-11-30 | Chih-Hsien Chou | Method and system for automatic brightness and contrast adjustment of a video source |
US20120120181A1 (en) * | 2010-11-15 | 2012-05-17 | Cisco Technology, Inc. | System and method for providing enhanced graphics in a video environment |
CN102158699A (en) * | 2011-03-28 | 2011-08-17 | 广州市聚晖电子科技有限公司 | Embedded video compression coding system with image enhancement function |
US20160360145A1 (en) * | 2013-12-27 | 2016-12-08 | Le Holdings (Beijing) Co., Ltd. | Image quality adjustment method and system |
US20160253944A1 (en) * | 2015-02-27 | 2016-09-01 | Boe Technology Group Co., Ltd. | Image Display Method and Device and Electronic Apparatus |
CN105635524A (en) * | 2015-12-18 | 2016-06-01 | 成都国翼电子技术有限公司 | Intelligent enhancement method based on dark region histogram area statistics of history frame image |
CN105744118A (en) * | 2016-02-01 | 2016-07-06 | 杭州当虹科技有限公司 | Video enhancing method based on video frame self-adaption and video enhancing system applying the video enhancing method based on video frame self-adaption |
CN108933954A (en) * | 2017-05-22 | 2018-12-04 | 中兴通讯股份有限公司 | Method of video image processing, set-top box and computer readable storage medium |
CN107205125A (en) * | 2017-06-30 | 2017-09-26 | 广东欧珀移动通信有限公司 | A kind of image processing method, device, terminal and computer-readable recording medium |
CN107948733A (en) * | 2017-12-04 | 2018-04-20 | 腾讯科技(深圳)有限公司 | Method of video image processing and device, electronic equipment |
CN109151573A (en) * | 2018-09-30 | 2019-01-04 | Oppo广东移动通信有限公司 | Video source modeling control method, device and electronic equipment |
CN109544441A (en) * | 2018-11-09 | 2019-03-29 | 广州虎牙信息科技有限公司 | Colour of skin processing method and processing device in image processing method and device, live streaming |
CN109379628A (en) * | 2018-11-27 | 2019-02-22 | Oppo广东移动通信有限公司 | Video processing method, apparatus, electronic device and computer readable medium |
CN109525901A (en) * | 2018-11-27 | 2019-03-26 | Oppo广东移动通信有限公司 | Method for processing video frequency, device, electronic equipment and computer-readable medium |
CN109640167A (en) * | 2018-11-27 | 2019-04-16 | Oppo广东移动通信有限公司 | Method for processing video frequency, device, electronic equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
HUIMIN LU: "Underwater scene enhancement using weighted guided median filter", 2014 IEEE International Conference on Multimedia and Expo (ICME) *
LIU Feng et al.: "Low-illumination video enhancement algorithm based on dark channel prior", Computer Systems & Applications (《计算机系统应用》) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111901561A (en) * | 2020-07-16 | 2020-11-06 | 苏州科达科技股份有限公司 | Video data processing method, device and system in monitoring system and storage medium |
CN113852860A (en) * | 2021-09-26 | 2021-12-28 | 北京金山云网络技术有限公司 | Video processing method, device, system and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110662115B (en) | 2022-04-22 |
Similar Documents
Publication | Title |
---|---|
US10674088B2 (en) | Method and device for acquiring image, terminal and computer-readable storage medium |
CN109859144B (en) | Image processing method and device, electronic equipment and storage medium |
CN110580688B (en) | Image processing method and device, electronic equipment and storage medium |
US11490157B2 (en) | Method for controlling video enhancement, device, electronic device and storage medium |
CN110728180B (en) | Image processing method, device and storage medium |
CN112614064B (en) | Image processing method, device, electronic equipment and storage medium |
CN104811609A (en) | Photographing parameter adjustment method and device |
CN113139947B (en) | Image processing method and device, electronic equipment and storage medium |
CN111028792B (en) | Display control method and device |
CN112785537B (en) | Image processing method, device and storage medium |
CN106792255B (en) | Video playing window frame body display method and device |
CN110662115B (en) | Video processing method and device, electronic equipment and storage medium |
CN111050211B (en) | Video processing method, device and storage medium |
WO2023071167A1 (en) | Image processing method and apparatus, and electronic device, storage medium and program product |
CN113660531B (en) | Video processing method and device, electronic equipment and storage medium |
CN104992416A (en) | Image enhancement method and device, and intelligent equipment |
CN110971833B (en) | Image processing method and device, electronic equipment and storage medium |
CN112188095B (en) | Photographing method, photographing device and storage medium |
CN106775246B (en) | Screen brightness adjusting method and device |
CN109255839B (en) | Scene adjustment method and device |
CN118071648A (en) | Video processing method, device, electronic equipment and storage medium |
CN106604107B (en) | Subtitle processing method and device |
CN114070998B (en) | Moon shooting method and device, electronic equipment and medium |
CN113157178B (en) | Information processing method and device |
CN110896492B (en) | Image processing method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||