CN112261438A - Video enhancement method, device, equipment and storage medium - Google Patents

Video enhancement method, device, equipment and storage medium

Info

Publication number
CN112261438A
CN112261438A (application CN202011110789.8A)
Authority
CN
China
Prior art keywords: video, pixel, value, brightness, values
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011110789.8A
Other languages
Chinese (zh)
Other versions
CN112261438B (en)
Inventor
夏海雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011110789.8A priority Critical patent/CN112261438B/en
Publication of CN112261438A publication Critical patent/CN112261438A/en
Application granted granted Critical
Publication of CN112261438B publication Critical patent/CN112261438B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses a video enhancement method, device, equipment, and storage medium, belonging to the field of video processing. According to the technical solution provided by the embodiments of the application, during video decoding the terminal can update the pixel value of each pixel point based on the highest and lowest brightness values associated with the video frame; this updating of pixel values is itself the video enhancement process. Because different video frames are associated with different highest and lowest brightness values, the values the terminal uses when enhancing a frame are tailored to that frame, so different video frames receive different degrees of enhancement. Likewise, different videos can be enhanced using different highest and lowest brightness values, improving the definition of the videos.

Description

Video enhancement method, device, equipment and storage medium
Technical Field
The present application relates to the field of video processing, and in particular, to a method, an apparatus, a device, and a storage medium for video enhancement.
Background
With the development of network technology, more and more users watch videos on various computer devices, and they hope to see higher-definition video while watching.
In the related art, to provide higher-definition video to the user, a computer device applies a video enhancement algorithm to the played video to improve its definition.
However, the computer device often applies the same video enhancement algorithm to different videos, so some videos may end up with poorer definition after enhancement.
Disclosure of Invention
The embodiments of the application provide a video enhancement method, device, equipment, and storage medium, which can improve the video enhancement effect and thus the definition of a video. The technical solution is as follows:
in one aspect, a video enhancement method is provided, and the method includes:
acquiring a video to be decoded;
in the process of decoding any video frame in the video to be decoded, acquiring a highest brightness value and a lowest brightness value associated with that video frame, the highest brightness value and the lowest brightness value being, respectively, the highest and the lowest of the brightness values of a plurality of pixel points in the video frame;
and updating the pixel values of the plurality of pixel points according to the highest brightness value and the lowest brightness value, such that the brightness values indicated by the updated pixel values of the plurality of pixel points meet a target condition.
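The pixel-value update described above amounts to remapping each frame's luma using that frame's own highest and lowest brightness values. The sketch below is an illustrative assumption (a linear min-max stretch on an 8-bit Y plane); the patent does not disclose the exact update formula, and the function name `stretch_luma` is hypothetical:

```python
import numpy as np

def stretch_luma(y, y_min, y_max, out_min=0, out_max=255):
    """Linearly remap the Y (luma) plane of one frame so that the frame's
    associated lowest/highest brightness values span the full output range.
    Illustrative only; the patent does not specify the update formula."""
    y = y.astype(np.float32)
    if y_max <= y_min:  # flat frame: nothing to stretch
        return y.astype(np.uint8)
    scaled = (y - y_min) / (y_max - y_min) * (out_max - out_min) + out_min
    return np.clip(scaled, out_min, out_max).astype(np.uint8)

# One 2x2 "frame" whose luma spans [50, 200]
frame_y = np.array([[50, 100], [150, 200]], dtype=np.uint8)
print(stretch_luma(frame_y, 50, 200).tolist())  # [[0, 85], [170, 255]]
```

Because the stretch is driven by the per-frame minimum and maximum, a dark frame and a bright frame receive different remappings, matching the per-frame behavior the text describes.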
In one aspect, a video enhancement apparatus is provided, the apparatus comprising:
the video acquisition module is used for acquiring a video to be decoded;
a brightness value obtaining module, configured to obtain, in the process of decoding any video frame in the video to be decoded, a highest brightness value and a lowest brightness value associated with that video frame, the highest brightness value and the lowest brightness value being, respectively, the highest and the lowest of the brightness values of a plurality of pixel points in the video frame;
and a pixel value updating module, configured to update the pixel values of the plurality of pixel points according to the highest brightness value and the lowest brightness value, such that the brightness values indicated by the updated pixel values of the plurality of pixel points meet a target condition.
In a possible embodiment, the apparatus further comprises:
a brightness information file obtaining module, configured to obtain a brightness information file, where a highest brightness value and a lowest brightness value that are respectively associated with a plurality of video frames in the video to be decoded are stored in the brightness information file;
and the video acquisition module is further used for acquiring the highest brightness value and the lowest brightness value associated with any video frame from the brightness information file according to the identifier of any video frame.
In one possible embodiment, the generating device of the brightness information file comprises:
the pixel value acquisition module is used for acquiring pixel values of a plurality of pixel points in a plurality of video frames in the video to be decoded;
the brightness value determining module is used for obtaining the brightness values of a plurality of pixel points in a plurality of video frames according to the pixel values of the plurality of pixel points in the plurality of video frames; obtaining the highest brightness value and the lowest brightness value respectively associated with the plurality of video frames according to the brightness values of the plurality of pixel points in the plurality of video frames;
and the brightness information file generating module is used for generating the brightness information file according to the identifications of the video frames and the highest brightness value and the lowest brightness value which are respectively associated with the video frames.
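The three modules above (pixel value acquisition, brightness value determination, and file generation) can be sketched as follows. This is a minimal illustration, assuming frames are supplied as RGB arrays, luma is approximated with BT.601 weights, and the file is serialized as JSON; none of these specifics are stated in the patent:

```python
import json
import numpy as np

def luma_bt601(rgb):
    """Approximate Y of an RGB frame using BT.601 weights; an assumption,
    since the patent only says Y is the luminance component of YUV."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def build_brightness_info(frames):
    """frames: dict mapping frame identifier -> HxWx3 uint8 array.
    Returns the per-frame highest/lowest luminance values, i.e. the
    content of the brightness information file."""
    info = {}
    for frame_id, rgb in frames.items():
        y = luma_bt601(rgb.astype(np.float32))
        info[frame_id] = {"y_max": float(y.max()), "y_min": float(y.min())}
    return info

frames = {"frame_0001": np.zeros((2, 2, 3), dtype=np.uint8)}
print(json.dumps(build_brightness_info(frames)))
# {"frame_0001": {"y_max": 0.0, "y_min": 0.0}}
```

In the streaming scenarios described later, a server could run this once per video and ship the resulting file alongside the stream, so the decoding terminal never has to scan frames itself.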
In a possible embodiment, the apparatus further comprises:
and the video frame display module is used for displaying a target video frame, and the target video frame is a video frame formed by the plurality of pixel points after the pixel values are updated.
In one aspect, a computer device is provided that includes one or more processors and one or more memories having at least one program code stored therein, the program code being loaded and executed by the one or more processors to implement the video enhancement method.
In one aspect, a computer-readable storage medium having at least one program code stored therein is provided, the program code being loaded and executed by a processor to implement the video enhancement method.
In one aspect, a computer program product or a computer program is provided, the computer program product or the computer program comprising computer program code, the computer program code being stored in a computer-readable storage medium, the computer program code being read by a processor of a computer device from the computer-readable storage medium, the computer program code being executable by the processor to cause the computer device to perform the video enhancement method described above.
According to the technical solution provided by the embodiments of the application, during video decoding the terminal can update the pixel value of each pixel point based on the highest and lowest brightness values associated with the video frame; this updating of pixel values is itself the video enhancement process. Because different video frames are associated with different highest and lowest brightness values, the values the terminal uses when enhancing a frame are tailored to that frame, so different video frames receive different degrees of enhancement. Likewise, different videos can be enhanced using different highest and lowest brightness values, improving the definition of the videos.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a video enhancement method provided by an embodiment of the present application;
fig. 2 is a flowchart of a video enhancement method provided in an embodiment of the present application;
fig. 3 is a flowchart of a video enhancement method provided in an embodiment of the present application;
fig. 4 is a flowchart of a video enhancement method provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a histogram provided by an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a correspondence relationship between a video frame and a luminance information file according to an embodiment of the present application;
fig. 7 is a comparison graph of effects of a video enhancement method provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a video enhancement apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used for distinguishing between similar items and items that have substantially the same function or similar functionality, and it should be understood that "first," "second," and "nth" do not have any logical or temporal dependency or limitation on the number or order of execution.
The term "at least one" in this application means one or more, "a plurality" means two or more, for example, a plurality of reference face images means two or more reference face images.
Cloud Computing is a computing model that distributes computing tasks over a resource pool formed by a large number of computers, enabling application systems to obtain computing power, storage space, and information services as needed. The network that provides the resources is referred to as the "cloud". To users, resources in the "cloud" appear infinitely expandable, available at any time, usable on demand, and paid for by usage. As a basic capability provider of cloud computing, a cloud computing resource pool (generally called a cloud platform, or an IaaS, Infrastructure as a Service, platform) is established, and multiple types of virtual resources are deployed in the pool for external customers to use as needed. The resource pool mainly includes computing devices (virtualized machines, including operating systems), storage devices, and network devices. Divided by logical function, a PaaS (Platform as a Service) layer can be deployed on the IaaS layer, and a SaaS (Software as a Service) layer on the PaaS layer; SaaS can also be deployed directly on IaaS. PaaS is a platform on which software runs, such as databases and web containers; SaaS is business software of various kinds, such as web portals. In general, SaaS and PaaS are upper layers relative to IaaS.
Cloud: a large number of computers in cloud computing are called the cloud.
Video coding: video is a continuous sequence of images, consisting of successive frames, a frame being an image. Due to the persistence of vision effect of the human eye, when a sequence of frames is played at a certain rate, the user sees a video with continuous motion. Because of the extremely high similarity between the continuous frames, the computer equipment can encode the original video to remove the redundancy of space and time dimensions and reduce the storage space occupied by the video for the convenience of storage and transmission.
Video decoding: the method is the reverse process of video coding, namely, the method for restoring the data after video coding into video.
Video enhancement: the process of enhancing useful information in a video frame, which may introduce distortion; its aim is to improve the visual effect of the video frame.
Fig. 1 is a schematic diagram of an implementation environment of a video enhancement method according to an embodiment of the present disclosure, and referring to fig. 1, the implementation environment may include a terminal 110 and a server 140.
The terminal 110 is connected to the server 140 through a wireless network or a wired network. Optionally, the terminal 110 is a device such as a smart phone, a tablet computer, a smart television, a desktop computer, a vehicle computer, and a portable computer. The terminal 110 is installed and operated with an application program supporting video playback.
Optionally, the server 140 is an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, Content Delivery Network (CDN), and big data and artificial intelligence platforms. Optionally, the terminal 110 and the server 140 are connected directly or indirectly through wired or wireless communication, which is not limited in this application.
Optionally, the terminal 110 generally refers to one of a plurality of terminals, and the embodiment of the present application is illustrated by the terminal 110.
Those skilled in the art will appreciate that the number of terminals may be greater or fewer. For example, there may be only one terminal 110, or tens or hundreds of terminals, or more, with other terminals also included in the implementation environment. The number of terminals and the device types are not limited in the embodiments of the present application.
After the implementation environment provided by the embodiment of the present application is introduced, an application scenario of the present application is described below:
scene 1, the video enhancement method provided by the embodiment of the application can be applied to the scene of online video playing. When a user clicks a video to be watched through the online video playing software, the online video playing software can perform video enhancement on the video in real time through the online video enhancement method provided by the embodiment of the application in the process of playing the video, so that the definition of the video is improved, and a better film watching experience is provided for the user.
For example, referring to fig. 2, when the user requests video A through the online video playing software, the software sends the identifier of video A to the server; the server locates video A in the video database according to the identifier and sends it to the software as a stream. While sending video A, the server also sends the brightness information file of the video frames in video A to the online video client. After receiving video A, the online video playing software decodes it in real time to obtain the pixel values of a plurality of pixel points of each video frame, then updates those pixel values according to the brightness information file, thereby achieving video enhancement and improving the definition of video A.
Scene 2: the video enhancement method provided by the embodiments of the application can be applied to a live broadcast scene. In the following description, the terminal used by the anchor is referred to as the anchor end, and the terminal used by a viewer as the viewer end. During live broadcasting, the anchor often sets a delay, that is, what the viewer sees through the viewer end is the anchor's picture from several seconds or minutes earlier. Because the anchor end sends the live video stream to the live server in real time, the live server can use this delay to obtain, in real time, the highest and lowest brightness values of the video frames in the stream and generate the brightness information file. The live server sends the live video stream and the brightness information file to the viewer end, and the viewer end performs video enhancement on the stream based on the file, improving the definition of the video.
Scene 3: the video enhancement method provided by the embodiments of the application can be applied to offline video playback. When a user wants to play an offline video stored on the terminal, the terminal decodes the video and computes brightness statistics on each decoded video frame to obtain the highest and lowest brightness values of the plurality of pixel points in the frame. After obtaining the highest and lowest brightness values, the terminal updates the pixel values of the pixel points in the decoded frame and displays the updated frame to the user, thereby achieving video enhancement.
Scene 4: the video enhancement method provided by the embodiments of the application can be applied to a surveillance scene. For example, a user may monitor a shopping mall by installing monitoring devices inside it, or monitor a house by installing monitoring devices in it. When an incident occurs, such as a theft in the mall or the house, the user can review it through the surveillance video shot by the monitoring device. However, due to the limitations of the monitoring device, some videos may not be clear enough. Applying the video enhancement method provided by the embodiments of the application to the surveillance video improves its definition, so the user can view the incident more clearly.
Scene 5: the video enhancement method provided by the embodiments of the application can be applied to an aerial photography scene. When a user shoots aerial video with a drone, some videos with lower definition may result due to the drone's equipment limitations. The video enhancement method provided by the embodiments of the application can enhance the aerial video, improving its definition.
In the following description of the technical solutions provided in the embodiments of the present application, a terminal is taken as an example of an execution subject. In other possible embodiments, the server may also be executed as an execution subject, or executed through cooperation between a terminal and the server, and the embodiment of the present application is not limited to the type of the execution subject.
Fig. 3 is a flowchart of a video enhancement method provided in an embodiment of the present application, and referring to fig. 3, the method includes:
301. and the terminal acquires a video to be decoded.
Optionally, the video to be decoded is an online video, a live video, an offline video, a surveillance video, an aerial video, and other types of videos that need to be video-enhanced, and the type of the video to be decoded is not limited in the embodiment of the present application.
302. In the process of decoding any video frame in the video to be decoded, the terminal obtains the highest brightness value and the lowest brightness value associated with that video frame; the highest brightness value and the lowest brightness value are, respectively, the highest and the lowest of the brightness values of a plurality of pixel points in the video frame.
Optionally, the brightness value is the Y (Luminance or Luma) value in the YUV color encoding, that is, the brightness; the U and V components of YUV represent chrominance (Chroma).
303. And the terminal updates the pixel values of the plurality of pixel points according to the highest brightness value and the lowest brightness value, and the brightness values indicated by the updated pixel values of the plurality of pixel points accord with the target condition.
Optionally, the fact that the brightness values indicated by the updated pixel values of the plurality of pixel points meet the target condition means any one of the following:
if the average value of the brightness values indicated by the pixel values of the plurality of pixel points before updating is less than one half of the highest brightness value and the lowest brightness value of the video frame, the average value of the brightness values indicated by the pixel values of the plurality of pixel points after updating is greater than a first brightness threshold, wherein the first brightness threshold is the average value of the brightness values indicated by the pixel values of the plurality of pixel points before updating.
If the average value of the brightness values indicated by the pixel values of the plurality of pixel points before updating is greater than one half of the highest brightness value and the lowest brightness value of the video frame, the average value of the brightness values indicated by the pixel values of the plurality of pixel points after updating is smaller than a second brightness threshold, wherein the second brightness threshold is the average value of the brightness values indicated by the pixel values of the plurality of pixel points before updating.
That is to say, with the technical solution provided in the embodiments of the present application, for some videos with lower brightness, the overall video brightness can be improved, and for some videos with higher brightness, the overall video brightness can be reduced, so as to improve the definition of the video.
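The two target conditions above reduce to a midpoint test: a frame whose mean brightness lies below the midpoint of its associated lowest and highest brightness values should come out brighter, and one above the midpoint should come out darker. A small sketch of that check (the helper `meets_target` is hypothetical, not from the patent):

```python
def meets_target(mean_before, mean_after, y_max, y_min):
    """Check the target condition from the text: dark frames (mean below
    the midpoint of [y_min, y_max]) must end up brighter, bright frames
    must end up darker. Midpoint = (y_max + y_min) / 2."""
    midpoint = (y_max + y_min) / 2
    if mean_before < midpoint:
        return mean_after > mean_before  # brightness was raised
    if mean_before > midpoint:
        return mean_after < mean_before  # brightness was lowered
    return True  # already at the midpoint; the text states no requirement

print(meets_target(60, 90, y_max=200, y_min=40))    # True: dark frame brightened
print(meets_target(180, 150, y_max=200, y_min=40))  # True: bright frame darkened
```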
According to the technical solution provided by the embodiments of the application, during video decoding the terminal can update the pixel value of each pixel point based on the highest and lowest brightness values associated with the video frame; this updating of pixel values is itself the video enhancement process. Because different video frames are associated with different highest and lowest brightness values, the values the terminal uses when enhancing a frame are tailored to that frame, so different video frames receive different degrees of enhancement. Likewise, different videos can be enhanced using different highest and lowest brightness values, improving the definition of the videos.
In the following description of the technical solutions provided in the embodiments of the present application, a terminal is taken as an example as an execution subject. In other possible embodiments, the execution may also be performed through cooperation between a terminal and a server, and the embodiment of the present application is not limited to the type of the execution subject.
Fig. 4 is a flowchart of a video enhancement method provided in an embodiment of the present application, and referring to fig. 4, the method includes:
401. and the terminal acquires a video to be decoded.
The type of the video to be decoded is described in step 301 and is not repeated here. In addition, to describe the video enhancement method provided by the embodiments of the application more clearly, the cases where the video to be decoded is a video on the cloud and where it is a local video are described below in turn:
taking a video to be decoded as a video on the cloud as an example:
if the video to be decoded is the online video stored on the server, the user can select the video to be watched through the online video playing software running on the terminal. And responding to the selection operation of the user on any online video, and triggering a video selection instruction by the terminal. And responding to the video selection instruction, and sending a video acquisition request to the server by the online playing software, wherein the video acquisition request carries the video identifier of the video selected by the user. The server receives the video acquisition request and acquires the video identification in the video acquisition request. And the server searches the video to be decoded corresponding to the video identification in the video database according to the video identification. And the server sends the video to be decoded to the terminal, and the terminal acquires the video to be decoded.
If the video to be decoded is a live video, the anchor end transmits the live video to the server in real time, and the server tags the live video with the identifier of the anchor's live room. The user can select a live room through the live software. In response to the user's selection of any live room, the terminal triggers a live-room selection instruction. In response to this instruction, the live software sends a live video acquisition request to the server, carrying the identifier of the selected live room. The server receives the request, reads the live room identifier, and sends the corresponding live video to the terminal; the terminal thus acquires the live video, that is, the video to be decoded.
Taking the video to be decoded as the local video as an example:
if the video to be decoded is an offline video, when the user wants to watch a video stored on the local disk, the user selects the video to be watched through the video playing software. In response to the user's video selection operation, the terminal triggers a video selection instruction, where the video selection instruction carries the storage location of the video selected by the user. In response to the video selection instruction, the video playing software loads the corresponding video, which realizes the process of the terminal acquiring the video to be decoded.
If the video to be decoded is a surveillance video, the user selects the surveillance video to be viewed through the video surveillance software. In response to the user's surveillance video selection operation, the terminal triggers a surveillance video selection instruction, where the instruction carries the storage location of the surveillance video selected by the user. In response to the surveillance video selection instruction, the video surveillance software loads the corresponding surveillance video, which realizes the process of the terminal acquiring the video to be decoded.
402. The terminal acquires a brightness information file, wherein the brightness information file stores a highest brightness value and a lowest brightness value which are respectively associated with a plurality of video frames in a video to be decoded.
Optionally, the brightness information file stores a brightness information table, the brightness information table is shown in table 1, and table 1 stores the identifier of the video frame, the highest brightness value of the video frame, and the lowest brightness value of the video frame.
TABLE 1
Video frame identification | Maximum brightness value | Minimum brightness value
n                          | Max_n                    | Min_n
n+1                        | Max_(n+1)                | Min_(n+1)
……                         | ……                       | ……
In a possible implementation manner, if the video to be decoded is the video on the cloud, the terminal acquires the brightness information file of the video to be decoded according to the video identifier of the video to be decoded.
For example, the terminal sends a brightness information file acquisition request to the server, where the request carries the video identifier of the video to be decoded. In response to receiving the brightness information file acquisition request, the server obtains the video identifier of the video to be decoded from the request. The server then retrieves the brightness information file corresponding to the video to be decoded from the brightness information database according to that video identifier, and sends the brightness information file to the terminal; the terminal thereby acquires the brightness information file.
It should be noted that the foregoing description takes as an example the terminal acquiring the video to be decoded first and then acquiring the brightness information file. In other possible embodiments, the terminal can acquire the brightness information file at the same time as the video to be decoded, so that both are obtained in a single operation, which reduces the overhead of the terminal. This is described below with several examples:
if the video to be decoded is an online video stored on the server, the user can select the video to be watched through the online video playing software running on the terminal. In response to the user's selection of any online video, the terminal triggers a video selection instruction. In response to the video selection instruction, the online playing software sends a video acquisition request to the server, where the request carries the video identifier of the video selected by the user. The server receives the video acquisition request, obtains the video identifier from it, and searches the database for the video to be decoded and the brightness information file corresponding to that identifier. The server sends both to the terminal, so the terminal obtains the video to be decoded and the brightness information file at the same time.
If the video to be decoded is a live video, the anchor transmits the live video to the server in real time through the anchor client, and the server tags the live video with the identifier of the anchor's live room. The user can select the desired live room through the live streaming software. In response to the selection of any live room, the terminal triggers a live room selection instruction. In response to the live room selection instruction, the live streaming software sends a live video acquisition request to the server, where the request carries the identifier of the live room selected by the user. The server receives the live video acquisition request, obtains the live room identifier from it, and sends the corresponding live video and brightness information file to the terminal, so the terminal obtains the video to be decoded and the brightness information file at the same time.
Having introduced how the terminal acquires the brightness information file, the method for generating the brightness information file provided by the embodiment of the present application is described below. In the description, two cases are distinguished according to whether the video to be decoded is a cloud video or a local video.
If the video to be decoded is the video on the cloud end, the brightness information file can be generated on the cloud end, and the terminal directly obtains the brightness information file of the video to be decoded from the cloud end.
In a possible implementation manner, the server obtains pixel values of a plurality of pixel points in a plurality of video frames in the video to be decoded. The server obtains the brightness values of a plurality of pixel points in a plurality of video frames according to the pixel values of the plurality of pixel points in the plurality of video frames. And the server obtains the highest brightness value and the lowest brightness value respectively associated with the plurality of video frames according to the brightness values of the plurality of pixel points in the plurality of video frames. And the server generates a brightness information file according to the identifications of the video frames and the highest brightness value and the lowest brightness value which are respectively associated with the video frames.
For example, the server can obtain the Red-Green-Blue (RGB) color channel values of a plurality of pixel points in a plurality of video frames of the video to be decoded. The server converts the color channel values of the pixel points into YUV (luminance/chrominance) values, where the Y component is the brightness value of the pixel point. The server obtains the highest and lowest brightness values respectively associated with the video frames from the brightness values of the pixel points in those frames, and generates the brightness information file according to the identifiers of the video frames and the highest and lowest brightness values respectively associated with them.
In the following description, a method for determining the highest brightness value and the lowest brightness value of a plurality of pixel points in a plurality of video frames by a server is described, and for convenience of understanding, in the following description, the server is taken as an example to determine the highest brightness value and the lowest brightness value of a plurality of pixel points in one video frame.
For example, the server determines the RGB value of each pixel point in the video frame, e.g. the RGB value of one pixel point is (110, 70, 80), and the server determines the brightness value of each pixel point in the video frame based on formula (1); for this pixel point the brightness value is approximately 85.
Y=aR+bG+cB (1)
Here, Y is the brightness value of the pixel point, R is the red channel value, G is the green channel value, B is the blue channel value, and a, b and c are conversion coefficients. One possible combination is a = 0.299, b = 0.587 and c = 0.144; in other possible implementations, a, b and c may take other values, which is not limited in the embodiments of the present application.
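As an illustration, formula (1) with the coefficient combination above can be sketched as follows (the function name is ours, not part of the patent):

```python
def rgb_to_luma(r, g, b, coeffs=(0.299, 0.587, 0.144)):
    """Formula (1): Y = aR + bG + cB.

    The default coefficients are the combination given in the text;
    other combinations are possible.
    """
    ca, cb, cc = coeffs
    return ca * r + cb * g + cc * b

# The pixel (110, 70, 80) mentioned above yields a brightness value of 85.5,
# which the text reports as approximately 85.
luma = rgb_to_luma(110, 70, 80)
```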
The server performs histogram statistics on the brightness values of the pixel points in the video frame to obtain a histogram as shown in fig. 5, where the abscissa represents the brightness value and the ordinate represents the proportion of that brightness value among all brightness values. For example, if there are 4 pixel points in the video frame with brightness values 70, 85, 90 and 85 respectively, then the brightness value 70 accounts for 25% of the 4 values, the brightness value 85 accounts for 50%, and the brightness value 90 accounts for 25%. With continued reference to fig. 5, the rectangle at abscissa 5 indicates that pixel points with brightness value 5 account for 3% of all pixel points; if there are 100 pixel points in total, then 3 pixel points have a brightness value of 5. The server can take the brightness value below which 95% of the brightness values in a video frame fall as the highest brightness value of the frame, and the brightness value above which 95% of the brightness values fall as the lowest brightness value of the frame. Referring to fig. 5, the proportion of pixel points with brightness value 210 is 3% and the proportion with brightness value 230 is 2%, so the proportion of pixel points with brightness value less than 210 is 95%, and the server takes 210 as the highest brightness value of the video frame. Similarly, the proportion of pixel points with brightness value 8 is 2% and the proportion with brightness value 5 is 3%, so the proportion of pixel points with brightness value greater than 8 is 95%, and the server takes 8 as the lowest brightness value of the video frame.
By obtaining the highest brightness value and the lowest brightness value in such a way, the situation that the video enhancement effect is reduced due to the abnormal high brightness value and the abnormal low brightness value in one video frame can be avoided in the subsequent video enhancement process.
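A minimal sketch of this percentile-based extraction, under the assumption that the 95% cut-off is applied symmetrically at both ends as in the fig. 5 walkthrough (the function name and the exact index convention are our assumptions; the patent only states the 95% rule):

```python
def robust_min_max(luma_values, pct=0.95):
    """Take as the 'highest' brightness the value below which about pct of
    the pixels fall, and as the 'lowest' the value above which about pct
    of the pixels fall, so a few abnormally bright or dark pixels do not
    distort the range used for enhancement."""
    ordered = sorted(luma_values)
    n = len(ordered)
    hi = ordered[min(n - 1, round(pct * n))]
    lo = ordered[max(0, n - round(pct * n) - 1)]
    return lo, hi

# 100 pixels mimicking the fig. 5 walkthrough: 3% at 5, 2% at 8,
# 3% at 210, 2% at 230, and the remaining 90% in between.
vals = [5] * 3 + [8] * 2 + [100] * 90 + [210] * 3 + [230] * 2
lo, hi = robust_min_max(vals)  # lo = 8, hi = 210, as in the text
```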
In the process described above, the server acts as the execution subject that generates the brightness information file, and the terminal acquires the file from the server. For an offline video, the terminal can instead generate the brightness information file in real time while playing the offline video, thereby realizing video enhancement of the offline video.
In a possible implementation, if the video to be decoded is an offline video, the terminal decodes the offline video into a plurality of video frames the first time it plays the video. Before displaying each video frame, the terminal counts the brightness values of the pixel points in the frame to obtain the highest and lowest brightness values of that frame, and generates the brightness information file from these per-frame values. During the first playback, the file is not generated all at once; rather, the highest and lowest brightness values of each frame are continuously appended to the file as the video plays, so that a complete brightness information file for the offline video is available once the first playback finishes. In subsequent playbacks, the terminal can directly enhance the offline video online based on this file without regenerating the brightness information every time, which improves the video enhancement efficiency and reduces the overhead of the terminal.
403. And in the process of decoding any video frame in the video to be decoded, the terminal acquires the highest brightness value and the lowest brightness value associated with the video frame from the brightness information file according to the identifier of the video frame, wherein the highest brightness value and the lowest brightness value are respectively the highest brightness value and the lowest brightness value in the brightness values of a plurality of pixel points in the video frame.
Referring to fig. 6, on the left are a plurality of video frames in the video to be decoded, each carrying its identifier; in fig. 6, the identifier of the nth frame is n, where n is a positive integer. On the right of fig. 6 is the content of the brightness information file, which stores the identifiers of the video frames together with the highest and lowest brightness values of each frame. According to the identifier n of a video frame, the terminal can obtain the highest and lowest brightness values corresponding to n from the brightness information file. As the terminal decodes successive video frames from the video to be decoded, it continuously obtains the highest and lowest brightness values of each frame from the brightness information file.
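As a simple illustration, the lookup by frame identifier can be pictured as a mapping from identifier to the stored pair of values (the in-memory form and the names are our assumptions; the patent only specifies the file's contents):

```python
# Hypothetical in-memory form of the brightness information file (cf. table 1):
# frame identifier -> (highest brightness value, lowest brightness value).
luma_info = {
    1: (180, 100),
    2: (200, 90),
}

def lookup(frame_id):
    """Fetch the highest and lowest brightness values associated with a frame."""
    return luma_info[frame_id]
```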
404. And the terminal updates the pixel values of the plurality of pixel points according to the highest brightness value and the lowest brightness value, and the brightness values indicated by the updated pixel values of the plurality of pixel points accord with the target condition.
In a possible implementation manner, the terminal respectively fuses the highest brightness value and the lowest brightness value with the brightness values of a plurality of pixel points in the video frame to obtain pixel update amplitudes corresponding to the plurality of pixel points. And the terminal updates the pixel values of the plurality of pixel points according to the pixel updating amplitude.
The foregoing embodiment will be described in two parts, the first part is a method for obtaining a pixel update amplitude by a terminal, and the second part is a method for updating a pixel value of a pixel by the terminal:
for the first part, in a possible implementation manner, the terminal obtains a plurality of first differences between the luminance values of the plurality of pixel points and the lowest luminance value. The terminal obtains a plurality of first ratios between a plurality of first differences and a second difference, wherein the second difference is a difference between the highest brightness value and the lowest brightness value. And the terminal obtains pixel update amplitudes corresponding to the plurality of pixel points according to the plurality of first ratios and the brightness values of the plurality of pixel points.
For example, the terminal obtains a plurality of first differences between the brightness values of the plurality of pixel points and the lowest brightness value. The terminal obtains a plurality of first ratios between the plurality of first differences and the second differences, and the terminal respectively carries out linear transformation on the plurality of first ratios to obtain a plurality of transformation indexes. And the terminal respectively carries out nonlinear transformation on the brightness values of the plurality of pixel points based on the plurality of transformation indexes to obtain pixel updating amplitudes corresponding to the plurality of pixel points.
The following description will take the example that the terminal executes the above processing procedure on one video frame to obtain the pixel update amplitude corresponding to one pixel point in one video frame:
The terminal converts the RGB value of a pixel point into the Y value in YUV, where the Y value is the brightness value of the pixel point; the conversion from RGB to Y is given in formula (1). The terminal then performs a linear transformation on the brightness value of the pixel point based on formula (2) and a nonlinear transformation based on formula (3) to obtain the pixel update amplitude of the pixel point.
λ=1-{[(x-min)/(max-min)]*2-1}*0.1 (2)
Δ=x^λ-x (3)
Here, λ is the linear transformation index, x is the brightness value of the pixel point, min is the lowest brightness value among the pixel points in the video frame, max is the highest brightness value among the pixel points in the video frame, and Δ is the pixel update amplitude of the pixel point.
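Formulas (2) and (3) can be sketched directly (the function name is ours; the values below reproduce pixel point A from the worked example further on):

```python
def update_amplitude(x, lo, hi):
    """Formula (2): linear transformation of the brightness value x within
    [lo, hi] to obtain the index lambda; formula (3): nonlinear (power)
    transformation of x to obtain the pixel update amplitude delta."""
    lam = 1 - (((x - lo) / (hi - lo)) * 2 - 1) * 0.1  # formula (2)
    delta = x ** lam - x                              # formula (3)
    return lam, delta

# x = 122, min = 100, max = 180 gives lambda = 1.045 and delta ~ 29.44.
lam, delta = update_amplitude(122, 100, 180)
```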
The terminal can obtain the update amplitude of the pixel point according to the highest brightness value of the video frame and the lowest brightness value of the video frame through the processing of the first part. Because the highest brightness value and the lowest brightness value of different video frames are different, and the brightness values of different pixel points in the same video frame are also different, the personalized processing between different video frames can be realized, the personalized processing of different pixel points in the same video frame can also be realized, and the subsequent video enhancement effect is improved.
For the second part, in a possible implementation manner, the terminal respectively superimposes the pixel values of the plurality of pixel points with the corresponding pixel update amplitudes.
Taking the example that the terminal superposes the pixel value of one pixel point and the corresponding pixel update amplitude, the terminal can add the RGB value of one pixel point to the pixel update amplitude respectively based on formula (4) to obtain the pixel value after the pixel point is updated.
R'=R+Δ, G'=G+Δ, B'=B+Δ (4)
Wherein, R ' is the red channel value after the pixel point is updated, G ' is the green channel value after the pixel point is updated, B ' is the blue channel value after the pixel point is updated, R is the red channel value before the pixel point is updated, G is the green channel value before the pixel point is updated, B is the blue channel value before the pixel point is updated, and Δ is the pixel update amplitude of the pixel point.
For a clearer explanation of the technical solution provided in step 404 above, a complete example is described below:
or, taking the example that the terminal processes one pixel point a in the video frame: if the color channel value of a pixel a in the video frame is (100, 120, 150), according to equation (1), under the conditions that a is 0.299, b is 0.587, and c is 0.144, the terminal determines that the luminance value of the pixel a is 121.94, and for convenience of calculation, the integer is 122. If the maximum brightness value of a plurality of pixel points in the video frame is 180 and the minimum brightness value is 100, the terminal can obtain the linear transformation index λ as 1.045 based on the formula (2). And the terminal obtains the pixel update amplitude delta of the pixel point A as 29.44 based on the formula (3), and obtains 29 by rounding. And the terminal obtains the color channel value (129, 149, 179) after the pixel point A is updated based on the formula (4).
Taking the example that the terminal processes another pixel point B in the same video frame: if the color channel value of pixel point B is (130, 160, 140), then according to formula (1), with a = 0.299, b = 0.587 and c = 0.144, the terminal determines that the brightness value of pixel point B is 152.95, rounded to 153 for convenience of calculation. With the highest brightness value 180 and the lowest brightness value 100, the terminal obtains the linear transformation index λ = 0.9675 based on formula (2). The terminal obtains the pixel update amplitude Δ = -23.07 of pixel point B based on formula (3), rounded to -23, and then obtains the updated color channel value (107, 137, 117) of pixel point B based on formula (4).
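Putting formulas (1) through (4) together, the two worked examples above can be reproduced with a short end-to-end sketch (the function name and the rounding points are ours, following the rounding used in the text):

```python
def enhance_pixel(rgb, lo, hi):
    """Update one pixel's color channel values per formulas (1)-(4): compute
    its brightness, derive the update amplitude from the frame's lowest and
    highest brightness values, and add the amplitude to each channel."""
    r, g, b = rgb
    y = round(0.299 * r + 0.587 * g + 0.144 * b)      # formula (1), rounded as in the text
    lam = 1 - (((y - lo) / (hi - lo)) * 2 - 1) * 0.1  # formula (2)
    delta = round(y ** lam - y)                       # formula (3), rounded as in the text
    return (r + delta, g + delta, b + delta)          # formula (4)

# Pixel point A: (100, 120, 150) -> (129, 149, 179)
# Pixel point B: (130, 160, 140) -> (107, 137, 117)
```

Note that the patent does not state whether the updated channel values are clamped to the valid range [0, 255]; a practical implementation would likely need to do so.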
To present the result of the video enhancement on pixel point A and pixel point B more clearly, formula (1) is applied again to the updated color channel value (129, 149, 179) of pixel point A and the updated color channel value (107, 137, 117) of pixel point B to obtain their updated brightness values.
Again with a = 0.299, b = 0.587 and c = 0.144, operating on the updated color channel value (129, 149, 179) of pixel point A gives an updated brightness value of 151.81, rounded to 152. Operating on the updated color channel value (107, 137, 117) of pixel point B gives an updated brightness value of 129.26, rounded to 129. It can be seen that, after the video enhancement method provided by the embodiment of the present application is applied, the brightness value of pixel point A increases from 122 to 152, while the brightness value of pixel point B decreases from 153 to 129.
From the above description it can be seen that, for pixel points with lower brightness values, the video enhancement method provided by the embodiment of the present application raises their brightness values, and for pixel points with higher brightness values it appropriately lowers them, so that the video frame is displayed to the user more clearly. Extending this to whole videos: for a video whose pixel points have a low average brightness value, the method raises the brightness of the video as a whole; for a video whose pixel points have a high average brightness value, it lowers the brightness of the video as a whole.
405. And displaying a target video frame, wherein the target video frame is a video frame consisting of a plurality of pixel points after the pixel values are updated.
Referring to fig. 7, which is an effect diagram after the video enhancement method provided by the embodiment of the present application is adopted, a left side of fig. 7 is a video playing effect without video enhancement, a middle part of the diagram is a video playing effect after the video enhancement method provided by the embodiment of the present application is adopted, and a right side of the diagram is a video playing effect after the video enhancement method in the related art is adopted. As can be seen from fig. 7, the video playing effect is better by using the video enhancement method provided by the embodiment of the present application.
According to the technical scheme provided by the embodiment of the application, during video decoding the terminal can update the pixel value of each pixel point based on the highest and lowest brightness values associated with the video frame, and this pixel value update is precisely the video enhancement process. Because different video frames are associated with different highest and lowest brightness values, the values the terminal uses when enhancing different frames are likewise individualized, so different video frames obtain different degrees of video enhancement. Likewise, different videos can be enhanced with different highest and lowest brightness values, thereby improving the definition of the videos.
Fig. 8 provides a schematic structural diagram of a video enhancement apparatus, and referring to fig. 8, the apparatus includes: a video acquisition module 801, a luminance value acquisition module 802, and a pixel value update module 803.
A video obtaining module 801, configured to obtain a video to be decoded.
The luminance value obtaining module 802 is configured to, in a process of decoding any video frame in a video to be decoded, obtain a highest luminance value and a lowest luminance value associated with any video frame, where the highest luminance value and the lowest luminance value are respectively a highest luminance value and a lowest luminance value of luminance values of a plurality of pixel points in any video frame.
The pixel value updating module 803 is configured to update the pixel values of the plurality of pixel points according to the highest luminance value and the lowest luminance value, where the luminance values indicated by the updated pixel values of the plurality of pixel points meet the target condition.
In a possible implementation manner, the pixel value updating module is configured to fuse the highest brightness value and the lowest brightness value with the brightness values of the plurality of pixels, respectively, to obtain pixel update amplitudes corresponding to the plurality of pixels. And updating the pixel values of the plurality of pixel points according to the pixel updating amplitude.
In a possible implementation manner, the pixel value updating module is configured to obtain a plurality of first differences between the luminance values of the plurality of pixels and the lowest luminance value. A plurality of first ratios between a plurality of first differences and a second difference are obtained, wherein the second difference is a difference between the highest brightness value and the lowest brightness value. And obtaining pixel update amplitudes corresponding to the plurality of pixel points according to the plurality of first ratios and the brightness values of the plurality of pixel points.
In a possible implementation manner, the pixel value updating module is configured to perform linear transformation on the plurality of first ratios respectively to obtain a plurality of transformation indexes. And respectively carrying out nonlinear transformation on the brightness values of the plurality of pixel points based on the plurality of transformation indexes to obtain pixel update amplitudes corresponding to the plurality of pixel points.
In a possible implementation manner, the pixel value updating module is configured to superimpose the pixel values of the plurality of pixel points with the corresponding pixel update amplitudes, respectively.
In one possible embodiment, the apparatus further comprises:
and the brightness information file acquisition module is used for acquiring a brightness information file, and the brightness information file stores the highest brightness value and the lowest brightness value which are respectively associated with a plurality of video frames in the video to be decoded.
And the video acquisition module is also used for acquiring the highest brightness value and the lowest brightness value associated with any video frame from the brightness information file according to the identifier of any video frame.
In one possible embodiment, the generating device of the brightness information file comprises:
the device comprises a pixel value acquisition module for acquiring pixel values of a plurality of pixel points in a plurality of video frames in a video to be decoded.
And the brightness value determining module is used for obtaining the brightness values of a plurality of pixel points in a plurality of video frames according to the pixel values of the plurality of pixel points in the plurality of video frames. And the brightness value acquisition module is used for acquiring the highest brightness value and the lowest brightness value which are respectively associated with the plurality of video frames according to the brightness values of the plurality of pixel points in the plurality of video frames.
And the brightness information file generating module is used for generating a brightness information file according to the identifications of the video frames and the highest brightness value and the lowest brightness value which are respectively associated with the video frames.
In one possible embodiment, the apparatus further comprises:
and the video frame display module is used for displaying a target video frame, and the target video frame is a video frame consisting of a plurality of pixel points after the pixel values are updated.
According to the technical scheme provided by the embodiment of the application, during video decoding the terminal can update the pixel value of each pixel point based on the highest and lowest brightness values associated with the video frame, and this pixel value update is precisely the video enhancement process. Because different video frames are associated with different highest and lowest brightness values, the values the terminal uses when enhancing different frames are likewise individualized, so different video frames obtain different degrees of video enhancement. Likewise, different videos can be enhanced with different highest and lowest brightness values, thereby improving the definition of the videos.
An embodiment of the present application provides a computer device, configured to perform the foregoing method, where the computer device may be implemented as a terminal or a server, and a structure of the terminal is introduced below:
fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 900 may be: smart phones, tablet computers, smart televisions, desktop computers, vehicle computers, portable computers, and the like.
In general, terminal 900 includes: one or more processors 901 and one or more memories 902.
Processor 901 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 901 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor: the main processor, also called the Central Processing Unit (CPU), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 902 is used to store at least one program code for execution by processor 901 to implement the video enhancement methods provided by the method embodiments herein.
In some embodiments, terminal 900 can also optionally include: a peripheral interface 903 and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 904, a display screen 905, a camera assembly 906, an audio circuit 907, a positioning assembly 908, and a power supply 909.
The peripheral interface 903 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 901 and the memory 902. In some embodiments, the processor 901, memory 902, and peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902 and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 904 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 904 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth.
The display screen 905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 905 is a touch display screen, the display screen 905 also has the ability to capture touch signals on or over the surface of the display screen 905. The touch signal may be input to the processor 901 as a control signal for processing. At this point, the display 905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard.
The camera assembly 906 is used to capture images or video. Optionally, camera assembly 906 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal.
Audio circuit 907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 901 for processing, or inputting the electric signals to the radio frequency circuit 904 for realizing voice communication.
The positioning component 908 is used to locate the current geographic Location of the terminal 900 for navigation or LBS (Location Based Service).
Power supply 909 is used to provide power to the various components in terminal 900. The power source 909 may be alternating current, direct current, a disposable battery, or a rechargeable battery.
In some embodiments, terminal 900 can also include one or more sensors 910. The one or more sensors 910 include, but are not limited to: acceleration sensor 911, gyro sensor 912, pressure sensor 913, fingerprint sensor 914, optical sensor 915, and proximity sensor 916.
The acceleration sensor 911 can detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 900.
The gyro sensor 912 may detect the body direction and rotation angle of the terminal 900, and may cooperate with the acceleration sensor 911 to acquire the user's 3D motion with respect to the terminal 900.
The pressure sensor 913 may be disposed on a side bezel of the terminal 900 and/or underneath the display 905. When the pressure sensor 913 is disposed on the side frame of the terminal 900, the user's holding signal of the terminal 900 may be detected, and the processor 901 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 913. When the pressure sensor 913 is disposed at a lower layer of the display screen 905, the processor 901 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 905.
The fingerprint sensor 914 is used to collect the user's fingerprint; the user is then identified according to the collected fingerprint, either by the processor 901 or by the fingerprint sensor 914 itself.
The optical sensor 915 is used to collect ambient light intensity. In one embodiment, the processor 901 may control the display brightness of the display screen 905 based on the ambient light intensity collected by the optical sensor 915.
The proximity sensor 916 is used to collect the distance between the user and the front face of the terminal 900.
Those skilled in the art will appreciate that the configuration shown in fig. 9 does not constitute a limitation of terminal 900, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
The computer device may also be implemented as a server, and the following describes a structure of the server:
fig. 10 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1000 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 1001 and one or more memories 1002, where the one or more memories 1002 store at least one program code that is loaded and executed by the one or more processors 1001 to implement the methods provided by the foregoing method embodiments. Of course, the server 1000 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and the server 1000 may further include other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, a computer readable storage medium, such as a memory, including program code executable by a processor to perform the video enhancement method of the above embodiments is also provided. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, which comprises computer program code stored in a computer-readable storage medium, which is read by a processor of a computer device from the computer-readable storage medium, and which is executed by the processor such that the computer device performs the above-mentioned video enhancement method.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by program code instructing the relevant hardware; the program code may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method of video enhancement, the method comprising:
acquiring a video to be decoded;
in the process of decoding any video frame in the video to be decoded, acquiring a highest brightness value and a lowest brightness value associated with the any video frame, wherein the highest brightness value and the lowest brightness value are respectively the highest brightness value and the lowest brightness value in the brightness values of a plurality of pixel points in the any video frame;
and updating the pixel values of the plurality of pixel points according to the highest brightness value and the lowest brightness value, wherein the brightness values indicated by the updated pixel values of the plurality of pixel points meet target conditions.
2. The method of claim 1, wherein the updating the pixel values of the plurality of pixels according to the highest luminance value and the lowest luminance value comprises:
respectively fusing the highest brightness value and the lowest brightness value with the brightness values of the plurality of pixel points to obtain pixel update amplitudes corresponding to the plurality of pixel points;
and updating the pixel values of the plurality of pixel points according to the pixel updating amplitude.
3. The method according to claim 2, wherein the fusing the highest luminance value and the lowest luminance value with the luminance values of the plurality of pixel points, respectively, to obtain pixel update amplitudes corresponding to the plurality of pixel points comprises:
acquiring a plurality of first differences between the brightness values of the plurality of pixel points and the lowest brightness value;
obtaining a plurality of first ratios between the plurality of first differences and a second difference, where the second difference is a difference between the highest brightness value and the lowest brightness value;
and obtaining pixel updating amplitudes corresponding to the plurality of pixel points according to the plurality of first ratios and the brightness values of the plurality of pixel points.
4. The method of claim 3, wherein obtaining pixel update magnitudes corresponding to the plurality of pixel points according to the plurality of first ratios and the brightness values of the plurality of pixel points comprises:
respectively carrying out linear transformation on the plurality of first ratios to obtain a plurality of transformation indexes;
and respectively carrying out nonlinear transformation on the brightness values of the plurality of pixel points based on the plurality of transformation indexes to obtain the pixel update amplitudes corresponding to the plurality of pixel points.
5. The method of claim 2, wherein updating the pixel values of the plurality of pixel points according to the pixel update amplitude comprises:
and overlapping the pixel values of the plurality of pixel points with the corresponding pixel updating amplitudes respectively.
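The steps of claims 3 to 5 can be sketched as a single pixel-update routine: compute the first ratio of each pixel, linearly transform it into a transformation index, apply a nonlinear (power) transformation of the brightness value based on that index to obtain the update amplitude, and superimpose the amplitude on the original value. The claims do not specify the linear-transformation coefficients or the exact form of the nonlinear transformation, so the constants `A`, `B` and the normalized power transform below are illustrative assumptions only.

```python
import numpy as np

# Coefficients of the linear transformation from first ratio to
# transformation index (claim 4). The claims leave these unspecified;
# the values below are assumptions for illustration.
A, B = 0.5, 0.5

def update_pixels(y, y_max, y_min, eps=1e-6):
    """Update pixel brightness values per the steps of claims 3-5.

    y     : array of pixel brightness values of one video frame
    y_max : highest brightness value associated with the frame
    y_min : lowest brightness value associated with the frame
    """
    # Claim 3: first differences and first ratios
    # (second difference = y_max - y_min, guarded against zero).
    ratio = (y - y_min) / max(y_max - y_min, eps)
    # Claim 4: linear transformation of each ratio into a transformation
    # index, then a nonlinear power transformation of the brightness value.
    index = A * ratio + B
    amplitude = np.power(y / 255.0, index) * 255.0 - y
    # Claim 5: superimpose the pixel update amplitude on the original value.
    return np.clip(y + amplitude, 0.0, 255.0)
```

With these assumed coefficients, an index below 1 brightens darker pixels (the power curve lies above the identity on [0, 1]), while the brightest pixel of the frame (ratio 1, index 1) is left unchanged.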
6. The method of claim 1, wherein prior to obtaining a highest luminance value and a lowest luminance value associated with the any one video frame, the method further comprises:
acquiring a brightness information file, wherein the brightness information file stores a highest brightness value and a lowest brightness value which are respectively associated with a plurality of video frames in the video to be decoded;
the obtaining a highest brightness value and a lowest brightness value associated with the any video frame comprises:
and acquiring the highest brightness value and the lowest brightness value associated with any video frame from the brightness information file according to the identifier of any video frame.
7. The method according to claim 6, wherein the method for generating the brightness information file comprises:
acquiring pixel values of a plurality of pixel points in a plurality of video frames in the video to be decoded;
obtaining the brightness values of a plurality of pixel points in a plurality of video frames according to the pixel values of the plurality of pixel points in the plurality of video frames;
obtaining the highest brightness value and the lowest brightness value respectively associated with the plurality of video frames according to the brightness values of the plurality of pixel points in the plurality of video frames;
and generating the brightness information file according to the identifications of the video frames and the highest brightness value and the lowest brightness value which are respectively associated with the video frames.
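The generation of the brightness information file in claim 7 can be sketched as follows. The patent does not fix a file format or a brightness formula; the JSON layout keyed by frame identifier and the BT.601 luma weights here are illustrative assumptions.

```python
import json
import numpy as np

def build_brightness_info(frames):
    """Highest and lowest brightness values per frame (claim 7, steps 1-3)."""
    info = {}
    for frame_id, frame in enumerate(frames):
        # Brightness of each pixel from its RGB pixel value
        # (BT.601 weights, an assumption).
        y = frame @ np.array([0.299, 0.587, 0.114])
        info[str(frame_id)] = {"max": float(y.max()), "min": float(y.min())}
    return info

def write_brightness_info_file(frames, path):
    """Claim 7, step 4: persist identifiers with their brightness range.

    JSON is an illustrative choice; the patent does not fix a format.
    """
    with open(path, "w") as f:
        json.dump(build_brightness_info(frames), f)
```

At decoding time (claim 6), the terminal looks up the entry for a frame's identifier in this file to obtain the highest and lowest brightness values without rescanning the frame.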
8. The method of claim 1, wherein after updating the pixel values of the plurality of pixels according to the highest luminance value and the lowest luminance value, the method further comprises:
and displaying a target video frame, wherein the target video frame is a video frame formed by the plurality of pixel points after the pixel values are updated.
9. A video enhancement apparatus, characterized in that the apparatus comprises:
the video acquisition module is used for acquiring a video to be decoded;
a luminance value obtaining module, configured to obtain a highest luminance value and a lowest luminance value associated with any video frame in the to-be-decoded video in a process of decoding the any video frame, where the highest luminance value and the lowest luminance value are a highest luminance value and a lowest luminance value, respectively, of luminance values of a plurality of pixel points in the any video frame;
and the pixel value updating module is used for updating the pixel values of the plurality of pixel points according to the highest brightness value and the lowest brightness value, and the brightness values indicated by the updated pixel values of the plurality of pixel points meet target conditions.
10. The apparatus according to claim 9, wherein the pixel value updating module is configured to fuse the highest luminance value and the lowest luminance value with luminance values of the plurality of pixels, respectively, to obtain pixel update amplitudes corresponding to the plurality of pixels; and updating the pixel values of the plurality of pixel points according to the pixel updating amplitude.
11. The apparatus according to claim 10, wherein the pixel value updating module is configured to obtain a plurality of first differences between the luminance values of the plurality of pixels and the lowest luminance value; obtaining a plurality of first ratios between the plurality of first differences and a second difference, where the second difference is a difference between the highest brightness value and the lowest brightness value; and obtaining pixel updating amplitudes corresponding to the plurality of pixel points according to the plurality of first ratios and the brightness values of the plurality of pixel points.
12. The apparatus according to claim 11, wherein the pixel value updating module is configured to perform linear transformation on the first ratios respectively to obtain a plurality of transformation indexes; and respectively carrying out nonlinear transformation on the brightness values of the plurality of pixel points based on the plurality of transformation indexes to obtain the pixel update amplitudes corresponding to the plurality of pixel points.
13. The apparatus of claim 10, wherein the pixel value updating module is configured to superimpose the pixel values of the plurality of pixel points with the corresponding pixel update amplitudes respectively.
14. A computer device comprising one or more processors and one or more memories having at least one program code stored therein, the program code being loaded and executed by the one or more processors to implement the video enhancement method of any one of claims 1 to 8.
15. A computer-readable storage medium having stored therein at least one program code, the program code being loaded and executed by a processor to implement the video enhancement method of any one of claims 1 to 8.
CN202011110789.8A 2020-10-16 2020-10-16 Video enhancement method, device, equipment and storage medium Active CN112261438B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011110789.8A CN112261438B (en) 2020-10-16 2020-10-16 Video enhancement method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112261438A true CN112261438A (en) 2021-01-22
CN112261438B CN112261438B (en) 2022-04-15

Family

ID=74244685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011110789.8A Active CN112261438B (en) 2020-10-16 2020-10-16 Video enhancement method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112261438B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2323373A1 (en) * 2008-08-07 2011-05-18 ZTE Corporation Video enhancing method and device thereof
CN104202604A (en) * 2014-08-14 2014-12-10 腾讯科技(深圳)有限公司 Video enhancing method and device
CN106412383A (en) * 2015-07-31 2017-02-15 阿里巴巴集团控股有限公司 Processing method and apparatus of video image
CN107680056A (en) * 2017-09-27 2018-02-09 深圳市华星光电半导体显示技术有限公司 A kind of image processing method and device
CN109345485A (en) * 2018-10-22 2019-02-15 北京达佳互联信息技术有限公司 A kind of image enchancing method, device, electronic equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40037784

Country of ref document: HK

GR01 Patent grant