WO2023020493A1 - Image quality adjustment method, apparatus, device and medium - Google Patents

Image quality adjustment method, apparatus, device and medium

Info

Publication number
WO2023020493A1
WO2023020493A1 (PCT/CN2022/112786)
Authority
WO
WIPO (PCT)
Prior art keywords
image quality
detection result
multimedia resource
quality enhancement
algorithm
Prior art date
Application number
PCT/CN2022/112786
Other languages
English (en)
French (fr)
Inventor
熊一能
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Priority to EP22857804.3A (published as EP4340374A1)
Publication of WO2023020493A1
Priority to US18/540,532 (published as US20240127406A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8455 Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular to interactive tool generation and a related apparatus, device and medium.
  • the image quality can be enhanced through an image quality enhancement algorithm, but the current image quality enhancement method is relatively simple, and the image quality enhancement effect cannot meet the requirements.
  • the present disclosure provides an image quality adjustment method, device, equipment and medium.
  • An embodiment of the present disclosure provides a method for adjusting image quality, the method including:
  • acquiring multimedia resources, where the multimedia resources include video or images;
  • the image quality enhancement strategy includes at least one image quality enhancement algorithm.
  • An embodiment of the present disclosure also provides an image quality adjustment device, the device includes:
  • a resource acquisition module configured to acquire multimedia resources, where the multimedia resources include video or images;
  • a scene quality module configured to determine a scene detection result and an image quality detection result corresponding to the multimedia resource, wherein the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality detection The result is used to indicate the image quality of the multimedia resource;
  • An image quality enhancement module configured to determine an image quality enhancement strategy based on the scene detection result and the image quality detection result, and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, wherein the image quality enhancement strategy includes at least one image quality enhancement algorithm.
  • An embodiment of the present disclosure also provides an electronic device, which includes: a processor; and a memory for storing instructions executable by the processor; the processor is configured to read the executable instructions from the memory and execute the instructions to implement the image quality adjustment method provided by the embodiments of the present disclosure.
  • the embodiment of the present disclosure also provides a computer-readable storage medium, the storage medium stores a computer program, and the computer program is used to execute the image quality adjustment method provided by the embodiment of the present disclosure.
  • An embodiment of the present disclosure further provides a computer program product, including a computer program/instruction, and when the computer program/instruction is executed by a processor, the image quality adjustment method provided in the embodiment of the present disclosure is implemented.
  • the technical solutions provided by the embodiments of the present disclosure have the following advantages: the image quality adjustment solution provided by the embodiments of the present disclosure can acquire a multimedia resource, where the multimedia resource includes a video or an image; determine the scene detection result and image quality detection result corresponding to the multimedia resource; determine an image quality enhancement strategy based on the scene detection result and the image quality detection result; and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, wherein the image quality enhancement strategy includes at least one image quality enhancement algorithm.
  • With this solution, a corresponding image quality enhancement strategy can be determined based on the scene and image quality of the video or image, and the strategy can then be used to enhance the image quality.
  • Since the image quality enhancement strategy is determined from information in the two dimensions of scene and image quality, and can be composed of one or more image quality enhancement algorithms, adaptive and targeted image quality enhancement is realized, which significantly improves the image quality enhancement effect and thereby greatly improves the user experience.
  • FIG. 1 is a schematic flowchart of an image quality adjustment method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of another image quality adjustment method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of an image quality adjustment process provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of an algorithm routing table provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an image quality adjustment device provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, ie “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • Image quality enhancement is a function in image or video editing tools. It usually provides adjustment capabilities in many sub-dimensions, such as saturation, contrast, sharpness, highlights, and shadows. However, certain professional knowledge is required to understand what these dimensions mean and to make appropriate adjustments, which is not friendly to ordinary users. In addition, complex parameter adjustment also greatly increases the editing workload, reduces the efficiency with which users publish videos or images, and affects the user's publishing experience.
  • an embodiment of the present disclosure provides a method for adjusting image quality, which will be introduced in conjunction with specific embodiments below.
  • Fig. 1 is a schematic flow chart of an image quality adjustment method provided by an embodiment of the present disclosure.
  • the method can be executed by an image quality adjustment device, where the device can be implemented by software and/or hardware, and generally can be integrated into an electronic device.
  • the method includes:
  • Step 101 Acquire multimedia resources, where the multimedia resources include videos or images.
  • the multimedia resource can be any video or image that needs image quality enhancement processing, and the specific file format and source are not limited.
  • the multimedia resource can be a video or image captured in real time, or a video or image downloaded from the Internet.
  • Step 102 determine the scene detection result and the image quality detection result corresponding to the multimedia resource.
  • the scene detection result is used to indicate the semantic result of at least one dimension of the multimedia resource.
  • Scene is a kind of semantics.
  • the scene semantics to be expressed by multimedia resources can include described objects and scene categories, etc.
  • the scene detection result can be understood as the result of detecting scene semantics of one or more dimensions on multimedia resources.
  • the scene detection result may include at least one of a day and night result, a target object detection result, an exposure degree, etc., and the target object may be a human face.
  • the quality detection result is used to indicate the image quality of the multimedia resource.
  • the image quality detection result refers to the parameter detection result related to the display effect of the multimedia resource.
  • the image quality detection result in the embodiment of the present disclosure may include the degree of noise and/or the degree of blur, where noise is unnecessary or unwanted interference information in the image.
  • Specifically, the noise detection of the multimedia resource may be performed through a neural-network-based noise recognition model; the blur degree of the multimedia resource may be identified by determining the peak signal-to-noise ratio, which is inversely proportional to the blur degree, that is, the higher the peak signal-to-noise ratio, the less blurry the multimedia resource.
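  • As an illustrative sketch only, the peak signal-to-noise ratio mentioned above can be computed as follows; how the reference image is obtained is not specified here and is simply assumed to be available.

```python
import numpy as np

def psnr(reference: np.ndarray, target: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images of identical shape."""
    mse = np.mean((reference.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)
```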
  • determining the scene detection result corresponding to the multimedia resource may include: detecting the multimedia resource using a deep learning model for day and night classification, and determining the day and night result corresponding to the multimedia resource, where the day and night result includes day and night; and/or, determining the face detection result of the multimedia resource through a face recognition algorithm.
  • the deep learning model for day and night classification can be any of a variety of classification models; for example, it can be a Support Vector Machine (SVM) classifier, or a Convolutional Neural Network (CNN) for day and night classification.
  • SVM Support Vector Machine
  • CNN convolutional neural network
  • Specifically, the brightness histogram of the multimedia resource can be calculated and classified by the SVM classifier, or the multimedia resource can be resized and then classified by the convolutional neural network; the detection result obtained is day or night.
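  • A minimal sketch of the brightness-histogram/SVM variant described above is given below; the feature size, the kernel and the training inputs (`train_frames`, `labels`) are illustrative assumptions, not values taken from the disclosure.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def luminance_histogram(frame_bgr: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalized brightness histogram used here as the day/night feature vector."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def train_day_night_classifier(train_frames, labels) -> SVC:
    """Fit an RBF-kernel SVM on histogram features (0 = night, 1 = day)."""
    features = np.stack([luminance_histogram(f) for f in train_frames])
    return SVC(kernel="rbf").fit(features, labels)

def classify_day_night(frame_bgr: np.ndarray, classifier: SVC) -> str:
    return "day" if classifier.predict([luminance_histogram(frame_bgr)])[0] == 1 else "night"
```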
  • the face recognition algorithm can be any algorithm capable of face recognition; for example, it can be a face recognition convolutional neural network. Specifically, the face area of the multimedia resource is extracted through the face recognition convolutional neural network, or the face area of the multimedia resource can be extracted and matched through extraction and matching of preset facial feature points, and the obtained face detection result may or may not include a face area.
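  • The face detector itself is not fixed by the embodiment (a CNN or feature-point matching may be used); the sketch below substitutes an off-the-shelf Haar-cascade detector purely as a stand-in that returns the same kind of face-area output.

```python
import cv2

# Stand-in face detector; the embodiment describes a CNN or feature-point matcher,
# so the Haar cascade here is only a placeholder producing comparable output.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return list(boxes)  # empty list when no face area is found
```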
  • the exposure degree in the scene detection result corresponding to the multimedia resource may be determined by using an automatic exposure system (Automatic Exposure Control, AEC).
  • AEC Automatic Exposure Control
  • the exposure levels in the embodiments of the present disclosure may include underexposure, normal exposure and overexposure.
  • In the embodiment of the present disclosure, the face detection result and the exposure degree in the scene detection result can be used to correct the day and night result.
  • For example, when the multimedia resource is actually shot in the daytime but the indoor light is insufficient, the day and night result may be misjudged as night; in this case, if the face detection result shows that there is a face area and the exposure degree is normal exposure or overexposure, the day and night result can be corrected to day.
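  • The correction rule described above can be sketched as a simple check; the exposure labels used here ('normal', 'overexposed', 'underexposed') are assumed names for the three exposure degrees.

```python
def correct_day_night(day_night: str, has_face: bool, exposure: str) -> str:
    """Override a 'night' misjudgement for dim indoor shots with a visible face."""
    if day_night == "night" and has_face and exposure in ("normal", "overexposed"):
        return "day"
    return day_night
```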
  • Step 103 Determine an image quality enhancement strategy based on the scene detection result and the image quality detection result, and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, wherein the image quality enhancement strategy includes at least one image quality enhancement algorithm.
  • the image quality enhancement strategy may be a comprehensive solution (pipeline) for performing image quality enhancement processing on the multimedia resource.
  • the image quality enhancement strategy may include at least one image quality enhancement algorithm.
  • an image quality enhancement algorithm may be an algorithm that automatically detects the multimedia resource and processes the targeted areas that need processing, and usually uses deep learning.
  • when the image quality enhancement strategy includes multiple image quality enhancement algorithms, the multiple image quality enhancement algorithms have an execution sequence, which can be determined according to actual conditions.
  • the image quality enhancement algorithm may include at least one image quality enhancement algorithm among a noise reduction algorithm, a color brightness enhancement algorithm, a skin color protection algorithm, and a sharpening algorithm.
  • the color brightness enhancement algorithm can be realized based on the deep neural network.
  • the color brightness enhancement algorithm based on the deep neural network can train a convolutional neural network by constructing a color brightness enhancement data set, and then use the trained convolutional neural network to perform color brightness enhancement on the multimedia resource.
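  • A minimal sketch of such a convolutional network is shown below; the layer sizes, the residual formulation and the loss mentioned in the comment are illustrative assumptions rather than details taken from the disclosure.

```python
import torch
import torch.nn as nn

class ColorBrightnessNet(nn.Module):
    """Tiny residual CNN: predicts a color/brightness correction added to the input."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 3, H, W) frames normalized to [0, 1]
        return torch.clamp(x + self.body(x), 0.0, 1.0)

# Training would pair low-quality frames with color/brightness-enhanced targets
# from the constructed data set and minimize, e.g., an L1 loss between them.
```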
  • the skin color protection algorithm refers to extracting the skin color range for the face area in the multimedia resource, then performing skin color detection and segmentation in the face area, and feathering and blurring the mask of the skin color area.
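  • The skin-tone segmentation and mask feathering described above might look like the following sketch; the YCrCb thresholds and the blur kernel are illustrative assumptions.

```python
import cv2
import numpy as np

def skin_protection_mask(frame_bgr: np.ndarray, face_box) -> np.ndarray:
    """Segment skin tones inside the face box and feather the mask edges."""
    x, y, w, h = face_box
    mask = np.zeros(frame_bgr.shape[:2], dtype=np.uint8)
    face = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(face, (0, 133, 77), (255, 173, 127))  # assumed skin-tone range
    mask[y:y + h, x:x + w] = skin
    # Feathering/blurring so later color adjustments blend smoothly at skin edges.
    return cv2.GaussianBlur(mask, (21, 21), 0)
```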
  • determining the image quality enhancement strategy based on the scene detection result and the image quality detection result may include: determining the corresponding image quality enhancement strategy by searching the algorithm routing table or using the algorithm branch decision tree according to the scene detection result and the image quality detection result .
  • the algorithm routing table is a routing table including multiple image quality enhancement strategies
  • the algorithm branch decision tree is a decision tree including multiple branch judgment strategies.
  • the algorithm routing table may be a routing table including multiple image quality enhancement strategies in different situations, and each image quality enhancement strategy is composed of at least one image quality enhancement algorithm.
  • the algorithm branch decision tree may be a decision tree including multiple branch judgment strategies, and each branch judgment strategy has a sequence of execution.
  • Specifically, the scene detection result and the image quality detection result can be used to look up the algorithm routing table to obtain the corresponding image quality enhancement strategy; or, the scene detection result and the image quality detection result can be input into the algorithm branch decision tree, and branch judgments are performed one by one according to the preset execution order of the multiple branch judgment strategies.
  • each branch judgment strategy can determine the image quality enhancement algorithm corresponding to the judgment result of the current branch, and after the final judgment, an image quality enhancement strategy composed of at least one image quality enhancement algorithm is obtained.
  • an image quality enhancement policy may be used to perform image quality enhancement processing on the multimedia resource to obtain an enhanced multimedia resource.
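  • One way to picture the routing-table lookup is the sketch below; the keys, algorithm names and fallback strategy are hypothetical and only illustrate mapping detection results to an ordered list of enhancement algorithms.

```python
# Hypothetical routing table: a key combines scene and image quality detection
# results, and the value is the image quality enhancement strategy (an ordered
# list of enhancement algorithms).
ROUTING_TABLE = {
    ("night", "underexposed", "high_noise", "face", "not_blurred"):
        ["denoise", "color_brightness", "skin_protection", "sharpen"],
    ("day", "normal", "low_noise", "no_face", "not_blurred"):
        ["sharpen"],
}

def select_strategy(scene_result: tuple, quality_result: tuple) -> list:
    key = tuple(scene_result) + tuple(quality_result)
    return ROUTING_TABLE.get(key, ["color_brightness"])  # assumed fallback

def enhance(frames: list, strategy: list, algorithms: dict) -> list:
    """Apply each algorithm of the strategy, in order, to every frame."""
    for name in strategy:
        frames = [algorithms[name](frame) for frame in frames]
    return frames
```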
  • when determining the image quality enhancement algorithm of a multimedia resource, the algorithm can also be determined according to the description (meta) information of the multimedia resource.
  • the description information can be attribute information included in the multimedia resource.
  • when the multimedia resource is a video, the description information can be a video title or a video summary, etc.
  • keywords can be extracted from the description information, and the corresponding image quality enhancement algorithm can be determined according to a preset mapping relationship between keywords and image quality enhancement algorithms.
  • the image quality enhancement strategy including at least one image quality enhancement algorithm can be described by an execution graph composed of algorithm nodes, and the algorithm nodes can be processed in a chain-type serial manner, a branch-type parallel manner, or a combination of the two, which is not specifically limited.
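  • Such an execution graph of serial and parallel algorithm nodes could be run roughly as sketched below; how parallel branches are merged is not specified above, so the simple averaging used here is only an assumption for illustration.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def run_execution_graph(frame: np.ndarray, stages: list, algorithms: dict) -> np.ndarray:
    """Run a strategy given as stages, e.g.
    [["denoise"], ["color_brightness", "skin_protection"], ["sharpen"]]."""
    for stage in stages:
        if len(stage) == 1:  # chain-type serial node
            frame = algorithms[stage[0]](frame)
        else:                # branch-type parallel nodes
            with ThreadPoolExecutor() as pool:
                outputs = list(pool.map(lambda name: algorithms[name](frame.copy()), stage))
            frame = np.mean(outputs, axis=0).astype(frame.dtype)  # assumed merge rule
    return frame
```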
  • the image quality adjustment solution acquires multimedia resources, which include videos or images; determines the scene detection results and image quality detection results corresponding to the multimedia resources, and determines the image quality enhancement strategy based on the scene detection results and image quality detection results , and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, wherein the image quality enhancement strategy includes at least one image quality enhancement algorithm.
  • Using this solution, the corresponding image quality enhancement strategy can be determined based on the scene and image quality of the video or image, and the strategy can then be used to enhance the image quality.
  • Since the image quality enhancement strategy is determined from information in the two dimensions of scene and image quality, and can be composed of one or more image quality enhancement algorithms, adaptive and targeted image quality enhancement is realized, which significantly improves the image quality enhancement effect and thereby greatly improves the user experience.
  • determining the scene detection result and image quality detection result corresponding to the multimedia resource may include: extracting multiple key frames from the multimedia resource; and determining the scene detection result and image quality detection result corresponding to the multimedia resource by detecting the multiple key frames.
  • the key frame may be one of multiple video frames included in the video, the key frame may represent a video, and the video frame may be the smallest unit constituting the video.
  • when the multimedia resource is a video, multiple key frames in the video can be extracted first, and scene detection and image quality detection of the multimedia resource can be realized by performing scene detection and image quality detection on the multiple key frames, so as to obtain the scene detection result and the image quality detection result.
  • extracting multiple key frames from the multimedia resource may include: dividing the multimedia resource into multiple video segments, where the similarity between two adjacent video segments is less than a preset threshold; and extracting multiple key frames for each video segment.
  • the key frames can be used to represent a video segment; they can be obtained by uniformly sampling frames from the video segment, and the specific number can be determined according to the actual situation.
  • when the multimedia resource is a video, the video can be divided into multiple video segments with continuous scenes through transition detection.
  • the transition detection process can be to determine, in sequence, the similarity between every two adjacent frames of the video; if the similarity is less than the preset threshold, it means that the scene has changed between the current two adjacent frames.
  • the video can then be split between the current two adjacent frames.
  • the two video segments obtained by the split contain the two adjacent frames respectively, so the similarity between the two video segments is also smaller than the preset threshold. The preset threshold may be determined according to actual conditions. After the video is divided into multiple video segments, multiple key frames can be extracted for each video segment.
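  • A simple transition-detection sketch is shown below; it compares HSV color histograms of adjacent frames, and the histogram sizes and the similarity threshold are assumed values, not parameters from the disclosure.

```python
import cv2
import numpy as np

def split_into_segments(frames: list, threshold: float = 0.7) -> list:
    """Split a list of frames into continuous-scene segments via transition detection."""
    def histogram(frame):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    segments, current = [], [frames[0]]
    for previous, frame in zip(frames, frames[1:]):
        similarity = cv2.compareHist(histogram(previous), histogram(frame),
                                     cv2.HISTCMP_CORREL)
        if similarity < threshold:  # scene change between the two adjacent frames
            segments.append(current)
            current = []
        current.append(frame)
    segments.append(current)
    return segments
```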
  • determining the scene detection result and image quality detection result corresponding to the multimedia resource through the detection of the multiple key frames may include: determining, through scene detection and image quality detection of the multiple key frames included in each video segment, a segment scene detection result and a segment image quality detection result corresponding to each video segment.
  • after the multiple key frames of each video segment are extracted, the key frames can be used as input for subsequent scene and image quality detection.
  • when determining the scene detection results and image quality detection results of the multimedia resource, the processing can be performed in units of video segments, that is, the segment scene detection result and segment image quality detection result corresponding to each video segment are determined through scene detection and image quality detection of the multiple key frames included in that segment; the specific determination method is the same as in the above-mentioned embodiments and is not repeated here.
  • the process of information aggregation may include: performing quantitative statistics on the detection results of a target dimension over the multiple key frames, determining the number of key frames corresponding to each detection result, and determining the detection result whose number of key frames is greater than or equal to a preset number as the final detection result in the target dimension, where the preset number can be greater than or equal to half of the number of key frames; if the numbers of key frames corresponding to the detection results are the same, the confidence of each detection result is determined, and the detection result with the highest confidence is determined as the final detection result in the target dimension. In other words, for the aggregation of results, the final detection result can first be determined by voting on the classification results; if it cannot be determined by voting, the final detection result is determined according to the confidence.
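  • The aggregation rule above (majority vote first, confidence as the tie-breaker) can be sketched as follows; the half-of-key-frames threshold is one of the allowed choices mentioned above.

```python
from collections import Counter

def aggregate_dimension(frame_results: list, frame_confidences: list):
    """Aggregate one detection dimension over a video segment's key frames."""
    counts = Counter(frame_results)
    best, votes = counts.most_common(1)[0]
    unique_winner = list(counts.values()).count(votes) == 1
    if unique_winner and votes >= len(frame_results) / 2:
        return best  # decided by voting
    # Otherwise fall back to the single most confident key-frame result.
    best_index = max(range(len(frame_results)), key=lambda i: frame_confidences[i])
    return frame_results[best_index]
```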
  • performing image quality enhancement processing on the multimedia resource includes: performing image quality enhancement processing on each video segment in the multimedia resource according to the segment image quality enhancement algorithm determined from the segment scene detection result and the segment image quality detection result corresponding to that segment.
  • when performing image quality enhancement on a video, the image quality enhancement process can be performed in units of video segments, that is, the corresponding segment image quality enhancement algorithm is determined by searching the algorithm routing table or using the algorithm branch decision tree according to the segment scene detection result and segment image quality detection result of each video segment, and the segment image quality enhancement algorithm is used to perform image quality enhancement processing on that video segment to obtain an enhanced video segment.
  • In this way, not only can the image quality of the video be enhanced, but corresponding image quality enhancement methods can also be applied to video segments of different scenes within the video, so that the image quality enhancement of the video is more accurate and more targeted, and the image quality of the enhanced video is more diverse.
  • FIG. 2 is a schematic flowchart of another image quality adjustment method provided by an embodiment of the present disclosure. On the basis of the above embodiments, this embodiment further optimizes the above image quality adjustment method. As shown in Figure 2, the method includes:
  • Step 201 acquire multimedia resources.
  • the multimedia resource includes video or image.
  • Step 202 determine the scene detection result and the image quality detection result corresponding to the multimedia resource.
  • the scene detection results include at least one of day and night results, target object detection results, and exposure levels
  • the image quality detection results include noise levels and/or blur levels.
  • determining the scene detection result corresponding to the multimedia resource includes: using a deep learning model for day and night classification to detect the multimedia resource and determine the day and night result corresponding to the multimedia resource, where the day and night result includes day and night; and/or, using a face recognition algorithm to determine the face detection result of the multimedia resource.
  • Step 203 according to the scene detection result and the image quality detection result, determine the corresponding image quality enhancement strategy by searching the algorithm routing table or using the algorithm branch decision tree.
  • the image quality enhancement strategy includes at least one image quality enhancement algorithm.
  • the image quality enhancement algorithm includes at least one of a noise reduction algorithm, a color brightness enhancement algorithm, a skin color protection algorithm, and a sharpening algorithm.
  • the algorithm routing table is a routing table including multiple image quality enhancement strategies
  • the algorithm branch decision tree is a decision tree including multiple branch judgment strategies.
  • when the multimedia resource is a video, determining the scene detection result and image quality detection result corresponding to the multimedia resource includes: extracting multiple key frames from the multimedia resource; and determining the scene detection result and image quality detection result corresponding to the multimedia resource by detecting the multiple key frames.
  • extracting multiple key frames from the multimedia resource may include: dividing the multimedia resource into multiple video segments, where the similarity between two adjacent video segments is less than a preset threshold; and extracting multiple key frames for each video segment.
  • determining the scene detection result and image quality detection result corresponding to the multimedia resource by detecting the multiple key frames includes: determining, through scene detection and image quality detection of the multiple key frames included in each video segment, a segment scene detection result and a segment image quality detection result corresponding to each video segment.
  • Step 204 Perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy.
  • image quality enhancement processing is performed on the multimedia resource by performing image quality enhancement processing on each video segment in the multimedia resource according to the segment image quality enhancement algorithm determined from the segment scene detection result and the segment image quality detection result corresponding to that segment.
  • FIG. 3 is a schematic diagram of an image quality adjustment process provided by an embodiment of the present disclosure.
  • a video is taken as an example of a multimedia resource to illustrate the image quality adjustment process provided by an embodiment of the present disclosure.
  • the specific process may include: 1. First, the video is divided into segments of continuous scenes through transition detection; as shown in the figure, the complete video is divided into multiple video segments. 2. For each video segment, several frames are extracted as input for scene and image quality detection. 3. The detection algorithms are called to detect the scene and image quality of the extracted frames respectively.
  • the detection dimensions include but not limited to day and night detection, noise detection, exposure detection, face detection and blur detection in the picture.
  • image quality enhancement scheme can be described by a graph composed of algorithm nodes.
  • the image quality enhancement scheme determined in Figure 3 may include four algorithms of noise reduction, color brightness enhancement, skin color protection, and sharpening.
  • the arrows represent the execution order, and color brightness enhancement and skin color protection can be processed in parallel. 8. Image quality enhancement processing is performed on each video segment according to the image quality enhancement scheme corresponding to that segment, and an enhanced video segment is obtained.
  • FIG. 4 is a schematic diagram of an algorithmic routing table provided by an embodiment of the present disclosure.
  • an exemplary algorithm routing table is shown, which can be established and stored in advance according to the actual situation; when in use, after the scene and image quality are determined, the corresponding image quality enhancement strategy can be determined by searching the algorithm routing table.
  • For example, the scene and image quality in the first column of the figure are: night scene (that is, the day and night result is night), underexposure, a noise level in the range [a, b], a face detected, and no blur.
  • the corresponding image quality enhancement strategy can include the four image quality enhancement algorithms in the figure: noise reduction, color brightness enhancement, skin color protection and sharpening; the execution sequence is shown in FIG. 4, in which different algorithms can be represented by circles with different attributes, for example, circles with different gray scales or different fill colors.
  • the video is divided into continuous-scene segments, and scene detection and image quality detection are performed on each segment to obtain its scene and image quality information; then, according to the scene and image quality information, an image quality enhancement scheme composed of multiple algorithms is generated by means of a routing table or a decision tree, and each video segment is enhanced.
  • the image quality adjustment solution provided by the embodiments of the present disclosure acquires a multimedia resource, where the multimedia resource includes a video or an image; determines the scene detection result and image quality detection result corresponding to the multimedia resource; determines the corresponding image quality enhancement strategy by searching the algorithm routing table or using the algorithm branch decision tree according to the scene detection result and image quality detection result; and uses the image quality enhancement strategy to perform image quality enhancement processing on the multimedia resource.
  • Using this solution, the corresponding image quality enhancement strategy can be determined based on the scene and image quality of the video or image, and the strategy can then be used to enhance the image quality.
  • Since the image quality enhancement strategy is determined from information in the two dimensions of scene and image quality, and can be composed of one or more image quality enhancement algorithms, adaptive and targeted image quality enhancement is realized, which significantly improves the image quality enhancement effect and thereby greatly improves the user experience.
  • FIG. 5 is a schematic structural diagram of an image quality adjustment device provided by an embodiment of the present disclosure.
  • the device may be implemented by software and/or hardware, and may generally be integrated into an electronic device. As shown in Figure 5, the device includes:
  • a resource acquisition module 301 configured to acquire multimedia resources, where the multimedia resources include video or images;
  • the scene quality module 302 is configured to determine a scene detection result and an image quality detection result corresponding to the multimedia resource, wherein the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality The detection result is used to indicate the image quality of the multimedia resource;
  • An image quality enhancement module 303 configured to determine an image quality enhancement strategy based on the scene detection result and the image quality detection result, and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, wherein the The image quality enhancement strategy includes at least one image quality enhancement algorithm.
  • the scene detection results include at least one of day and night results, target object detection results, and exposure levels
  • the image quality detection results include noise levels and/or blur levels.
  • the scene quality module 302 is specifically used for:
  • a corresponding image quality enhancement strategy is determined by searching an algorithm routing table or using an algorithm branch decision tree.
  • the algorithm routing table is a routing table including multiple image quality enhancement strategies
  • the algorithm branch decision tree is a decision tree including multiple branch judgment strategies.
  • when the image quality enhancement strategy includes multiple image quality enhancement algorithms, the multiple image quality enhancement algorithms have an execution sequence.
  • the scene quality module 302 includes:
  • a frame extraction unit configured to extract multiple key frames from the multimedia resource
  • the detecting unit is configured to determine a scene detection result and an image quality detection result corresponding to the multimedia resource by detecting the plurality of key frames.
  • the frame extraction unit is specifically used for:
  • a plurality of key frames are extracted for each of the video clips.
  • the detection unit is used for:
  • the image quality enhancement module 303 is specifically used for:
  • the image quality enhancement processing is performed on each of the video segments in the multimedia resources.
  • the image quality enhancement algorithm includes at least one of a noise reduction algorithm, a color brightness enhancement algorithm, a skin color protection algorithm, and a sharpening algorithm.
  • the image quality adjustment device provided by the embodiments of the present disclosure can execute the image quality adjustment method provided by any embodiment of the present disclosure, and has corresponding functional modules and beneficial effects for executing the method.
  • An embodiment of the present disclosure further provides a computer program product, including a computer program/instruction, and when the computer program/instruction is executed by a processor, the image quality adjustment method provided in any embodiment of the present disclosure is implemented.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (such as infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium (such as a digital video disc (digital video disc, DVD)), or a semiconductor medium (such as a solid state disk (solid state disk, SSD)), etc.
  • a magnetic medium such as a floppy disk, a hard disk, or a magnetic tape
  • an optical medium such as a digital video disc (digital video disc, DVD)
  • a semiconductor medium such as a solid state disk (solid state disk, SSD)
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Referring specifically to FIG. 6 , it shows a schematic structural diagram of an electronic device 400 suitable for implementing an embodiment of the present disclosure.
  • the electronic device 400 in the embodiment of the present disclosure may include, but is not limited to, mobile phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (Tablet Computers), PMPs (Portable Multimedia Players), vehicle-mounted terminals ( Mobile terminals such as car navigation terminals) and stationary terminals such as digital TVs, desktop computers and the like.
  • the electronic device shown in FIG. 6 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • an electronic device 400 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403.
  • ROM read-only memory
  • RAM random access memory
  • In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored.
  • the processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404.
  • An input/output (I/O) interface 405 is also connected to bus 404 .
  • the following devices can be connected to the I/O interface 405: an input device 406 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409.
  • the communication means 409 may allow the electronic device 400 to perform wireless or wired communication with other devices to exchange data. While FIG. 6 shows electronic device 400 having various means, it should be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 409, or from storage means 408, or from ROM 402.
  • When the computer program is executed by the processing device 401, the above-mentioned functions defined in the image quality adjustment method of the embodiments of the present disclosure are executed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can transmit, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to: wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network).
  • Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device: acquires a multimedia resource, and the multimedia resource includes video or image; determines that the multimedia resource Corresponding scene detection results and image quality detection results, wherein the scene detection results are used to indicate the semantic results of at least one dimension of the multimedia resource, and the image quality detection results are used to indicate the image quality of the multimedia resource ; Determine an image quality enhancement strategy based on the scene detection result and the image quality detection result, and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, wherein the image quality enhancement strategy includes at least An image quality enhancement algorithm.
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through an Internet connection provided by an Internet service provider).
  • LAN local area network
  • WAN wide area network
  • Internet service provider such as AT&T, MCI, Sprint, EarthLink, MSN, GTE, etc.
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented by a dedicated hardware-based system that performs the specified functions or operations , or may be implemented by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. Wherein, the name of a unit does not constitute a limitation of the unit itself under certain circumstances.
  • FPGAs Field Programmable Gate Arrays
  • ASICs Application Specific Integrated Circuits
  • ASSPs Application Specific Standard Products
  • SOCs System on Chips
  • CPLD Complex Programmable Logic Device
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • RAM random access memory
  • ROM read only memory
  • EPROM or flash memory erasable programmable read only memory
  • CD-ROM compact disk read only memory
  • magnetic storage or any suitable combination of the foregoing.
  • the present disclosure provides a method for adjusting image quality, including:
  • acquiring multimedia resources, where the multimedia resources include video or images;
  • the image quality enhancement strategy includes at least one image quality enhancement algorithm.
  • the scene detection results include at least one of day and night results, target object detection results, and exposure levels
  • the image quality detection results include noise level and/or blurriness
  • determining an image quality enhancement strategy based on the scene detection result and the image quality detection result includes:
  • a corresponding image quality enhancement strategy is determined by searching an algorithm routing table or using an algorithm branch decision tree.
  • the algorithm routing table is a routing table including multiple image quality enhancement strategies, and the algorithm branch decision tree is a decision tree including multiple branch judgment strategies.
  • when the image quality enhancement strategy includes multiple image quality enhancement algorithms, the multiple image quality enhancement algorithms have an execution sequence.
  • determining the scene detection result and the image quality detection result corresponding to the multimedia resource includes:
  • extracting multiple key frames from the multimedia resource includes: dividing the multimedia resource into multiple video segments, where the similarity between two adjacent video segments is less than a preset threshold; and
  • extracting a plurality of key frames for each of the video segments.
  • determining the scene detection result and the image quality detection result corresponding to the multimedia resource by detecting the multiple key frames includes:
  • performing image quality enhancement processing on the multimedia resources includes:
  • the image quality enhancement algorithm includes at least one of a noise reduction algorithm, a color brightness enhancement algorithm, a skin color protection algorithm, and a sharpening algorithm.
  • an image quality adjustment device including:
  • a resource acquisition module configured to acquire multimedia resources, where the multimedia resources include video or images;
  • a scene quality module configured to determine a scene detection result and an image quality detection result corresponding to the multimedia resource, wherein the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality detection The result is used to indicate the image quality of the multimedia resource;
  • An image quality enhancement module configured to determine an image quality enhancement strategy based on the scene detection result and the image quality detection result, and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, wherein the image quality enhancement strategy includes at least one image quality enhancement algorithm.
  • the scene detection results include at least one of day and night results, target object detection results, and exposure levels
  • the image quality detection results include noise level and/or blurriness
  • the scene image quality module is specifically used for:
  • a corresponding image quality enhancement strategy is determined by searching an algorithm routing table or using an algorithm branch decision tree.
  • the algorithm routing table is a routing table including multiple image quality enhancement strategies, and the algorithm branch decision tree is a decision tree including multiple branch judgment strategies.
  • when the image quality enhancement strategy includes multiple image quality enhancement algorithms, the multiple image quality enhancement algorithms have an execution sequence.
  • when the multimedia resource is a video, the scene image quality module includes:
  • a frame extraction unit configured to extract multiple key frames from the multimedia resource
  • the detecting unit is configured to determine a scene detection result and an image quality detection result corresponding to the multimedia resource by detecting the plurality of key frames.
  • the frame extraction unit is specifically configured to:
  • a plurality of key frames are extracted for each of the video clips.
  • the detection unit is used for:
  • the image quality enhancement module is specifically used for:
  • the image quality enhancement algorithm includes at least one of a noise reduction algorithm, a color brightness enhancement algorithm, a skin color protection algorithm, and a sharpening algorithm.
  • the present disclosure provides an electronic device, including:
  • the processor is configured to read the executable instruction from the memory, and execute the instruction to implement any one of the image quality adjustment methods provided in the present disclosure.
  • the present disclosure provides a computer-readable storage medium, the storage medium stores a computer program, and the computer program is used to execute any one of the image quality adjustment methods provided in the present disclosure.
  • the present disclosure provides a computer program product, including computer programs/instructions, where the computer program/instructions, when executed by a processor, implement any one of the image quality adjustment methods provided in the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present disclosure relate to an image quality adjustment method, apparatus, device, and medium. The method includes: acquiring a multimedia resource, where the multimedia resource includes a video or an image; determining a scene detection result and an image quality detection result corresponding to the multimedia resource, where the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality detection result is used to indicate the image quality of the multimedia resource; determining an image quality enhancement strategy based on the scene detection result and the image quality detection result, and performing image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, where the image quality enhancement strategy includes at least one image quality enhancement algorithm. The embodiments of the present disclosure achieve adaptive and targeted image quality enhancement, significantly improve the enhancement effect, and thus greatly improve the user experience.

Description

Image quality adjustment method, apparatus, device, and medium
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202110950397.0, filed on August 18, 2021 and entitled "Image quality adjustment method, apparatus, device, and medium", the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of image processing technologies, and in particular, to an image quality adjustment method, apparatus, device, and medium.
Background
With the continuous development of Internet technologies and electronic devices, users have increasingly high requirements for the quality of images and videos.
To improve the user experience, image quality can be enhanced with image quality enhancement algorithms; however, current enhancement approaches are relatively simple, and the enhancement effect cannot meet the requirements.
Summary
To solve the above technical problem, or at least partially solve it, the present disclosure provides an image quality adjustment method, apparatus, device, and medium.
An embodiment of the present disclosure provides an image quality adjustment method, the method including:
acquiring a multimedia resource, where the multimedia resource includes a video or an image;
determining a scene detection result and an image quality detection result corresponding to the multimedia resource, where the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality detection result is used to indicate the image quality of the multimedia resource;
determining an image quality enhancement strategy based on the scene detection result and the image quality detection result, and performing image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, where the image quality enhancement strategy includes at least one image quality enhancement algorithm.
An embodiment of the present disclosure further provides an image quality adjustment apparatus, the apparatus including:
a resource acquisition module configured to acquire a multimedia resource, where the multimedia resource includes a video or an image;
a scene and image quality module configured to determine a scene detection result and an image quality detection result corresponding to the multimedia resource, where the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality detection result is used to indicate the image quality of the multimedia resource;
an image quality enhancement module configured to determine an image quality enhancement strategy based on the scene detection result and the image quality detection result, and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, where the image quality enhancement strategy includes at least one image quality enhancement algorithm.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to read the executable instructions from the memory and execute the instructions to implement the image quality adjustment method provided by the embodiments of the present disclosure.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, where the computer program is used to perform the image quality adjustment method provided by the embodiments of the present disclosure.
An embodiment of the present disclosure further provides a computer program product, including a computer program/instructions, where the computer program/instructions, when executed by a processor, implement the image quality adjustment method provided by the embodiments of the present disclosure.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages. In the image quality adjustment solution provided by the embodiments of the present disclosure, a multimedia resource is acquired, where the multimedia resource includes a video or an image; a scene detection result and an image quality detection result corresponding to the multimedia resource are determined; an image quality enhancement strategy is determined based on the scene detection result and the image quality detection result, and image quality enhancement processing is performed on the multimedia resource according to the strategy, where the strategy includes at least one image quality enhancement algorithm. With this technical solution, the corresponding enhancement strategy can be determined from the scene and the image quality of the video or image and used to enhance the image quality; since the strategy is determined from the two dimensions of scene and image quality and may be composed of one or more enhancement algorithms, adaptive and targeted image quality enhancement is achieved, the enhancement effect is significantly improved, and the user experience is greatly improved.
Brief description of the drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, identical or similar reference numerals denote identical or similar elements. It should be understood that the drawings are schematic, and components and elements are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of an image quality adjustment method provided by an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of another image quality adjustment method provided by an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of an image quality adjustment process provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of an algorithm routing table provided by an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an image quality adjustment apparatus provided by an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed description
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are merely illustrative and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and its variants are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Definitions of other terms are given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not intended to limit the order of the functions performed by these apparatuses, modules, or units or their interdependence.
It should be noted that the modifiers "a/an" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive, and those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Image quality enhancement is a feature of image or video editing tools, which usually provide adjustment capabilities across many fine-grained dimensions, such as saturation, contrast, sharpness, highlights, and shadows. However, a certain level of expertise is required to understand what these dimensions mean and to tune them properly, which is unfriendly to ordinary users. In addition, complicated parameter tuning greatly increases the editing workload and reduces the efficiency with which users publish videos or images, thereby harming the publishing experience.
To reduce the user's tuning workload while enhancing image quality, some automatic enhancement algorithms have emerged, such as Contrast Limited Adaptive Histogram Equalization (CLAHE) and Unsharp Mask (USM) sharpening. However, current enhancement algorithms are relatively simple: usually either a fairly simple automatic enhancement of a single dimension, such as contrast enhancement, sharpening, or noise reduction, or a fixed set of automatic enhancement algorithms. In real scenarios, however, multiple different image quality problems may coexist, and such simple approaches cannot produce a good result; the enhancement effect cannot meet the requirements. To solve the above problems, an embodiment of the present disclosure provides an image quality adjustment method, which is described below with reference to specific embodiments.
Fig. 1 is a schematic flowchart of an image quality adjustment method provided by an embodiment of the present disclosure. The method may be performed by an image quality adjustment apparatus, which may be implemented in software and/or hardware and may generally be integrated into an electronic device. As shown in Fig. 1, the method includes:
Step 101: acquire a multimedia resource, where the multimedia resource includes a video or an image.
The multimedia resource may be any video or image that needs image quality enhancement processing; its file format and source are not limited. For example, the multimedia resource may be a video or image captured in real time, or a video or image downloaded from the Internet.
Step 102: determine a scene detection result and an image quality detection result corresponding to the multimedia resource.
The scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource. A scene is one type of semantics; the scene semantics expressed by a multimedia resource may include the described objects, the scene category, and so on, and the scene detection result can be understood as the result obtained by detecting the scene semantics of the multimedia resource in one or more dimensions. In the embodiments of the present disclosure, the scene detection result may include at least one of a day/night result, a detection result of a target object, and an exposure level, and the target object may be a human face.
The image quality detection result is used to indicate the image quality of the multimedia resource, i.e., the result of detecting parameters of the multimedia resource related to the display effect. In the embodiments of the present disclosure, the image quality detection result may include a noise level and/or a blur level, where noise refers to unnecessary or redundant interfering information present in an image or video.
In the embodiments of the present disclosure, after the multimedia resource is acquired, detection algorithms of multiple dimensions may be invoked to perform scene detection and image quality detection on the multimedia resource, so as to determine the corresponding scene detection result and image quality detection result. The noise level and blur level in the image quality detection result may be determined in various ways, which are not limited in the embodiments of the present disclosure. For example, the image quality detection of the multimedia resource may be performed by a neural-network-based noise recognition model, and the blur level of the multimedia resource may be identified by determining the peak signal-to-noise ratio (PSNR); the PSNR is inversely proportional to the blur level, i.e., the higher the PSNR, the lower the blur level of the multimedia resource.
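As an illustration of the PSNR computation mentioned above, the following is a minimal sketch in Python/NumPy, assuming 8-bit images and an available reference image (for example, an uncompressed source frame); neither assumption is specified by the present disclosure, and the function name is chosen for illustration only.

```python
import numpy as np

def psnr(frame: np.ndarray, reference: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a frame and a reference image.

    Both inputs are assumed to be 8-bit images of the same shape; a higher
    PSNR is treated as indicating a lower blur level, as described above.
    """
    frame = frame.astype(np.float64)
    reference = reference.astype(np.float64)
    mse = np.mean((frame - reference) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)
```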
Optionally, determining the scene detection result corresponding to the multimedia resource may include: detecting the multimedia resource with a day/night classification deep learning model to determine the day/night result corresponding to the multimedia resource, where the day/night result is either day or night; and/or determining a face detection result of the multimedia resource with a face recognition algorithm.
The day/night classification deep learning model may be any of several neural-network-based classification models, for example a Support Vector Machine (SVM) classifier or a convolutional neural network (CNN) for day/night classification, which may be determined according to the actual situation. Specifically, a brightness histogram of the multimedia resource may be computed and classified by the SVM classifier, or the multimedia resource may be resized and classified by the convolutional neural network, yielding a detection result of day or night.
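The following is a minimal sketch of the brightness-histogram-plus-SVM variant described above, assuming OpenCV, NumPy, and scikit-learn are available; the histogram bin count, the SVM kernel, and the labeled training frames are illustrative assumptions rather than parameters given by the present disclosure.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def brightness_histogram(image_bgr: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalized luminance histogram used as the day/night feature vector."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return hist.astype(np.float64) / max(hist.sum(), 1)

def train_day_night_svm(frames: list[np.ndarray], labels: list[int]) -> SVC:
    """Fit an SVM on labeled frames (1 = day, 0 = night)."""
    features = np.stack([brightness_histogram(f) for f in frames])
    classifier = SVC(kernel="rbf")
    classifier.fit(features, np.asarray(labels))
    return classifier

def classify_day_night(classifier: SVC, image_bgr: np.ndarray) -> str:
    prediction = classifier.predict(brightness_histogram(image_bgr)[None, :])[0]
    return "day" if prediction == 1 else "night"
```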
The face recognition algorithm may be any algorithm capable of recognizing faces; for example, it may be a convolutional neural network for face recognition. Specifically, the face region in the multimedia resource may be extracted by a face recognition convolutional neural network, or the face region of the multimedia resource may be extracted by extraction and matching of preset facial feature points; the resulting face detection result may be either containing a face region or not containing a face region.
Optionally, the exposure level in the scene detection result corresponding to the multimedia resource may be determined by an automatic exposure control (AEC) system. In the embodiments of the present disclosure, the exposure level may include underexposure, normal exposure, and overexposure.
Optionally, in the embodiments of the present disclosure, the face detection result and the exposure level in the scene may be used to correct the day/night result. For example, if the multimedia resource was captured during the day but in a poorly lit indoor location, the day/night result may be misclassified as night; in this case, if the face detection result indicates that a face region exists and the exposure level is normal exposure or overexposure, the day/night result can be corrected to day.
Step 103: determine an image quality enhancement strategy based on the scene detection result and the image quality detection result, and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, where the image quality enhancement strategy includes at least one image quality enhancement algorithm.
The image quality enhancement strategy may be a comprehensive processing pipeline used to perform image quality enhancement on the multimedia resource. The strategy may include at least one image quality enhancement algorithm, which may be an algorithm capable of automatically detecting the multimedia resource and processing, in a targeted manner, the regions that need processing, typically a deep learning algorithm. When the image quality enhancement strategy includes multiple image quality enhancement algorithms, the multiple algorithms have an execution order, which may be determined according to the actual situation.
In the embodiments of the present disclosure, the image quality enhancement algorithm may include at least one of a noise reduction algorithm, a color and brightness enhancement algorithm, a skin color protection algorithm, and a sharpening algorithm. The color and brightness enhancement algorithm may be implemented based on a deep neural network: a color and brightness enhancement dataset is built to train a convolutional neural network, and the trained network is then used to enhance the color and brightness of the multimedia resource. The skin color protection algorithm extracts the skin color range for the face region in the multimedia resource, then performs skin color detection and segmentation within the face region, and feathers (blurs) the mask of the skin color region.
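A minimal sketch of the skin color protection step described above follows, assuming OpenCV is available and that a face detector has already produced a bounding box; the YCrCb skin-tone bounds and the feathering kernel size are illustrative assumptions, not values specified by the present disclosure.

```python
import cv2
import numpy as np

def skin_protection_mask(image_bgr: np.ndarray,
                         face_box: tuple[int, int, int, int]) -> np.ndarray:
    """Build a feathered skin mask inside a detected face region.

    face_box is (x, y, width, height) from a face detector. The YCrCb skin
    range and the feathering kernel size below are illustrative assumptions.
    """
    x, y, w, h = face_box
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)

    face = image_bgr[y:y + h, x:x + w]
    ycrcb = cv2.cvtColor(face, cv2.COLOR_BGR2YCrCb)
    # Commonly used (but assumed) skin-tone bounds in the Cr/Cb channels.
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask[y:y + h, x:x + w] = skin

    # Feather the mask edges so later enhancements blend smoothly around skin.
    feathered = cv2.GaussianBlur(mask, (31, 31), 0)
    return feathered.astype(np.float32) / 255.0
```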
Optionally, determining the image quality enhancement strategy based on the scene detection result and the image quality detection result may include: determining the corresponding image quality enhancement strategy according to the scene detection result and the image quality detection result, by looking up an algorithm routing table or by using an algorithm branch decision tree. The algorithm routing table is a routing table that includes multiple image quality enhancement strategies, and the algorithm branch decision tree is a decision tree that includes multiple branch judgment strategies.
The algorithm routing table may be a routing table containing image quality enhancement strategies for multiple different cases, each strategy being a combination of at least one image quality enhancement algorithm. The algorithm branch decision tree may be a decision tree containing multiple branch judgment strategies, each of which has an execution order.
Specifically, after the scene detection result and the image quality detection result corresponding to the multimedia resource are determined, the corresponding image quality enhancement strategy composed of at least one image quality enhancement algorithm may be determined by looking it up in the algorithm routing table according to the scene detection result and the image quality detection result; alternatively, the scene detection result and the image quality detection result may be fed into the algorithm branch decision tree, branch judgments are made one by one according to the preset execution order of the multiple branch judgment strategies, and after each branch judgment the image quality enhancement algorithm corresponding to the current branch result can be determined; after the final judgment, an image quality enhancement strategy composed of at least one image quality enhancement algorithm is obtained. The multimedia resource can then be processed with the image quality enhancement strategy to obtain the enhanced multimedia resource.
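As one possible illustration of the routing-table lookup described above, the following sketch keys a table on the combined scene and image quality results and returns an ordered list of algorithm names; the condition fields, table entries, and fallback strategy are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SceneAndQuality:
    """Condition key built from the scene and image quality detection results."""
    day_night: str      # "day" or "night"
    exposure: str       # "under", "normal", or "over"
    has_face: bool
    noisy: bool
    blurry: bool

# Illustrative routing table: each entry maps a condition to an ordered
# list of enhancement algorithms (the names here are placeholders).
ALGORITHM_ROUTING_TABLE = {
    SceneAndQuality("night", "under", True, True, False):
        ["denoise", "color_brightness", "skin_protection", "sharpen"],
    SceneAndQuality("day", "normal", False, False, True):
        ["sharpen"],
}

def select_strategy(condition: SceneAndQuality) -> list[str]:
    """Look up the enhancement strategy; fall back to a mild default."""
    return ALGORITHM_ROUTING_TABLE.get(condition, ["color_brightness"])
```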
Optionally, when determining the image quality enhancement algorithm for the multimedia resource, the determination may also take into account the descriptive (meta) information of the multimedia resource, which may be attribute information included in the resource; for example, when the multimedia resource is a video, the descriptive information may be the video title or the video summary. Keywords may be extracted from the descriptive information of the multimedia resource, and the corresponding image quality enhancement algorithm is determined according to a preset mapping between keywords and image quality enhancement algorithms.
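A minimal sketch of the keyword-based mapping described above might look as follows; the keywords and the algorithms they map to are purely illustrative assumptions.

```python
# Hypothetical keyword-to-algorithm mapping derived from a video title or summary.
KEYWORD_TO_ALGORITHM = {
    "night": "denoise",
    "portrait": "skin_protection",
    "landscape": "color_brightness",
}

def algorithms_from_meta(title: str) -> list[str]:
    """Pick extra enhancement algorithms based on keywords in the meta info."""
    lowered = title.lower()
    return [algo for keyword, algo in KEYWORD_TO_ALGORITHM.items() if keyword in lowered]
```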
Optionally, after the image quality enhancement strategy including at least one image quality enhancement algorithm is determined, it may be described as an execution graph composed of algorithm nodes, and the algorithm nodes may be processed serially in a chain, in parallel in branches, or in a combination of both, without limitation.
In the image quality adjustment solution provided by the embodiments of the present disclosure, a multimedia resource is acquired, where the multimedia resource includes a video or an image; a scene detection result and an image quality detection result corresponding to the multimedia resource are determined; an image quality enhancement strategy is determined based on the scene detection result and the image quality detection result, and image quality enhancement processing is performed on the multimedia resource according to the strategy, where the strategy includes at least one image quality enhancement algorithm. With this technical solution, the corresponding enhancement strategy can be determined from the scene and the image quality of the video or image and used to enhance the image quality; since the strategy is determined from the two dimensions of scene and image quality and may be composed of one or more enhancement algorithms, adaptive and targeted enhancement is achieved, the enhancement effect is significantly improved, and the user experience is greatly improved.
In some embodiments, when the multimedia resource is a video, determining the scene detection result and the image quality detection result corresponding to the multimedia resource may include: extracting multiple key frames from the multimedia resource; and determining the scene detection result and the image quality detection result corresponding to the multimedia resource by detecting the multiple key frames.
A key frame may be one of the multiple video frames included in the video and can represent a segment of the video; a video frame may be the smallest unit making up the video. When the multimedia resource is a video, multiple key frames may first be extracted from the video, and performing scene detection and image quality detection on each of these key frames achieves scene detection and image quality detection of the multimedia resource, yielding the scene detection result and the image quality detection result.
Optionally, extracting multiple key frames from the multimedia resource may include: dividing the multimedia resource into multiple video segments, where the similarity between two adjacent video segments is less than a preset threshold; and extracting multiple key frames from each video segment. A key frame may be used to represent a video segment, the key frames may be obtained by uniformly sampling the video segment, and the specific number may be determined according to the actual situation.
Specifically, when the multimedia resource is a video, the video may first be divided into multiple video segments with continuous scenes by shot transition detection. The transition detection process may sequentially determine the similarity of every two adjacent frames of the video; if the similarity is less than the preset threshold, the scene changes between the current two adjacent frames, and the video can be split at the boundary between them, so that the two resulting segments respectively contain the two adjacent frames, and the similarity of the two segments is therefore also less than the preset threshold. The preset threshold may be determined according to the actual situation. After the video is divided into multiple video segments, multiple key frames may be extracted from each segment.
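The following sketch illustrates one way the shot transition detection and per-segment key frame extraction described above could be realized, assuming OpenCV and NumPy; the histogram-correlation similarity measure, the threshold value, and the number of key frames per segment are illustrative assumptions.

```python
import cv2
import numpy as np

def frame_similarity(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Histogram correlation between two frames (one assumed similarity measure)."""
    hists = []
    for frame in (frame_a, frame_b):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hists.append(cv2.normalize(hist, hist))
    return float(cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL))

def split_and_sample(frames: list[np.ndarray],
                     threshold: float = 0.7,
                     keyframes_per_segment: int = 3) -> list[list[np.ndarray]]:
    """Split a frame sequence at shot transitions, then sample key frames per segment."""
    segments, current = [], [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if frame_similarity(prev, cur) < threshold:   # scene change detected
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)

    keyframes = []
    for segment in segments:
        idx = np.linspace(0, len(segment) - 1, keyframes_per_segment).astype(int)
        keyframes.append([segment[i] for i in sorted(set(idx))])
    return keyframes
```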
Optionally, determining the scene detection result and the image quality detection result corresponding to the multimedia resource by detecting the multiple key frames may include: determining the segment scene detection result and the segment image quality detection result corresponding to each video segment by performing scene detection and image quality detection on the multiple key frames included in each video segment.
After the multiple key frames of each video segment are extracted as described above, the key frames may be used as the input of the subsequent scene and image quality detection. When determining the scene detection result and the image quality detection result of the multimedia resource, processing may be performed per video segment; that is, the segment scene detection result and the segment image quality detection result corresponding to each video segment are determined by performing scene detection and image quality detection on the multiple key frames included in that segment, in the manner described in the above embodiments, which is not repeated here.
Since each video segment has multiple key frames, the segment scene detection results and segment image quality detection results corresponding to the multiple key frames may be aggregated to determine the scene and image quality of each segment. Taking the detection result of one dimension of the segment scene detection result and segment image quality detection result of one video segment as an example, the aggregation process may include: counting the detection results of the multiple key frames in the target dimension, determining the number of key frames corresponding to each detection result, and taking the detection result whose number of key frames is greater than or equal to a preset number as the final detection result in the target dimension, where the preset number may be greater than or equal to half of the number of key frames; if the numbers of key frames corresponding to the different detection results are equal, the confidence of each detection result is determined, and the detection result with the highest confidence is taken as the final detection result in the target dimension. In this aggregation, the final detection result may first be determined by voting on the classification results; if it cannot be determined, the final detection result may be determined by further judging the numerical results.
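A minimal sketch of the per-dimension aggregation described above follows; the (label, confidence) input format is an assumption made for illustration.

```python
from collections import Counter

def aggregate_dimension(results: list[tuple[str, float]]) -> str:
    """Aggregate one detection dimension over a segment's key frames.

    Each item is (label, confidence), e.g. ("night", 0.92). Majority vote
    decides; a confidence comparison breaks ties, as described above.
    """
    counts = Counter(label for label, _ in results)
    ranked = counts.most_common()
    majority = len(results) / 2
    if ranked[0][1] >= majority and (len(ranked) == 1 or ranked[0][1] > ranked[1][1]):
        return ranked[0][0]
    # Tie (or no clear majority): pick the label whose best confidence is highest.
    best = {}
    for label, conf in results:
        best[label] = max(best.get(label, 0.0), conf)
    return max(best, key=best.get)

# Example: three key frames disagree; the vote settles on "night".
print(aggregate_dimension([("night", 0.8), ("night", 0.7), ("day", 0.9)]))
```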
In some embodiments, performing image quality enhancement processing on the multimedia resource includes: performing image quality enhancement processing on each video segment of the multimedia resource separately, according to the segment image quality enhancement algorithm determined by the segment scene detection result and segment image quality detection result corresponding to that segment.
For a multimedia resource divided into multiple video segments as described above, after the segment scene detection result and segment image quality detection result of each segment are determined, image quality enhancement processing may be performed per video segment; that is, the corresponding segment image quality enhancement algorithm is determined according to the segment scene detection result and segment image quality detection result of each segment by looking up the algorithm routing table or using the algorithm branch decision tree, and each video segment is enhanced with its segment image quality enhancement algorithm to obtain the enhanced segments.
In the above solution, not only can image quality enhancement be applied to the video as a whole, but video segments of different scenes within the video can also be enhanced with their respective enhancement methods, making the enhancement more accurate and targeted, and making the quality of the enhanced video more diverse.
Fig. 2 is a schematic flowchart of another image quality adjustment method provided by an embodiment of the present disclosure. This embodiment further optimizes the above image quality adjustment method on the basis of the above embodiments. As shown in Fig. 2, the method includes:
Step 201: acquire a multimedia resource.
The multimedia resource includes a video or an image.
Step 202: determine a scene detection result and an image quality detection result corresponding to the multimedia resource.
The scene detection result includes at least one of a day/night result, a detection result of a target object, and an exposure level, and the image quality detection result includes a noise level and/or a blur level.
Optionally, determining the scene detection result corresponding to the multimedia resource includes: detecting the multimedia resource with a day/night classification deep learning model to determine the day/night result corresponding to the multimedia resource, where the day/night result is either day or night; and/or determining a face detection result of the multimedia resource with a face recognition algorithm.
Step 203: determine the corresponding image quality enhancement strategy according to the scene detection result and the image quality detection result by looking up an algorithm routing table or using an algorithm branch decision tree.
The image quality enhancement strategy includes at least one image quality enhancement algorithm. Optionally, the image quality enhancement algorithm includes at least one of a noise reduction algorithm, a color and brightness enhancement algorithm, a skin color protection algorithm, and a sharpening algorithm.
Optionally, the algorithm routing table is a routing table that includes multiple image quality enhancement strategies, and the algorithm branch decision tree is a decision tree that includes multiple branch judgment strategies.
Optionally, when the multimedia resource is a video, determining the scene detection result and the image quality detection result corresponding to the multimedia resource includes: extracting multiple key frames from the multimedia resource; and determining the scene detection result and the image quality detection result corresponding to the multimedia resource by detecting the multiple key frames.
Optionally, extracting multiple key frames from the multimedia resource may include: dividing the multimedia resource into multiple video segments, where the similarity between two adjacent video segments is less than a preset threshold; and extracting multiple key frames from each video segment. Optionally, determining the scene detection result and the image quality detection result corresponding to the multimedia resource by detecting the multiple key frames includes: determining the segment scene detection result and the segment image quality detection result corresponding to each video segment by performing scene detection and image quality detection on the multiple key frames included in each video segment.
Step 204: perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy.
Optionally, when the multimedia resource is a video, performing image quality enhancement processing on the multimedia resource includes: performing image quality enhancement processing on each video segment of the multimedia resource separately, according to the segment image quality enhancement algorithm determined by the segment scene detection result and segment image quality detection result corresponding to that segment.
As an example, Fig. 3 is a schematic diagram of an image quality adjustment process provided by an embodiment of the present disclosure; Fig. 3 takes a video as the multimedia resource to illustrate the process. As shown in Fig. 3, the specific process may include: 1. The video is first divided into segments of continuous scenes by shot transition detection; in the figure, the complete video is divided into multiple video segments. 2. For each video segment, several frames are extracted and used as the input for scene and image quality detection. 3. Detection algorithms are invoked to detect the scene and image quality of the extracted frames; the detection dimensions include but are not limited to the day/night detection, noise detection, exposure detection, face detection, and blur detection shown in the figure, where the day/night result, exposure level, and face detection belong to the scene detection result, and the noise level and blur level belong to the image quality detection result. 4. The scene and image quality detection results of the multiple frames are aggregated to obtain the scene and image quality of each video segment. 5. Based on the scene and image quality of each segment and the descriptive (meta) information carried by the video, an image quality enhancement pipeline corresponding to the video can be generated, which may be a combination of multiple processing algorithms with an execution order. 6. Specific ways of doing this may include: a. sequential algorithm routing with the scene, image quality, and descriptive information as condition items; b. an algorithm branch decision tree with the scene, image quality, and descriptive information as condition items. 7. The enhancement pipeline can be described as a graph composed of algorithm nodes, where the algorithms allow chained serial processing, branched parallel processing, and combinations of the two. The pipeline determined in Fig. 3 may include four algorithms: noise reduction, color and brightness enhancement, skin color protection, and sharpening; the arrows indicate the execution order, and color and brightness enhancement and skin color protection may be processed in parallel. 8. Each video segment is enhanced according to its corresponding enhancement pipeline to obtain the enhanced segments. A minimal sketch of such an execution graph follows.
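The sketch below assumes NumPy; the node names mirror Fig. 3, the dependency-driven walk stands in for a real scheduler, and the averaging merge rule for parallel branches and the placeholder algorithms are purely illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable

import numpy as np

@dataclass
class AlgorithmNode:
    """One enhancement step in the execution graph."""
    name: str
    func: Callable[[np.ndarray], np.ndarray]
    depends_on: list[str] = field(default_factory=list)

def run_pipeline(image: np.ndarray, nodes: list[AlgorithmNode]) -> np.ndarray:
    """Execute nodes in dependency order (a simple topological walk).

    Nodes with the same dependencies form parallel branches; here their
    outputs are averaged as a placeholder merge rule, and the last node
    in the list is assumed to be the terminal node.
    """
    outputs: dict[str, np.ndarray] = {}
    remaining = list(nodes)
    while remaining:
        ready = [n for n in remaining if all(d in outputs for d in n.depends_on)]
        if not ready:
            raise ValueError("cyclic or unsatisfied dependencies")
        for node in ready:
            inputs = [outputs[d] for d in node.depends_on] or [image]
            merged = np.mean(inputs, axis=0).astype(image.dtype)
            outputs[node.name] = node.func(merged)
            remaining.remove(node)
    return outputs[nodes[-1].name]

# Placeholder algorithms standing in for real denoise / enhance / sharpen models.
identity = lambda img: img
pipeline = [
    AlgorithmNode("denoise", identity),
    AlgorithmNode("color_brightness", identity, ["denoise"]),
    AlgorithmNode("skin_protection", identity, ["denoise"]),
    AlgorithmNode("sharpen", identity, ["color_brightness", "skin_protection"]),
]
result = run_pipeline(np.zeros((64, 64, 3), dtype=np.uint8), pipeline)
```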
As another example, Fig. 4 is a schematic diagram of an algorithm routing table provided by an embodiment of the present disclosure. The algorithm routing table shown in Fig. 4 may be built and stored in advance; in actual use, after the scene and image quality are determined, the corresponding image quality enhancement strategy can be determined by looking it up in the table. For example, the scene and image quality in the first column of the figure are: night scene (i.e., night), underexposed, noise level in the range [a, b], face detected, not blurry; the corresponding enhancement strategy may include the four enhancement algorithms of noise reduction, color and brightness enhancement, skin color protection, and sharpening, with the execution order shown in Fig. 4, where different algorithms may be represented by circles with different attributes, for example circles with different gray levels or fill colors.
In the above solution, when the multimedia resource is a video, the video is divided into continuous segments, and scene detection and image quality detection are performed on each continuous scene segment to obtain its scene and image quality information; then, based on the scene and image quality information, an enhancement pipeline composed of multiple algorithms is generated by means of the routing table or the decision tree, and each video segment is enhanced.
In this solution, for a video or image of unknown scene and image quality, a scene- and quality-based enhancement scheme is proposed: the video or image is detected to analyze the existing image quality problems, and then a suitable comprehensive solution and algorithm parameters are automatically selected; adaptive and targeted image quality enhancement is achieved by automatically combining multiple different algorithms, which avoids the low accuracy caused by a single enhancement algorithm or a fixed algorithm flow being unable to adapt to all scenes, thereby significantly improving the enhancement effect and greatly reducing the user's editing workload.
In the image quality adjustment solution provided by the embodiments of the present disclosure, a multimedia resource including a video or an image is acquired; the scene detection result and the image quality detection result corresponding to the multimedia resource are determined; the corresponding image quality enhancement strategy is determined according to the scene detection result and the image quality detection result by looking up the algorithm routing table or using the algorithm branch decision tree; and image quality enhancement processing is performed on the multimedia resource according to the strategy. With this technical solution, the corresponding enhancement strategy can be determined based on the scene and the image quality of the video or image and used to enhance the image quality; since the strategy is determined from the two dimensions of scene and image quality and may be composed of one or more enhancement algorithms, adaptive and targeted enhancement is achieved, the enhancement effect is significantly improved, and the user experience is greatly improved.
Fig. 5 is a schematic structural diagram of an image quality adjustment apparatus provided by an embodiment of the present disclosure. The apparatus may be implemented in software and/or hardware and may generally be integrated into an electronic device. As shown in Fig. 5, the apparatus includes:
a resource acquisition module 301 configured to acquire a multimedia resource, where the multimedia resource includes a video or an image;
a scene and image quality module 302 configured to determine a scene detection result and an image quality detection result corresponding to the multimedia resource, where the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality detection result is used to indicate the image quality of the multimedia resource;
an image quality enhancement module 303 configured to determine an image quality enhancement strategy based on the scene detection result and the image quality detection result, and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, where the image quality enhancement strategy includes at least one image quality enhancement algorithm.
Optionally, the scene detection result includes at least one of a day/night result, a detection result of a target object, and an exposure level, and the image quality detection result includes a noise level and/or a blur level.
Optionally, the scene and image quality module 302 is specifically configured to:
determine the corresponding image quality enhancement strategy according to the scene detection result and the image quality detection result by looking up an algorithm routing table or using an algorithm branch decision tree.
Optionally, the algorithm routing table is a routing table that includes multiple image quality enhancement strategies, and the algorithm branch decision tree is a decision tree that includes multiple branch judgment strategies.
Optionally, when the image quality enhancement strategy includes multiple image quality enhancement algorithms, the multiple image quality enhancement algorithms have an execution order.
Optionally, when the multimedia resource is a video, the scene and image quality module 302 includes:
a frame extraction unit configured to extract multiple key frames from the multimedia resource;
a detection unit configured to determine the scene detection result and the image quality detection result corresponding to the multimedia resource by detecting the multiple key frames.
Optionally, the frame extraction unit is specifically configured to:
divide the multimedia resource into multiple video segments, where the similarity between two adjacent video segments is less than a preset threshold;
extract multiple key frames from each video segment.
Optionally, the detection unit is configured to:
determine the segment scene detection result and the segment image quality detection result corresponding to each video segment by performing scene detection and image quality detection on the multiple key frames included in each video segment.
Optionally, the image quality enhancement module 303 is specifically configured to:
perform image quality enhancement processing on each video segment of the multimedia resource separately, according to the segment image quality enhancement algorithm determined by the segment scene detection result and segment image quality detection result corresponding to that video segment.
Optionally, the image quality enhancement algorithm includes at least one of a noise reduction algorithm, a color and brightness enhancement algorithm, a skin color protection algorithm, and a sharpening algorithm.
The image quality adjustment apparatus provided by the embodiments of the present disclosure can perform the image quality adjustment method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the method.
An embodiment of the present disclosure further provides a computer program product, including a computer program/instructions, where the computer program/instructions, when executed by a processor, implement the image quality adjustment method provided by any embodiment of the present disclosure.
When implemented in software, the above may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
Fig. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Referring to Fig. 6, it shows a schematic structural diagram of an electronic device 400 suitable for implementing an embodiment of the present disclosure. The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 6 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the electronic device 400 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit) 401, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage apparatus 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400. The processing apparatus 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following apparatuses may be connected to the I/O interface 405: an input apparatus 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 407 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage apparatus 408 including, for example, a magnetic tape and a hard disk; and a communication apparatus 409. The communication apparatus 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows the electronic device 400 with various apparatuses, it should be understood that it is not required to implement or have all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication apparatus 409, or installed from the storage apparatus 408, or installed from the ROM 402. When the computer program is executed by the processing apparatus 401, the above functions defined in the image quality adjustment method of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted with any suitable medium, including but not limited to: electric wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The computer-readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: acquire a multimedia resource, where the multimedia resource includes a video or an image; determine a scene detection result and an image quality detection result corresponding to the multimedia resource, where the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality detection result is used to indicate the image quality of the multimedia resource; determine an image quality enhancement strategy based on the scene detection result and the image quality detection result, and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, where the image quality enhancement strategy includes at least one image quality enhancement algorithm.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a unit does not in some cases constitute a limitation of the unit itself.
The functions described herein above may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, the present disclosure provides an image quality adjustment method, including:
acquiring a multimedia resource, where the multimedia resource includes a video or an image;
determining a scene detection result and an image quality detection result corresponding to the multimedia resource, where the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality detection result is used to indicate the image quality of the multimedia resource;
determining an image quality enhancement strategy based on the scene detection result and the image quality detection result, and performing image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, where the image quality enhancement strategy includes at least one image quality enhancement algorithm.
According to one or more embodiments of the present disclosure, in the image quality adjustment method provided by the present disclosure, the scene detection result includes at least one of a day/night result, a detection result of a target object, and an exposure level, and the image quality detection result includes a noise level and/or a blur level.
According to one or more embodiments of the present disclosure, in the image quality adjustment method provided by the present disclosure, determining the image quality enhancement strategy based on the scene detection result and the image quality detection result includes:
determining the corresponding image quality enhancement strategy according to the scene detection result and the image quality detection result by looking up an algorithm routing table or using an algorithm branch decision tree.
According to one or more embodiments of the present disclosure, in the image quality adjustment method provided by the present disclosure, the algorithm routing table is a routing table that includes multiple image quality enhancement strategies, and the algorithm branch decision tree is a decision tree that includes multiple branch judgment strategies.
According to one or more embodiments of the present disclosure, in the image quality adjustment method provided by the present disclosure, when the image quality enhancement strategy includes multiple image quality enhancement algorithms, the multiple image quality enhancement algorithms have an execution order.
According to one or more embodiments of the present disclosure, in the image quality adjustment method provided by the present disclosure, when the multimedia resource is a video, determining the scene detection result and the image quality detection result corresponding to the multimedia resource includes:
extracting multiple key frames from the multimedia resource;
determining the scene detection result and the image quality detection result corresponding to the multimedia resource by detecting the multiple key frames.
According to one or more embodiments of the present disclosure, in the image quality adjustment method provided by the present disclosure, extracting multiple key frames from the multimedia resource includes:
dividing the multimedia resource into multiple video segments, where the similarity between two adjacent video segments is less than a preset threshold;
extracting multiple key frames from each of the video segments.
According to one or more embodiments of the present disclosure, in the image quality adjustment method provided by the present disclosure, determining the scene detection result and the image quality detection result corresponding to the multimedia resource by detecting the multiple key frames includes:
determining the segment scene detection result and the segment image quality detection result corresponding to each video segment by performing scene detection and image quality detection on the multiple key frames included in each video segment.
According to one or more embodiments of the present disclosure, in the image quality adjustment method provided by the present disclosure, performing image quality enhancement processing on the multimedia resource includes:
performing image quality enhancement processing on each video segment of the multimedia resource separately, according to the segment image quality enhancement algorithm determined by the segment scene detection result and segment image quality detection result corresponding to each video segment.
According to one or more embodiments of the present disclosure, in the image quality adjustment method provided by the present disclosure, the image quality enhancement algorithm includes at least one of a noise reduction algorithm, a color and brightness enhancement algorithm, a skin color protection algorithm, and a sharpening algorithm.
According to one or more embodiments of the present disclosure, the present disclosure provides an image quality adjustment apparatus, including:
a resource acquisition module configured to acquire a multimedia resource, where the multimedia resource includes a video or an image;
a scene and image quality module configured to determine a scene detection result and an image quality detection result corresponding to the multimedia resource, where the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality detection result is used to indicate the image quality of the multimedia resource;
an image quality enhancement module configured to determine an image quality enhancement strategy based on the scene detection result and the image quality detection result, and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, where the image quality enhancement strategy includes at least one image quality enhancement algorithm.
According to one or more embodiments of the present disclosure, in the image quality adjustment apparatus provided by the present disclosure, the scene detection result includes at least one of a day/night result, a detection result of a target object, and an exposure level, and the image quality detection result includes a noise level and/or a blur level.
According to one or more embodiments of the present disclosure, in the image quality adjustment apparatus provided by the present disclosure, the scene and image quality module is specifically configured to:
determine the corresponding image quality enhancement strategy according to the scene detection result and the image quality detection result by looking up an algorithm routing table or using an algorithm branch decision tree.
According to one or more embodiments of the present disclosure, in the image quality adjustment apparatus provided by the present disclosure, the algorithm routing table is a routing table that includes multiple image quality enhancement strategies, and the algorithm branch decision tree is a decision tree that includes multiple branch judgment strategies.
According to one or more embodiments of the present disclosure, in the image quality adjustment apparatus provided by the present disclosure, when the image quality enhancement strategy includes multiple image quality enhancement algorithms, the multiple image quality enhancement algorithms have an execution order.
According to one or more embodiments of the present disclosure, in the image quality adjustment apparatus provided by the present disclosure, when the multimedia resource is a video, the scene and image quality module includes:
a frame extraction unit configured to extract multiple key frames from the multimedia resource;
a detection unit configured to determine the scene detection result and the image quality detection result corresponding to the multimedia resource by detecting the multiple key frames.
According to one or more embodiments of the present disclosure, in the image quality adjustment apparatus provided by the present disclosure, the frame extraction unit is specifically configured to:
divide the multimedia resource into multiple video segments, where the similarity between two adjacent video segments is less than a preset threshold;
extract multiple key frames from each of the video segments.
According to one or more embodiments of the present disclosure, in the image quality adjustment apparatus provided by the present disclosure, the detection unit is configured to:
determine the segment scene detection result and the segment image quality detection result corresponding to each video segment by performing scene detection and image quality detection on the multiple key frames included in each video segment.
According to one or more embodiments of the present disclosure, in the image quality adjustment apparatus provided by the present disclosure, the image quality enhancement module is specifically configured to:
perform image quality enhancement processing on each video segment of the multimedia resource separately, according to the segment image quality enhancement algorithm determined by the segment scene detection result and segment image quality detection result corresponding to each video segment.
According to one or more embodiments of the present disclosure, in the image quality adjustment apparatus provided by the present disclosure, the image quality enhancement algorithm includes at least one of a noise reduction algorithm, a color and brightness enhancement algorithm, a skin color protection algorithm, and a sharpening algorithm.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device, including:
a processor;
a memory for storing instructions executable by the processor;
the processor being configured to read the executable instructions from the memory and execute the instructions to implement any one of the image quality adjustment methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium storing a computer program, where the computer program is used to perform any one of the image quality adjustment methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, the present disclosure provides a computer program product, including a computer program/instructions, where the computer program/instructions, when executed by a processor, implement any one of the image quality adjustment methods provided by the present disclosure.
The above description is merely a description of the preferred embodiments of the present disclosure and the technical principles employed. Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (14)

  1. An image quality adjustment method, comprising:
    acquiring a multimedia resource, wherein the multimedia resource comprises a video or an image;
    determining a scene detection result and an image quality detection result corresponding to the multimedia resource, wherein the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality detection result is used to indicate the image quality of the multimedia resource;
    determining an image quality enhancement strategy based on the scene detection result and the image quality detection result, and performing image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, wherein the image quality enhancement strategy comprises at least one image quality enhancement algorithm.
  2. The method according to claim 1, wherein the scene detection result comprises at least one of a day/night result, a detection result of a target object, and an exposure level, and the image quality detection result comprises a noise level and/or a blur level.
  3. The method according to claim 1, wherein determining the image quality enhancement strategy based on the scene detection result and the image quality detection result comprises:
    determining the corresponding image quality enhancement strategy according to the scene detection result and the image quality detection result by looking up an algorithm routing table or using an algorithm branch decision tree.
  4. The method according to claim 3, wherein the algorithm routing table is a routing table comprising multiple image quality enhancement strategies, and the algorithm branch decision tree is a decision tree comprising multiple branch judgment strategies.
  5. The method according to claim 1, wherein when the image quality enhancement strategy comprises multiple image quality enhancement algorithms, the multiple image quality enhancement algorithms have an execution order.
  6. The method according to claim 1, wherein when the multimedia resource is a video, determining the scene detection result and the image quality detection result corresponding to the multimedia resource comprises:
    extracting multiple key frames from the multimedia resource;
    determining the scene detection result and the image quality detection result corresponding to the multimedia resource by detecting the multiple key frames.
  7. The method according to claim 6, wherein extracting multiple key frames from the multimedia resource comprises:
    dividing the multimedia resource into multiple video segments, wherein the similarity between two adjacent video segments is less than a preset threshold;
    extracting multiple key frames from each of the video segments.
  8. The method according to claim 7, wherein determining the scene detection result and the image quality detection result corresponding to the multimedia resource by detecting the multiple key frames comprises:
    determining the segment scene detection result and the segment image quality detection result corresponding to each video segment by performing scene detection and image quality detection on the multiple key frames included in each video segment.
  9. The method according to claim 6, wherein performing image quality enhancement processing on the multimedia resource comprises:
    performing image quality enhancement processing on each video segment of the multimedia resource separately, according to the segment image quality enhancement algorithm determined by the segment scene detection result and segment image quality detection result corresponding to each video segment.
  10. The method according to claim 1, wherein the image quality enhancement algorithm comprises at least one of a noise reduction algorithm, a color and brightness enhancement algorithm, a skin color protection algorithm, and a sharpening algorithm.
  11. An image quality adjustment apparatus, comprising:
    a resource acquisition module configured to acquire a multimedia resource, wherein the multimedia resource comprises a video or an image;
    a scene and image quality module configured to determine a scene detection result and an image quality detection result corresponding to the multimedia resource, wherein the scene detection result is used to indicate a semantic result of at least one dimension of the multimedia resource, and the image quality detection result is used to indicate the image quality of the multimedia resource;
    an image quality enhancement module configured to determine an image quality enhancement strategy based on the scene detection result and the image quality detection result, and perform image quality enhancement processing on the multimedia resource according to the image quality enhancement strategy, wherein the image quality enhancement strategy comprises at least one image quality enhancement algorithm.
  12. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the image quality adjustment method according to any one of claims 1-10.
  13. A computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program is used to perform the image quality adjustment method according to any one of claims 1-10.
  14. A computer program product, comprising a computer program/instructions, wherein the computer program/instructions, when executed by a processor, implement the image quality adjustment method according to any one of claims 1-10.
PCT/CN2022/112786 2021-08-18 2022-08-16 一种画质调节方法、装置、设备及介质 WO2023020493A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22857804.3A EP4340374A1 (en) 2021-08-18 2022-08-16 Picture quality adjustment method and apparatus, and device and medium
US18/540,532 US20240127406A1 (en) 2021-08-18 2023-12-14 Image quality adjustment method and apparatus, device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110950397.0A CN115914765A (zh) 2021-08-18 2021-08-18 一种画质调节方法、装置、设备及介质
CN202110950397.0 2021-08-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/540,532 Continuation US20240127406A1 (en) 2021-08-18 2023-12-14 Image quality adjustment method and apparatus, device, and medium

Publications (1)

Publication Number Publication Date
WO2023020493A1 true WO2023020493A1 (zh) 2023-02-23

Family

ID=85240082

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/112786 WO2023020493A1 (zh) 2021-08-18 2022-08-16 一种画质调节方法、装置、设备及介质

Country Status (4)

Country Link
US (1) US20240127406A1 (zh)
EP (1) EP4340374A1 (zh)
CN (1) CN115914765A (zh)
WO (1) WO2023020493A1 (zh)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011166315A (ja) * 2010-02-05 2011-08-25 Sharp Corp 表示装置、表示装置の制御方法、プログラム及び記録媒体
JP2012049841A (ja) * 2010-08-27 2012-03-08 Casio Comput Co Ltd 撮像装置およびプログラム
CN107846625A (zh) * 2017-10-30 2018-03-27 广东欧珀移动通信有限公司 视频画质调整方法、装置、终端设备及存储介质
CN110738611A (zh) * 2019-09-20 2020-01-31 网宿科技股份有限公司 一种视频画质增强方法、系统及设备
CN110781740A (zh) * 2019-09-20 2020-02-11 网宿科技股份有限公司 一种视频画质识别方法、系统及设备
CN111031346A (zh) * 2019-10-28 2020-04-17 网宿科技股份有限公司 一种增强视频画质的方法和装置
CN110933490A (zh) * 2019-11-20 2020-03-27 深圳创维-Rgb电子有限公司 一种画质和音质的自动调整方法、智能电视机及存储介质
CN111163349A (zh) * 2020-02-20 2020-05-15 腾讯科技(深圳)有限公司 一种画质参数调校方法、装置、设备及可读存储介质
CN113014992A (zh) * 2021-03-09 2021-06-22 四川长虹电器股份有限公司 智能电视的画质切换方法及装置

Also Published As

Publication number Publication date
CN115914765A (zh) 2023-04-04
US20240127406A1 (en) 2024-04-18
EP4340374A1 (en) 2024-03-20

Similar Documents

Publication Publication Date Title
CN109308490B (zh) 用于生成信息的方法和装置
CN111651636B (zh) 视频相似片段搜索方法及装置
CN111522996B (zh) 视频片段的检索方法和装置
CN109961032B (zh) 用于生成分类模型的方法和装置
CN110991373A (zh) 图像处理方法、装置、电子设备及介质
CN111784712B (zh) 图像处理方法、装置、设备和计算机可读介质
CN110349161B (zh) 图像分割方法、装置、电子设备、及存储介质
CN111246287A (zh) 视频处理方法、发布方法、推送方法及其装置
WO2023125750A1 (zh) 一种图像去噪方法、装置和存储介质
CN113784171A (zh) 视频数据处理方法、装置、计算机系统及可读存储介质
CN113033677A (zh) 视频分类方法、装置、电子设备和存储介质
WO2023274005A1 (zh) 图像处理方法、装置、电子设备和存储介质
CN113610034A (zh) 识别视频中人物实体的方法、装置、存储介质及电子设备
CN110852250B (zh) 一种基于最大面积法的车辆排重方法、装置及存储介质
CN113158773A (zh) 一种活体检测模型的训练方法及训练装置
WO2023088029A1 (zh) 一种封面生成方法、装置、设备及介质
WO2023020493A1 (zh) 一种画质调节方法、装置、设备及介质
CN110765304A (zh) 图像处理方法、装置、电子设备及计算机可读介质
US20220108427A1 (en) Method and an electronic device for detecting and removing artifacts/degradations in media
CN115830362A (zh) 图像处理方法、装置、设备、介质及产品
CN115546554A (zh) 敏感图像的识别方法、装置、设备和计算机可读存储介质
CN113705386A (zh) 视频分类方法、装置、可读介质和电子设备
CN113409199A (zh) 图像处理方法、装置、电子设备及计算机可读介质
CN112418233A (zh) 图像处理方法、装置、可读介质及电子设备
CN112312200A (zh) 视频封面生成方法、装置和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22857804

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022857804

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2023577180

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2022857804

Country of ref document: EP

Effective date: 20231212

NENP Non-entry into the national phase

Ref country code: DE