WO2022057789A1 - Video definition identification method, electronic device and storage medium - Google Patents

Video definition identification method, electronic device and storage medium

Info

Publication number
WO2022057789A1
WO2022057789A1 (PCT/CN2021/118231)
Authority
WO
WIPO (PCT)
Prior art keywords
frame image
preset
definition
index value
video
Prior art date
Application number
PCT/CN2021/118231
Other languages
English (en)
French (fr)
Inventor
崔英林
Original Assignee
上海连尚网络科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海连尚网络科技有限公司 filed Critical 上海连尚网络科技有限公司
Publication of WO2022057789A1 publication Critical patent/WO2022057789A1/zh

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a picture, frame or field
    • H04N19/48 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using compressed domain processing techniques other than decoding, e.g. modification of transform coefficients, variable length coding [VLC] data or run-length data
    • H04N19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Definitions

  • the present application relates to video processing technologies, and in particular, to a video definition identification method, an electronic device, and a computer-readable storage medium.
  • Video producers and users generate a large number of videos with rich and diverse content every day, covering movies, TV series, animation, variety shows, daily life, music and so on, and these videos are uploaded to various video websites and self-media platforms for users to watch. Due to the influence of video shooting equipment, shooting technique and other factors, the quality of the videos produced by different producers and users differs; in particular, videos shot by users in daily life are affected by camera performance, shooting stability, shooting technique and so on, resulting in poor video definition, which affects the video quality.
  • Various aspects of the present application provide a video definition identification method, an electronic device, and a computer-readable storage medium for identifying the definition of a video.
  • One aspect of the present application provides a method for identifying video definition, comprising: for each frame image of a plurality of frame images in a video to be identified, obtaining the resolution of the frame image; calculating a definition index value of the frame image based on a preset definition identification algorithm; compressing the frame image to a preset low-quality standard using a preset compression method to obtain a compressed frame image; calculating a definition index value of the compressed frame image based on the preset definition identification algorithm; determining the definition of the frame image based on the resolution of the frame image, the bit rate of the frame image and the bit rate of the compressed frame image, and the definition index value of the frame image and the definition index value of the compressed frame image; and determining the definition of the video to be identified based on the definitions of the plurality of frame images.
  • Another aspect of the present application provides an electronic device, the electronic device comprising: one or more processors; and a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method provided by the above aspect.
  • Another aspect of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the method provided in the above aspect.
  • It can be seen from the above technical solutions that, in the embodiments of the present application, for each frame image of a plurality of frame images in the video to be identified, the resolution of the frame image is obtained and a definition index value of the frame image is calculated based on a preset definition identification algorithm; the frame image is then compressed to a preset low-quality standard using a preset compression method to obtain a compressed frame image, and a definition index value of the compressed frame image is calculated based on the preset definition identification algorithm; the definition of the frame image is determined based on the resolution of the frame image, the definition index value of the frame image and the definition index value of the compressed frame image; and the definition of the video to be identified is then determined based on the definitions of the plurality of frame images. Therefore, based on the resolution of the frame images in the video, the embodiments of the present application use the definition index values before and after compression of the frame images to identify the video definition, making it possible to identify whether the definition of any video meets the requirements.
  • In addition, with the technical solution provided by the present application, the definition of a video is determined based on the resolution of its frame images and on the definition index values before and after compression, which enables a unified standard for identifying the definition of different videos and allows the definition of different videos to be compared objectively, making the measurement of video definition more objective and uniform.
  • In addition, with the technical solution provided by the present application, videos of different definition can be screened based on the unified standard, so that when recommending videos to a user, only videos whose definition meets the requirements are recommended, which improves the viewing experience and saves user traffic.
  • FIG. 1 is a schematic flowchart of a method for identifying video clarity provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a specific example of determining the definition of the frame image in an embodiment of the application
  • FIG. 3 is a diagram of a specific application example of determining whether the definition of a frame image satisfies a preset definition standard in the embodiment shown in FIG. 2;
  • FIG. 4 is a schematic flowchart of a method for identifying video clarity provided by another embodiment of the present application.
  • FIG. 5 is a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present application.
  • It should be noted that the terminals involved in the embodiments of the present application may include, but are not limited to, mobile phones, personal digital assistants (PDAs), wireless handheld devices, tablet computers, personal computers (PCs), MP3 players, MP4 players, wearable devices (e.g., smart glasses, smart watches, smart bracelets, etc.), and the like.
  • FIG. 1 is a schematic flowchart of a method for identifying video sharpness provided by an embodiment of the present application, as shown in FIG. 1 .
  • The plurality of frame images may be all the frame images in the video to be identified, that is, 101 to 105 are performed for every frame image in the video to determine its definition; or they may be multiple frame images extracted from the video to be identified according to a certain rule, for example by extracting one frame image every several frames or by randomly extracting multiple frame images, after which 101 to 105 are performed for each extracted frame image to determine its definition.
  • This embodiment of the present application does not limit whether the plurality of frame images are all the frame images in the video to be identified, nor their specific number or the extraction method.
  • the video to be identified in this embodiment of the present application may be a video encoded using any video encoding standard and any format, for example, a video obtained by encoding original video data based on the commonly used H.264/AVC video encoding standard.
  • the embodiments of the present application do not limit the encoding standard and encoding format of the video to be identified.
  • The preset definition identification algorithm in this embodiment of the present application may be any algorithm that can calculate definition, which may include, but is not limited to, the following: an edge detection algorithm (the Canny algorithm), the Laplacian algorithm, the Brenner gradient evaluation function, the Tenengrad gradient function, and so on. The embodiments of the present disclosure do not limit the specific definition identification algorithm used.
  • The pixels in the frame image may be processed with the preset definition identification algorithm, and the average value obtained over the pixels of the frame image may be used as the definition index value.
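  • As a concrete illustration of this step, the following is a minimal sketch of computing such a per-frame definition index value, assuming the Canny edge detector is the chosen preset algorithm and that OpenCV is available; the Canny thresholds and the use of the mean edge response over all pixels are illustrative assumptions rather than values taken from the application.

```python
import cv2
import numpy as np

def definition_index(frame_bgr: np.ndarray) -> float:
    """Return the average Canny response over all pixels of a frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)   # edge map with values in {0, 255}; thresholds are assumed
    return float(edges.mean())          # per-pixel average used as the definition index value
```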
  • Using a preset compression method, compress the frame image to a preset low-quality standard to obtain a compressed frame image.
  • For example, in some implementations, the average of the definitions of the plurality of frame images may be used as the definition of the video to be identified. Alternatively, in other implementations, whether the definition of the video to be identified meets a preset definition standard may be determined according to whether any of the plurality of frame images fails to meet the preset definition standard, or according to whether the proportion of frame images that fail to meet the preset definition standard among the plurality of frame images exceeds a certain threshold (for example, 10%).
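  • For illustration, a minimal sketch of the proportion-based aggregation described above follows; the function name, the boolean per-frame results and the 10% default are assumptions used only to show the idea.

```python
from typing import List

def video_meets_standard(frame_ok: List[bool], max_bad_ratio: float = 0.10) -> bool:
    """True if the share of frames failing the preset definition standard stays within the threshold."""
    bad = sum(1 for ok in frame_ok if not ok)
    return (bad / len(frame_ok)) <= max_bad_ratio
```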
  • It should be noted that the execution bodies of 101 to 106 may be, in part or in whole, an application located in the terminal, a functional unit such as a plug-in or a software development kit (SDK) set in an application of the terminal, or an application located in a network-side server (for example, a video website or a self-media platform); this is not particularly limited in this embodiment of the present application.
  • It can be understood that the application may be a native program (nativeApp) installed on the terminal or the network-side server, or a web program (webApp) running in a browser on the terminal or the network-side server, which is not limited in this embodiment of the present application.
  • In this way, based on the resolution of the frame images in the video, the definition index values before and after frame-image compression are used to identify the video definition, so that it can be identified whether the definition of any video meets the requirements.
  • In addition, with the technical solution provided by the present application, the definition of a video is determined based on the resolution of its frame images and on the definition index values before and after compression, which enables a unified standard for identifying the definition of different videos and allows the definition of different videos to be compared objectively, making the measurement of video definition more objective and uniform.
  • In addition, with the technical solution provided by the present application, videos of different definition can be screened based on the unified standard, so that when recommending videos to a user, only videos whose definition meets the requirements are recommended, which improves the viewing experience and saves user traffic.
  • the resolution range of the frame image may also be determined based on the resolution of the frame image.
  • Correspondingly, in 105, the definition of the frame image may be determined according to the resolution range of the frame image, based at least on the definition index value of the frame image and the definition index value of the compressed frame image.
  • The resolution of a video image is the number of pixels contained per inch (Pixels Per Inch, PPI). Resolution affects image size and is proportional to it: at a given bit rate, the higher the resolution, the larger the image, and the lower the resolution, the smaller the image. Also at a given bit rate, resolution is inversely related to definition: the higher the resolution, the less clear the image, and the lower the resolution, the clearer the image.
  • the definition of the frame image can be determined by a corresponding calculation method according to the resolution range of the frame image, which improves the efficiency and accuracy of obtaining the definition of the frame image.
  • FIG. 2 is a schematic flowchart of a specific example of determining the definition of the frame image according to an embodiment of the present application. As shown in FIG. 2 , on the basis of the embodiment shown in FIG. 1 , based on the resolution range of the frame image, at least based on the sharpness index value of the frame image and the sharpness index value of the compressed frame image , to determine the definition of the frame image, which can be achieved in the following ways:
  • Determine the resolution range of the frame image. If the resolution of the frame image is greater than a first preset resolution (for example, 1000 PPI), perform 201 to 202; if the resolution of the frame image is greater than a second preset resolution (for example, 700 PPI) and not greater than the first preset resolution (for example, 1000 PPI), perform 203 to 205, where the second preset resolution is smaller than the first preset resolution; if the resolution of the frame image is greater than a third preset resolution (for example, 480 PPI) and not greater than the second preset resolution (for example, 700 PPI), perform 206 to 207, where the third preset resolution is smaller than the second preset resolution.
  • 201: Calculate the rate of change of the definition index value based on the definition index value of the frame image and the definition index value of the compressed frame image.
  • For example, in some implementations, if the preset definition identification algorithm is the Canny algorithm, the definition index value of the frame image obtained in 102 can be expressed as the Canny average value b, and the definition index value of the compressed frame image calculated in 104 can be expressed as the Canny average value a; the rate of change of the definition index value is then the Canny change rate m, which may be calculated as m = (b - a) / b.
  • 202: Determine whether the definition of the frame image meets the preset definition standard based on whether the rate of change of the definition index value is greater than a first preset change rate (for example, 0.05).
  • Specifically, when the rate of change of the definition index value is greater than the first preset change rate, it may be determined that the definition of the frame image meets the preset definition standard; otherwise, when it is not greater than the first preset change rate, it is determined that the definition of the frame image does not meet the preset definition standard.
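  • A minimal sketch of 201 to 202 under the Canny-average convention above: b and a are the definition index values before and after compression, and the 0.05 default mirrors the example first preset change rate; the function names are illustrative assumptions.

```python
def definition_change_rate(b: float, a: float) -> float:
    """Rate of change of the definition index value, m = (b - a) / b."""
    return (b - a) / b

def high_res_frame_ok(b: float, a: float, first_preset_rate: float = 0.05) -> bool:
    """201-202: a high-resolution frame meets the standard when m exceeds the first preset change rate."""
    return definition_change_rate(b, a) > first_preset_rate
```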
  • 203: Calculate a bit rate change rate based on the bit rate of the frame image and the bit rate of the compressed frame image, and calculate the rate of change of the definition index value based on the definition index value of the frame image and the definition index value of the compressed frame image.
  • The bit rate (code rate) is the amount of data produced by the encoder per second, in kbps; for example, 800 kbps means the encoder generates 800 kb (that is, 100 KB) of data per second. At a given resolution, the bit rate is proportional to the definition: the higher the bit rate, the clearer the image, and the lower the bit rate, the less clear the image.
  • For example, in some implementations, if the bit rate of the video to be identified or of the frame image before compression is e and the bit rate of the corresponding compressed frame image is f, the bit rate change rate n may be calculated as n = (e - f) / e.
  • 204: Calculate a comprehensive change rate of the bit rate and the definition index value based on the bit rate change rate and the rate of change of the definition index value.
  • 205: Determine whether the definition of the frame image meets the preset definition standard based on whether the comprehensive change rate is greater than a second preset change rate (for example, 0.5). Specifically, when the comprehensive change rate is greater than the second preset change rate, it may be determined that the definition of the frame image meets the preset definition standard; otherwise, when the comprehensive change rate is not greater than the second preset change rate, it is determined that the definition of the frame image does not meet the preset definition standard.
  • 201, 203 and 206 are alternative operations, one of which is performed depending on the resolution range of the frame image.
  • Based on this embodiment, a way of determining the definition of a frame image is provided for each of the different resolution ranges, so that a unified standard can be applied to the frame images in each resolution range to quickly and accurately determine whether their definition meets the preset definition standard, which improves the efficiency and accuracy of determining whether the definition of a frame image meets the requirements.
  • the comprehensive change rate may be obtained by performing weighted calculation on the code rate change rate and the change rate of the definition index value.
  • the weight of the definition index value is greater than the weight of the code rate change rate.
  • Based on this embodiment, since the rate of change of the definition index value is usually smaller than the bit rate change rate, weighting the two with a larger weight on the rate of change of the definition index value yields a comprehensive change rate that reflects more objectively and accurately how the frame image changes before and after compression; basing the determination on this comprehensive change rate helps to determine more objectively and accurately whether the definition of the frame image meets the preset definition standard.
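  • A minimal sketch of this weighted combination follows, following the example weighting t = (m * k + n) / 2 with k > 1 given in the description; the default k = 2.0 is an illustrative assumption.

```python
def comprehensive_change_rate(m: float, n: float, k: float = 2.0) -> float:
    """Combine the definition-index change rate m and the bit-rate change rate n, weighting m more heavily (k > 1)."""
    return (m * k + n) / 2.0
```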
  • Optionally, in some implementations, in 207, whether the bit rate of the frame image is greater than a preset bit rate (for example, 650 kbps) may be compared. If the bit rate of the frame image is greater than the preset bit rate, whether the definition of the frame image meets the preset definition standard is determined based on whether the rate of change of the definition index value is greater than a third preset change rate (for example, 0.05): if it is greater, the definition of the frame image meets the preset definition standard; otherwise, it does not.
  • If the bit rate of the frame image is not greater than the preset bit rate (for example, 650 kbps), the determination is instead based on whether the rate of change of the definition index value is greater than a fourth preset change rate (for example, 0.1): if it is greater, the definition of the frame image meets the preset definition standard; otherwise, it does not. The fourth preset change rate is greater than the third preset change rate.
  • Based on this embodiment, when the resolution of the frame image is greater than the third preset resolution (for example, 480 PPI) and not greater than the second preset resolution (for example, 700 PPI), and the bit rate of the frame image is greater than the preset bit rate (for example, 650 kbps), the frame image is relatively clear; in this case, whether the definition of the frame image meets the preset definition standard can be determined based only on the bit rate of the frame image and the rate of change of the definition index value, which improves the efficiency of determining the definition of the frame image.
  • Further optionally, referring again to FIG. 2, determining the definition of the frame image according to its resolution range may further include: if the resolution of the frame image is not greater than a fourth preset resolution (for example, 480 PPI), it may be directly determined that the definition of the video to be identified does not meet the preset definition standard, thereby improving the efficiency of definition identification. The fourth preset resolution is smaller than the third preset resolution.
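  • Putting the branches of FIG. 2 together, the following sketch routes a frame through the resolution ranges using the example thresholds quoted above (1000/700/480 PPI, change rates 0.05, 0.5, 0.05 and 0.1, and a 650 kbps preset bit rate); the function name, the weighting factor k and the exact return convention are illustrative assumptions.

```python
def frame_meets_standard(resolution_ppi: float, bitrate_kbps: float,
                         m: float, n: float, k: float = 2.0) -> bool:
    """Decide whether one frame meets the preset definition standard, m and n as defined above."""
    if resolution_ppi > 1000:          # 201-202: high resolution
        return m > 0.05
    if resolution_ppi > 700:           # 203-205: medium-high resolution
        t = (m * k + n) / 2.0          # comprehensive change rate
        return t > 0.5
    if resolution_ppi > 480:           # 206-207: medium resolution
        if bitrate_kbps > 650:
            return m > 0.05            # third preset change rate
        return m > 0.1                 # fourth preset change rate
    return False                       # resolution not above the lowest preset resolution
```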
  • FIG. 3 is a diagram of a specific application example of determining whether the definition of a frame image meets the preset definition standard in the embodiment shown in FIG. 2.
  • Optionally, in some implementations, whether the definition of the video to be identified meets the preset definition standard may be determined based on whether the definitions of the plurality of frame images meet the preset definition standard. For example, if any frame image does not meet the preset definition standard, the video to be identified is determined not to meet it; or, if the proportion of frame images that do not meet the preset definition standard among the plurality of frame images is greater than a certain threshold (for example, 10%), the video to be identified is determined not to meet the preset definition standard, and otherwise it is determined to meet it.
  • Optionally, in some implementations, in 103, the video compression tool FFmpeg may be used, with the constant quality coding parameter (CRF) set to a value greater than 28 and not greater than 51, to compress the frame image and obtain the compressed frame image.
  • CRF is a parameter of the constant-quality encoding mode. Generally, to obtain constant-quality encoding, every frame image of the same type is compressed by the same amount, that is, a relatively equal amount of information is discarded from each, which means the same quantization parameter (QP) is used. The quantization parameter QP defines how much information is discarded from a macroblock of pixels.
  • CRF ranges from 0 to 51, where 0 is lossless mode; the larger the value, the worse the image quality and the smaller the generated file. A value between 18 and 28 is a reasonable range: 18 is considered visually lossless, with output quality comparable to the input video, while above 28 the image begins to suffer visible loss. This embodiment therefore compresses the frame image with a CRF value greater than 28 and not greater than 51, so that the resulting compressed frame image is a visually lossy image.
  • At this point, the rate of change of the definition index value before and after compression of the frame image can be used to determine the quality (that is, the definition) of the frame image: the larger the rate of change, the higher the quality of the original frame image (that is, the clearer it is), and the smaller the rate of change, the lower the quality of the original frame image (that is, the more blurred it is). If the rate of change of the definition index value before and after compression is small, that is, less than a certain preset value (corresponding to the first, second, third or fourth preset change rate mentioned above), it can basically be determined that the original frame image is itself blurred.
  • Optionally, in some implementations, a CRF value of 38 is used to compress the frame image. Under this standard, regardless of whether the original frame image is clear, the resulting compressed frame image is essentially visually blurred; the rates of change of the definition index values between the frame images of the original video and of the compressed video are then compared, and if the rate of change is small, it can basically be determined that the original video is also blurred.
  • Optionally, in some implementations, in 103, a preset compression method may be used to compress the entire video to be identified to the preset low-quality standard to obtain a compressed video, where the compressed video includes the compressed image of each frame image, that is, it includes the compressed image corresponding to the frame image. Optionally, FFmpeg may be used, with the constant quality coding parameter CRF set to a value greater than 28 and not greater than 51, to compress the video to be identified to the preset low-quality standard and obtain the compressed video.
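  • The compression step itself can be reproduced with the FFmpeg command-line tool; the sketch below re-encodes an input (a whole video, or a single frame written out as a one-frame clip) at a CRF above 28, with 38 as the default as in the example above. The libx264 codec choice, the file paths and the use of Python's subprocess module are assumptions for illustration.

```python
import subprocess

def compress_to_low_quality(src: str, dst: str, crf: int = 38) -> None:
    """Re-encode a video at a lossy constant-quality setting (CRF > 28)."""
    assert 28 < crf <= 51, "the description uses a CRF greater than 28 and not greater than 51"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", str(crf), dst],
        check=True,
    )
```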
  • FIG. 4 is a schematic flowchart of a video definition identification method provided by another embodiment of the present application, as shown in FIG. 4 .
  • the video to be identified in this embodiment of the present application may be a video encoded using any video encoding standard and any format, for example, a video obtained by encoding original video data based on the commonly used H.264/AVC video encoding standard.
  • the embodiments of the present application do not limit the encoding standard and encoding format of the video to be identified.
  • The bit rate of each frame image in the video to be identified is the same, so the bit rate of the video to be identified can be obtained directly from the encoder and used as the bit rate of each frame image in it.
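  • As a sketch of obtaining that single video-level bit rate, the following uses ffprobe to read the container's reported bit rate; reading it from the format section (rather than a per-stream field) is an assumption, and some containers report it per stream instead.

```python
import subprocess

def video_bitrate_kbps(path: str) -> float:
    """Return the container-level bit rate of a video in kbps, as reported by ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=bit_rate",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return float(out) / 1000.0   # ffprobe reports bits per second
```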
  • operations 402 and 403 are not limited in the execution order, and they may be executed simultaneously, or in any order, or with any time difference, which is not limited in this embodiment of the present application.
  • Using a preset compression method, compress the video to be identified to a preset low-quality standard to obtain a compressed video.
  • the compressed video includes the compressed image corresponding to the frame image.
  • operations 405 and 406 do not have a limit on the execution order, and they may be executed simultaneously, or in any order, or with any time difference, which is not limited in this embodiment of the present application.
  • In addition, with the technical solution provided by the present application, determining whether the definition of a video meets the preset definition standard allows videos of different definition to be screened based on a unified standard, which helps to save the precious and limited storage and maintenance resources of video websites and self-media platforms and improves resource utilization.
  • In addition, with the technical solution provided by the present application, determining whether the definition of a video meets the preset definition standard allows videos of different definition to be screened based on a unified standard, so that when recommending videos to a user, only videos whose definition meets the requirements are recommended, improving the viewing experience and saving user traffic.
  • Further optionally, after it is determined in 408 whether the definition of the video to be identified meets the preset definition standard, if it does not, the video to be identified can be discarded directly, thereby saving storage and maintenance resources; if it does, the video to be identified may be stored for further recommendation to users.
  • The technical solutions of the present application can be applied in applications on any device such as a terminal or a video server (for example, a video website or a self-media platform), for example in any video data processing or playback application. When such an application performs the video definition identification method provided by the embodiments of the present application, the definition index values before and after frame-image compression are used, based on the resolution of the frame images in the video, to identify the video definition, so that it can be identified whether the definition of any video meets the requirements.
  • In addition, with the technical solution provided by the present application, the definition of a video is determined based on the resolution of its frame images and on the definition index values before and after compression, which enables a unified standard for identifying the definition of different videos and allows the definition of different videos to be compared objectively, making the measurement of video definition more objective and uniform.
  • In addition, with the technical solution provided by the present application, videos of different definition can be screened based on the unified standard, so that when recommending videos to a user, only videos whose definition meets the requirements are recommended, which improves the viewing experience and saves user traffic.
  • Figure 5 shows a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present application.
  • the computer system/server 12 shown in FIG. 5 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present application.
  • computer system/server 12 takes the form of a general-purpose computing device.
  • Components of computer system/server 12 may include, but are not limited to, one or more processors or processing units 16, storage or system memory 28, and a bus 18 connecting various system components including system memory 28 and processing unit 16.
  • Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
  • these architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MAC) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect ( PCI) bus.
  • Computer system/server 12 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by computer system/server 12, including both volatile and non-volatile media, removable and non-removable media.
  • System memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32 .
  • Computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • storage system 34 may be used to read and write to non-removable, non-volatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard disk drive”).
  • Although not shown in FIG. 5, a disk drive may be provided for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), as well as an optical disc drive for reading from and writing to a removable non-volatile optical disc (e.g., a CD-ROM, DVD-ROM or other optical media).
  • each drive may be connected to bus 18 through one or more data media interfaces.
  • System memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to perform the functions of various embodiments of the present application.
  • a program/utility 40 having a set (at least one) of program modules 42, which may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other Program modules and program data, each or some combination of these examples may include an implementation of a network environment.
  • Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
  • The computer system/server 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the computer system/server 12 to communicate with one or more other computing devices. Such communication may take place through input/output (I/O) interfaces 44. The computer system/server 12 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 20. As shown in FIG. 5, the network adapter 20 communicates with the other modules of the computer system/server 12 via the bus 18.
  • It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with the computer system/server 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
  • the processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28 , for example, implements the method provided by any of the embodiments corresponding to FIG. 1 to FIG. 4 .
  • Another embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the method provided by any of the embodiments corresponding to FIG. 1 to FIG. 4.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples (a non-exhaustive list) of computer readable storage media include: electrical connections having one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), Erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer readable program code embodied thereon. Such propagated data signals may take a variety of forms including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • For example, multiple units or page components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
  • the above-mentioned integrated units implemented in the form of software functional units can be stored in a computer-readable storage medium.
  • The above software functional unit is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute some of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive (U disk), a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video definition identification method, an electronic device and a computer-readable storage medium. For each frame image of a plurality of frame images in a video to be identified, the resolution of the frame image is obtained and a definition index value of the frame image is calculated based on a preset definition identification algorithm; the frame image is then compressed to a preset low-quality standard using a preset compression method to obtain a compressed frame image, and a definition index value of the compressed frame image is calculated based on the preset definition identification algorithm; the definition of the frame image is determined based on the resolution of the frame image, the definition index value of the frame image and the definition index value of the compressed frame image; and the definition of the video to be identified is then determined based on the definitions of the plurality of frame images. Identification of video definition is thereby achieved, and it can be identified whether the definition of any video meets the requirements.

Description

视频清晰度识别方法、电子设备及存储介质
本申请是以CN申请号为202010982081.5,申请日为2020.09.17的申请为基础,并主张其优先权,该CN申请的公开内容在此作为整体引入本申请中。
技术领域
本申请涉及视频处理技术,尤其涉及一种视频清晰度识别方法、电子设备及计算机可读存储介质。
背景技术
目前,随着音视频技术和自媒体技术的发展,各种视频网站、自媒体平台不断涌现。视频制造商、用户每天会产生海量内容丰富多元的各种视频,涵盖电影、电视剧、动漫、综艺、生活、音乐等,这些视频会上传各种视频网站、自媒体平台,供用户观看。由于视频拍摄设备、拍摄技术等影响,不同视频制造商、用户产生的视频质量不同,尤其是用户在日常生活中拍摄的视频,受摄像头性能、拍摄稳定性、拍摄技术等影响,会导致视频清晰度较差,从而影响了视频质量。
由于视频的数据量较大,会占用大量的存储资源和维护资源,如果对视频清晰度不加识别而存储所有上传的视频,会浪费视频网站、自媒体平台宝贵而有限的存储资源和维护资源;另外,如果对视频清晰度不加识别,在向用户推荐视频时,很容易将清晰度较低的视频推荐给用户,影响用户观看体验、并且浪费了用户流量,导致用户体验较差。
现有技术中,只能识别图片或同一图片上不同区域的清晰度,而无法识别视频、尤其是不同视频的清晰度,因此,无法基于统一标准对不同清晰度的视频进行筛选。
发明内容
本申请的多个方面提供一种视频清晰度识别方法、电子设备及计算机可读存储介质,用以识别视频的清晰度。
本申请的一方面,提供一种视频清晰度识别方法,包括:
分别针对待识别视频中多个帧图像中的每个帧图像,获取所述帧图像的分辨率;
基于预设清晰度识别算法,计算所述帧图像的清晰度指标值;
采用预设压缩方式,将所述帧图像压缩到预设低质量标准,得到压缩后帧图像;
基于所述预设清晰度识别算法,计算所述压缩后帧图像的清晰度指标值;
基于所述帧图像的分辨率、所述帧图像的码率和所述压缩后帧图像的码率、所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值,确定所述帧图像的清晰度;
基于所述多个帧图像的清晰度,确定所述待识别视频的清晰度。
本申请的另一方面,提供一种电子设备,所述电子设备包括:
一个或多个处理器;
存储装置,用于存储一个或多个程序,
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如上述一方面所提供的方法。
本申请的另一方面,提供一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如上述一方面所提供的方法。
由上述技术方案可知,在本申请实施例中,可以分别针对待识别视频中多个帧图像中的每个帧图像,获取所述帧图像的分辨率,并基于预设清晰度识别算法计算所述帧图像的清晰度指标值,然后,采用预设压缩方式,将所述帧图像压缩到预设低质量标准,得到压缩后帧图像,并基于预设清晰度识别算法计算所述压缩后帧图像的清晰度指标值,然后,基于所述帧图像的分辨率、所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值,确定所述帧图像的清晰度,进而,基于所述多个帧图像的清晰度,确定所述待识别视频的清晰度。由此,本申请实施例基于视频中帧图像的分辨率,采用帧图像压缩前后的清晰度指标值,实现了对视频清晰度的识别,可以识别任一视频的清晰度是否满足要求。
另外,采用本申请所提供的技术方案,基于视频中帧图像的分辨率,基于视频压缩前后的清晰度指标值,来确定视频的清晰度,实现了对不同视频清晰度的统一标准识别,可以客观比较不同视频的清晰度,使得对视频清晰度的衡量更客观、统一。
另外,采用本申请所提供的技术方案,确定视频的清晰度,可以基于统一标准对不同清晰度的视频进行筛选,有助于节省视频网站、自媒体平台宝贵而有限的存储资源和维护资源,提高资源利用率。
另外,采用本申请所提供的技术方案,确定视频的清晰度,可以基于统一标准对不 同清晰度的视频进行筛选,这样,在向用户推荐视频时,可以仅向用户推荐清晰度满足要求的视频,提高用户观看体验、节省用户流量。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其它的附图。
图1为本申请一实施例提供的视频清晰度识别方法的流程示意图;
图2为本申请一实施例中确定所述帧图像的清晰度一个具体示例的流程示意图;
图3为图2所示实施例中确定帧图像的清晰度是否满足预设清晰度标准的一个具体应用示例图;
图4为本申请另一实施例提供的视频清晰度识别方法的流程示意图;
图5为适于用来实现本申请实施方式的示例性计算机系统/服务器12的框图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的全部其它实施例,都属于本申请保护的范围。
需要说明的是,本申请实施例中所涉及的终端可以包括但不限于手机、个人数字助理(Personal Digital Assistant,PDA)、无线手持设备、平板电脑(Tablet Computer)、个人电脑(Personal Computer,PC)、MP3播放器、MP4播放器、可穿戴设备(例如,智能眼镜、智能手表、智能手环等)等。
另外,本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
如背景技术中所述,现有技术中,只能识别图片或同一图片上不同区域的清晰度,而无法识别视频、尤其是不同视频的清晰度,因此,无法基于统一标准对不同 清晰度的视频进行筛选。
因此,亟需提供一种对视频清晰度进行统一识别的标准和方法,以提高视频中物品信息的显示效果,以识别视频清晰度是否满足要求。
图1为本申请一实施例提供的视频清晰度识别方法的流程示意图,如图1所示。
101、分别针对待识别视频中多个帧图像中的每个帧图像,获取所述帧图像的分辨率。
其中的多个帧图像,可以是待识别视频中的所有帧图像,即针对待识别视频中的每一帧图像,都执行101~105确定清晰度;或者也可以是从待识别视频按照一定规则抽取的多个帧图像,例如可以采用每隔若干帧图像抽取一帧图像的方式,也可以采用随机抽取多帧图像的方式,从待识别视频中抽取图像得到所述多个帧图像,分别针对每一帧图像执行101~105确定清晰度。本申请实施例对所述多个帧图像是否待识别视频中的全部帧图像、具体数量和抽取方式不做限制。
本申请实施例中的待识别视频,可以为采用任意视频编码标准、任意格式编码得到的视频,例如,基于常用的H.264/AVC视频编码标准,对原始视频数据编码得到的视频。本申请实施例对待识别视频的编码标准和编码格式不做限制。
102、基于预设清晰度识别算法,计算所述帧图像的清晰度指标值。
本申请实施例中的预设清晰度识别算法,可以是任意可以计算清晰度的算法,例如可以包括但不限于以下算法:边缘检测算法(canny算法),拉普拉斯(laplas)算法,梯度评价函数(Brenner),梯度函数(Tenengrad),等等,本公开实施例对具体采用的清晰度识别算法不做限制。可以利用预设清晰度识别算法对所述帧图像中的像素进行计算处理,以针对所述帧图像中像素计算处理得到的平均值作为清晰度指标值。
103、采用预设压缩方式,将所述帧图像压缩到预设低质量标准,得到压缩后帧图像。
104、基于所述预设清晰度识别算法,计算所述压缩后帧图像的清晰度指标值。
105、基于所述帧图像的分辨率、所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值,确定所述帧图像的清晰度。
106、基于所述多个帧图像的清晰度,确定所述待识别视频的清晰度。
例如,在其中一些实现方式中,可以基于所述多个帧图像的清晰度的平均值作为所述待识别视频的清晰度。
或者,在另一些实现方式中,也可以根据所述多个帧图像的清晰度是否存在不 满足预设清晰度标准的帧图像,来确定所述待识别视频的清晰度是否满足预设清晰度标准,若存在不满足预设清晰度标准的帧图像,则确定所述待识别视频的清晰度不满足预设清晰度标准;否则,若不存在不满足预设清晰度标准的帧图像,则确定所述待识别视频的清晰度满足预设清晰度标准。
或者,在又一些实现方式中,也可以根据所述多个帧图像的清晰度中不满足预设清晰度标准的帧图像与所述多个帧图像的比例是否大于一定阈值(例如10%),来确定所述待识别视频的清晰度是否满足预设清晰度标准,若所述比例大于一定阈值,则确定所述待识别视频的清晰度不满足预设清晰度标准;否则,若所述比例不大于一定阈值,则确定所述待识别视频的清晰度满足预设清晰度标准。
需要说明的是,101~106的执行主体的部分或全部可以为位于终端的应用,或者还可以为设置终端的应用中的插件或软件开发工具包(Software Development Kit,SDK)等功能单元,或者还可以为位于网络侧服务器(例如视频网站、自媒体平台)中的应用,本申请实施例对此不进行特别限定。
可以理解的是,所述应用可以是安装在终端或网络侧服务器上的本地程序(nativeApp),或者还可以是终端或网络侧服务器上的浏览器的一个网页程序(webApp),本申请实施例对此不进行限定。
这样,基于视频中帧图像的分辨率,采用帧图像压缩前后的清晰度指标值,实现了对视频清晰度的识别,可以识别任一视频的清晰度是否满足要求。
另外,采用本申请所提供的技术方案,基于视频中帧图像的分辨率,基于视频压缩前后的清晰度指标值,来确定视频的清晰度,实现了对不同视频清晰度的统一标准识别,可以客观比较不同视频的清晰度,使得对视频清晰度的衡量更客观、统一。
另外,采用本申请所提供的技术方案,确定视频的清晰度,可以基于统一标准对不同清晰度的视频进行筛选,有助于节省视频网站、自媒体平台宝贵而有限的存储资源和维护资源,提高资源利用率。
另外,采用本申请所提供的技术方案,确定视频的清晰度,可以基于统一标准对不同清晰度的视频进行筛选,这样,在向用户推荐视频时,可以仅向用户推荐清晰度满足要求的视频,提高用户观看体验、节省用户流量。
可选地,在其中一些实现方式中,通过101获取到所述帧图像的分辨率之后,还可以基于所述帧图像的分辨率确定所述帧图像的分辨率范围。相应地,在105中,可以基于所述帧图像的分辨率范围,至少基于所述帧图像的清晰度指标值和所述压 缩后帧图像的清晰度指标值,确定所述帧图像的清晰度。
其中,视频图像的分辨率(Video Graphics Array,VGA)是单位英寸中所包含的像素点数(Pixels Per Inch,PPI)。分辨率影响图像大小,与图像大小成正比,在码率一定的情况下,分辨率越高,图像越大;分辨率越低,图像越小。在码率一定的情况下,分辨率与清晰度成反比关系,分辨率越高,图像越不清晰,分辨率越低,图像越清晰。
基于本实施例,可以根据帧图像的分辨率范围,采用相应的计算方式来确定所述帧图像的清晰度,提高了帧图像的清晰度的获取效率和准确性。
图2为本申请一实施例中确定所述帧图像的清晰度一个具体示例的流程示意图。如图2所示,在图1所示实施例的基础上,基于所述帧图像的分辨率范围,至少基于所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值,确定所述帧图像的清晰度,可以通过如下方式实现:
确定所述帧图像的分辨率范围,若所述帧图像的分辨率大于第一预设分辨率(例如1000PPI),执行201~202;若所述帧图像的分辨率大于第二预设分辨率(例如700PPI)且不大于第一预设分辨率(例如1000PPI),执行203~205,其中,第二预设分辨率小于第一预设分辨率;若所述帧图像的分辨率大于第三预设分辨率(480PPI)且不大于第二预设分辨率(例如700PPI),执行206~207,其中,第三预设分辨率小于第二预设分辨率。
201、基于所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值,计算清晰度指标值的变化率。
例如,在其中一些实现方式中,若采用的预设清晰度识别算法为canny算法,则通过操作102得到所述帧图像的清晰度指标值表示为canny算法平均值b,通过操作104计算所述压缩后帧图像的清晰度指标值表示为canny算法平均值a,则所述清晰度指标值的变化率表示为Canny变化率m,可以通过如下方式计算:m=(b-a)/b。
202、基于所述清晰度指标值的变化率是否大于第一预设变化率(例如0.05),确定所述帧图像的清晰度是否满足预设清晰度标准。
具体来说,可以在所述清晰度指标值的变化率大于第一预设变化率时,确定所述帧图像的清晰度满足预设清晰度标准;否则,在所述清晰度指标值的变化率不大于第一预设变化率时,确定所述帧图像的清晰度不满足预设清晰度标准。
203、基于所述帧图像的码率和所述压缩后帧图像的码率计算码率变化率,基 于所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值计算清晰度指标值的变化率。
其中,码率是编码器每秒编出的数据大小,单位是kbps,比如800kbps代表编码器每秒产生800kb(或100KB)的数据。在分辨率一定的情况下,码率与清晰度成正比关系,码率越高,图像越清晰;码率越低,图像越不清晰。
例如,在其中一些实现方式中,假设所述待识别视频或者所述帧图像压缩前的码率为e,压缩后对应的压缩后帧图像的码率为f,可以通过如下方式计算码率变化率n:n=(e-f)/e。
204、基于所述码率变化率和所述清晰度指标值的变化率计算码率和清晰度指标值的综合变化率。
205、基于所述综合变化率是否大于第二预设变化率(0.5),确定所述帧图像的清晰度是否满足预设清晰度标准。
具体来说,可以在所述综合变化率大于第二预设变化率时,确定所述帧图像的清晰度满足预设清晰度标准;否则,在所述综合变化率不大于第二预设变化率时,确定所述帧图像的清晰度不满足预设清晰度标准。
206、基于所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值计算清晰度指标值的变化率。
207、基于所述帧图像的码率和所述清晰度指标值的变化率,确定所述帧图像的清晰度是否满足预设清晰度标准。
其中,201、203和206分别为基于所述帧图像的分辨率范围择一执行的操作。
基于本实施例,提供了不同分辨率位于不同分辨率范围时帧图像的清晰度的确定方式,可以分别针对各个分辨率范围内的帧图像提供统一的标准,来快速、准确的确定帧图像的清晰度是否满足预设清晰度标准,提高了帧图像的清晰度是否满足要求的确定效率和准确性。
可选地,在其中一些实现方式中,在204中,可以通过对所述码率变化率和所述清晰度指标值的变化率进行加权计算,得到所述综合变化率。其中,所述清晰度指标值的权重大于所述码率变化率的权重。
例如,在一些可选示例中,针对码率变化率n和Canny变化率m,可以通过如下方式加权计算得到综合变化率t:t=(m*k+n)/2,其中,k的取值为大于1的数值。
基于本实施例,由于清晰度指标值的变化率通常小于码率变化率,采用清晰度指标值的权重大于所述码率变化率的权重的方式,对码率变化率和清晰度指标值的 变化率进行加权计算,得到的综合变化率可以更客观、准确的反应帧图像压缩前后的变化率,基于该综合变化率有助于更客观、准确的确定帧图像的清晰度是否满足预设清晰度标准。
可选地,在其中一些实现方式中,在207中,可以比较所述帧图像的码率是否大于预设码率(例如650kbps)。若所述帧图像的码率大于预设码率,则基于所述清晰度指标值的变化率是否大于第三预设变化率(0.05),确定所述帧图像的清晰度是否满足预设清晰度标准,具体来说,若所述清晰度指标值的变化率大于第三预设变化率(0.05),确定所述帧图像的清晰度满足预设清晰度标准;否则,确定所述帧图像的清晰度不满足预设清晰度标准。否则,若所述帧图像的码率不大于预设码率(例如650kbps),基于所述清晰度指标值的变化率是否大于第四预设变化率(0.1),确定所述帧图像的清晰度是否满足预设清晰度标准,具体来说,若所述清晰度指标值的变化率大于第四预设变化率(0.1)时,确定所述帧图像的清晰度满足预设清晰度标准;否则,确定所述帧图像的清晰度不满足预设清晰度标准。其中,所述第四预设变化率大于所述第三预设变化率。
基于本实施例,在帧图像的分辨率大于第三预设分辨率(480PPI)且不大于第二预设分辨率(例如700PPI)时,若帧图像的码率是否大于预设码率(例如650kbps),帧图像相对较为清晰,此时,仅基于帧图像的码率和清晰度指标值的变化率便可以确定帧图像的清晰度满足预设清晰度标准,提高了帧图像的清晰度的确定效率。
进一步可选地,再参见图2,基于所述帧图像的分辨率范围,至少基于所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值,确定所述帧图像的清晰度时,还可以包括:若所述帧图像的分辨率不大于第四预设分辨率(480PPI),可以直接确定所述待识别视频的清晰度不满足预设清晰度标准,从而提高清晰度识别效率。其中,第四预设分辨率小于第三预设分辨率。
如图3所示,为图2所示实施例中确定帧图像的清晰度是否满足预设清晰度标准的一个具体应用示例图。
可选地,在其中一些实现方式中,具体可以基于所述多个帧图像的清晰度是否满足预设清晰度标准,确定所述待识别视频的清晰度是否满足预设清晰度标准。
例如,在一些实现方式中,可以根据所述多个帧图像的清晰度是否存在不满足预设清晰度标准的帧图像,来确定所述待识别视频的清晰度是否满足预设清晰度标准,若存在不满足预设清晰度标准的帧图像,则确定所述待识别视频的清晰度不满足预设清晰度标准;否则,若不存在不满足预设清晰度标准的帧图像,则确定所述 待识别视频的清晰度满足预设清晰度标准。
或者,在又一些实现方式中,也可以根据所述多个帧图像的清晰度中不满足预设清晰度标准的帧图像与所述多个帧图像的比例是否大于一定阈值(例如10%),来确定所述待识别视频的清晰度是否满足预设清晰度标准,若所述比例大于一定阈值,则确定所述待识别视频的清晰度不满足预设清晰度标准;否则,若所述比例不大于一定阈值,则确定所述待识别视频的清晰度满足预设清晰度标准。
可选地,在其中一些实现方式中,在103中,具体可以采用视频压缩算法(FFmpeg),设置恒定质量编码参数(CRF)的取值为大于28且不大于51的一个数值,对所述帧图像进行压缩,得到所述压缩后帧图像。
其中,CRF是恒定质量的编码方式的参数,通常,为了获取恒定质量的编码,可以通过用同样的大小去压缩每一个相同类型的帧图像,即,扔掉相对来说相同数量的信息,即使用相同的量化参数(QP)。这个量化参数QP定义了从一个像素宏块中丢掉多少信息。
CRF的取值范围为0~51,其中0为无损模式,数值越大,图像质量越差,生成的文件却越小。其中,18~28是一个合理的范围。18被认为是视觉无损的,它的输出视频质量和输入视频相当。CRF大于28时,图像即开始产生视觉损失。本实施例采用CRF的取值为大于28且不大于51的一个数值对帧图像进行压缩,得到的压缩后帧图像即为一个视觉有损图像,此时可以结合帧图像压缩前后的清晰度指标值的变化率来确定帧图像的质量(即清晰度),可以认为帧图像压缩前后的清晰度指标值的变化率越大,原始的帧图像的质量越高(即越清晰)。帧图像压缩前后的清晰度指标值的变化率越小,原始的帧图像的质量越低(即越模糊)。若帧图像压缩前后的清晰度指标值的变化率较小,即小于一定预设值(对应于上述的第一预设变化率、第二预设变化率、第三预设变化率、第四预设变化率),那么基本可以确定原帧图像也是模糊的。
可选地,在其中一些实现方式中,采用CRF的取值为38,对帧图像进行压缩,在这个标准下,不管原帧图像是否清晰,得到的压缩后帧图像基本上视觉感官都已经是模糊状态,然后通过比对原视频和压缩后视频真帧图像的清晰度指标值的变化率,若变化率较小,那么基本可以确定原视频也是模糊的。
本申请实施例中,也可以采用其他的视频压缩算法、和/或设置CRF的取值为其他取值,或者采用其他的编码参数及取值,只要满足压缩后帧图像处于视觉感官的模糊状态标准即可。本公开实施例对此不作限制。
可选地,在其中一些实现方式中,在103中,具体可以采用预设压缩方式,将所述待识别视频压缩到预设低质量标准,得到压缩后视频,所述压缩后视频包括各帧图像压缩后的图像,即包括述帧图像对应的所述压缩后图像。可选地,在其中一些实现方式中,具体可以采用FFmpeg,设置恒定质量编码参数CRF的取值为大于28且不大于51的一个数值,将所述待识别视频压缩到预设低质量标准,得到压缩后视频。
图4为本申请另一实施例提供的视频清晰度识别方法的流程示意图,如图4所示。
401、分别针对待识别视频中多个帧图像中的每个帧图像,获取所述帧图像的分辨率。
本申请实施例中的待识别视频,可以为采用任意视频编码标准、任意格式编码得到的视频,例如,基于常用的H.264/AVC视频编码标准,对原始视频数据编码得到的视频。本申请实施例对待识别视频的编码标准和编码格式不做限制。
402、获取所述待识别视频的码率,作为所述帧图像的码率。
其中,待识别视频中每个帧图像的码率相同,可以直接从编码器获取该待识别视频的码率,作为其中各帧图像的码率。
403、基于预设清晰度识别算法,计算所述帧图像的清晰度指标值。
其中,操作402和403不存在执行顺序限制,二者可以同时执行,也可以以任意先后顺序执行,或者以任意先后时差执行,本申请实施例对此不作限制。
404、采用预设压缩方式,将所述待识别视频压缩到预设低质量标准,得到压缩后视频。
其中,所述压缩后视频包括所述帧图像对应的所述压缩后图像。
405,获取所述压缩后帧图像的码率。
406、基于所述预设清晰度识别算法,计算所述压缩后帧图像的清晰度指标值。
其中,操作405和406不存在执行顺序限制,二者可以同时执行,也可以以任意先后顺序执行,或者以任意先后时差执行,本申请实施例对此不作限制。
407、基于所述帧图像的分辨率范围,基于所述帧图像的码率和所述压缩后帧图像的码率、所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值中的部分或全部信息,确定所述帧图像的清晰度是否满足预设清晰度标准。
其中,该操作407具体实现方式可以参考上述图2所示实施例,此处不再赘述。
408、基于所述多个帧图像的清晰度是否满足预设清晰度标准,确定所述待识 别视频的清晰度是否满足预设清晰度标准。
基于本实施例,可以通过对视频压缩的方式,根据视频中帧图像的分辨率范围,基于所述帧图像的码率和所述压缩后帧图像的码率、所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值中的部分或全部信息,来确定所述帧图像的清晰度是否满足预设清晰度标准,进而确定所述待识别视频的清晰度是否满足预设清晰度标准,可以客观评价各视频的清晰度,使得对视频清晰度的衡量更客观、统一。
另外,采用本申请所提供的技术方案,确定视频的清晰度是否满足预设清晰度标准,可以基于统一标准对不同清晰度的视频进行筛选,有助于节省视频网站、自媒体平台宝贵而有限的存储资源和维护资源,提高资源利用率。
另外,采用本申请所提供的技术方案,确定视频的清晰度是否满足预设清晰度标准,可以基于统一标准对不同清晰度的视频进行筛选,这样,在向用户推荐视频时,可以仅向用户推荐清晰度满足要求的视频,提高用户观看体验、节省用户流量。
进一步可选地,在上述图4所示实施例中,通过操作408确定出所述待识别视频的清晰度是否满足预设清晰度标准后,若所述待识别视频的清晰度不满足预设清晰度标准,可以直接丢弃所述待识别视频,从而节省存储资源和维护资源。若所述待识别视频的清晰度满足预设清晰度标准,可以存储所述待识别视频,以便进一步推荐给用户。
本申请的技术方案可以适用于终端、视频服务器(例如视频网站、自媒体平台)等任意设备中的应用,例如,任意视频数据处理类应用、播放类应用中。利用视频数据处理类应用、播放类应用,执行本申请实施例提供的视频清晰度识别方法时,基于视频中帧图像的分辨率,采用帧图像压缩前后的清晰度指标值,实现了对视频清晰度的识别,可以识别任一视频的清晰度是否满足要求。
另外,采用本申请所提供的技术方案,基于视频中帧图像的分辨率,基于视频压缩前后的清晰度指标值,来确定视频的清晰度,实现了对不同视频清晰度的统一标准识别,可以客观比较不同视频的清晰度,使得对视频清晰度的衡量更客观、统一。
另外,采用本申请所提供的技术方案,确定视频的清晰度,可以基于统一标准对不同清晰度的视频进行筛选,有助于节省视频网站、自媒体平台宝贵而有限的存储资源和维护资源,提高资源利用率。
另外,采用本申请所提供的技术方案,确定视频的清晰度,可以基于统一标准对不同清晰度的视频进行筛选,这样,在向用户推荐视频时,可以仅向用户推荐清 晰度满足要求的视频,提高用户观看体验、节省用户流量。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其它顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其它实施例的相关描述。
图5示出了适于用来实现本申请实施方式的示例性计算机系统/服务器12的框图。图5显示的计算机系统/服务器12仅仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。
如图5所示,计算机系统/服务器12以通用计算设备的形式表现。计算机系统/服务器12的组件可以包括但不限于:一个或者多个处理器或者处理单元16,存储装置或者系统存储器28,连接不同系统组件(包括系统存储器28和处理单元16)的总线18。
总线18表示几类总线结构中的一种或多种,包括存储器总线或者存储器控制器,外围总线,图形加速端口,处理器或者使用多种总线结构中的任意总线结构的局域总线。举例来说,这些体系结构包括但不限于工业标准体系结构(ISA)总线,微通道体系结构(MAC)总线,增强型ISA总线、视频电子标准协会(VESA)局域总线以及外围组件互连(PCI)总线。
计算机系统/服务器12典型地包括多种计算机系统可读介质。这些介质可以是任何能够被计算机系统/服务器12访问的可用介质,包括易失性和非易失性介质,可移动的和不可移动的介质。
系统存储器28可以包括易失性存储器形式的计算机系统可读介质,例如随机存取存储器(RAM)30和/或高速缓存存储器32。计算机系统/服务器12可以进一步包括其它可移动/不可移动的、易失性/非易失性计算机系统存储介质。仅作为举例,存储系统34可以用于读写不可移动的、非易失性磁介质(图5未显示,通常称为“硬盘驱动器”)。尽管图5中未示出,可以提供用于对可移动非易失性磁盘(例如“软盘”)读写的磁盘驱动器,以及对可移动非易失性光盘(例如CD-ROM,DVD-ROM或者其它光介质)读写的光盘驱动器。在这些情况下,每个驱动器可以通过一个或者多个数据介质接口与总线18相连。系统存储器28可以包括至少一个 程序产品,该程序产品具有一组(例如至少一个)程序模块,这些程序模块被配置以执行本申请各实施例的功能。
具有一组(至少一个)程序模块42的程序/实用工具40,可以存储在例如系统存储器28中,这样的程序模块42包括——但不限于——操作系统、一个或者多个应用程序、其它程序模块以及程序数据,这些示例中的每一个或某种组合中可能包括网络环境的实现。程序模块42通常执行本申请所描述的实施例中的功能和/或方法。
计算机系统/服务器12也可以与一个或多个外部设备14(例如键盘、指向设备、显示器24等)通信,还可与一个或者多个使得用户能与该计算机系统/服务器12交互的设备通信,和/或与使得该计算机系统/服务器12能与一个或多个其它计算设备进行通信的任何设备(例如网卡,调制解调器等等)通信。这种通信可以通过输入/输出(I/O)接口44进行。并且,计算机系统/服务器12还可以通过网络适配器20与一个或者多个网络(例如局域网(LAN),广域网(WAN)和/或公共网络,例如因特网)通信。如图5所示,网络适配器20通过总线18与计算机系统/服务器12的其它模块通信。应当明白,尽管图中未示出,可以结合计算机系统/服务器12使用其它硬件和/或软件模块,包括但不限于:微代码、设备驱动器、冗余处理单元、外部磁盘驱动阵列、RAID系统、磁带驱动器以及数据备份存储系统等。
处理单元16通过运行存储在系统存储器28中的程序,从而执行各种功能应用以及数据处理,例如实现图1~图4所对应的实施例任一实施例所提供的方法。
本申请另一实施例还提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现图1~图4所对应的实施例任一实施例所提供的方法。
具体来说,可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信 号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括——但不限于——电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括——但不限于——无线、电线、光缆、RF等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码,所述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程序程序设计语言—诸如”C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)——连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或页面组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可 以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
上述以软件功能单元的形式实现的集成的单元,可以存储在一个计算机可读取存储介质中。上述软件功能单元存储在一个存储介质中,包括若干指令用以使得一个计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器(processor)执行本申请各个实施例所述方法的部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁盘或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (13)

  1. 一种视频清晰度识别方法,其特征在于,包括:
    分别针对待识别视频中多个帧图像中的每个帧图像,获取所述帧图像的分辨率;
    基于预设清晰度识别算法,计算所述帧图像的清晰度指标值;
    采用预设压缩方式,将所述帧图像压缩到预设低质量标准,得到压缩后帧图像;
    基于所述预设清晰度识别算法,计算所述压缩后帧图像的清晰度指标值;
    基于所述帧图像的分辨率、所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值,确定所述帧图像的清晰度;
    基于所述多个帧图像的清晰度,确定所述待识别视频的清晰度。
  2. 根据权利要求1所述的方法,其特征在于,还包括:
    基于所述帧图像的分辨率确定所述帧图像的分辨率范围;
    所述基于所述帧图像的分辨率、所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值,确定所述帧图像的清晰度,包括:
    基于所述帧图像的分辨率范围,至少基于所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值,确定所述帧图像的清晰度。
  3. 根据权利要求2所述的方法,其特征在于,所述基于所述帧图像的分辨率范围,至少基于所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值,确定所述帧图像的清晰度,包括:
    若所述帧图像的分辨率大于第一预设分辨率,基于所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值,计算清晰度指标值的变化率;基于所述清晰度指标值的变化率是否大于第一预设变化率,确定所述帧图像的清晰度是否满足预设清晰度标准;
    若所述帧图像的分辨率大于第二预设分辨率且不大于第一预设分辨率,基于所述帧图像的码率和所述压缩后帧图像的码率计算码率变化率,基于所述帧图像的清晰度指标值和所述压缩后帧图像的清晰度指标值计算清晰度指标值的变化率;基于所述码率变化率和所述清晰度指标值的变化率计算码率和清晰度指标值的综合变化率;基于所述综合变化率是否大于第二预设变化率,确定所述帧图像的清晰度是否满足预设清晰度标准;
    若所述帧图像的分辨率大于第三预设分辨率且不大于第二预设分辨率,基于所述帧 图像的清晰度指标值和所述压缩后帧图像的清晰度指标值计算清晰度指标值的变化率;基于所述帧图像的码率和所述清晰度指标值的变化率,确定所述帧图像的清晰度是否满足预设清晰度标准。
  4. 根据权利要求3所述的方法,其特征在于,所述基于所述码率变化率和所述清晰度指标值的变化率计算码率和清晰度指标值的综合变化率,包括:
    对所述码率变化率和所述清晰度指标值的变化率进行加权计算,得到所述综合变化率;其中,所述清晰度指标值的权重大于所述码率变化率的权重。
  5. 根据权利要求3所述的方法,其特征在于,所述基于所述帧图像的码率和所述清晰度指标值的变化率,确定所述帧图像的清晰度是否满足预设清晰度标准,包括:
    比较所述帧图像的码率是否大于预设码率;
    若所述帧图像的码率大于预设码率,基于所述清晰度指标值的变化率是否大于第三预设变化率,确定所述帧图像的清晰度是否满足预设清晰度标准;
    否则,若所述帧图像的码率不大于预设码率,基于所述清晰度指标值的变化率是否大于第四预设变化率,确定所述帧图像的清晰度是否满足预设清晰度标准;其中,所述第四预设变化率大于所述第三预设变化率。
  6. 根据权利要求3~5任一权利要求所述的方法,其特征在于,还包括:
    若所述帧图像的分辨率不大于所述第三预设分辨率,确定所述待识别视频的清晰度不满足预设清晰度标准。
  7. 根据权利要求3~5任一权利要求所述的方法,其特征在于,还包括:
    获取所述待识别视频的码率,作为所述待识别视频中每个帧图像的码率;
    获取所述压缩后帧图像的码率。
  8. 根据权利要求3~5任一权利要求所述的方法,其特征在于,所述基于所述多个帧图像的清晰度,确定所述待识别视频的清晰度,包括:
    基于所述多个帧图像的清晰度是否满足预设清晰度标准,确定所述待识别视频的清晰度是否满足预设清晰度标准。
  9. 根据权利要求8所述的方法,其特征在于,还包括:
    若所述待识别视频的清晰度不满足预设清晰度标准,丢弃所述待识别视频。
  10. 根据权利要求1~5任一权利要求所述的方法,其特征在于,所述采用预设压缩方式,将所述帧图像压缩到预设低质量标准,得到压缩后帧图像,包括:
    采用视频压缩算法FFmpeg,设置恒定质量编码参数CRF的取值为大于28且不大于51的一个数值,对所述帧图像进行压缩,得到所述压缩后帧图像。
  11. 根据权利要求1~5任一权利要求所述的方法,其特征在于,所述采用预设压缩方式,将所述帧图像压缩到预设低质量标准,得到压缩后帧图像,包括:
    采用预设压缩方式,将所述待识别视频压缩到预设低质量标准,得到压缩后视频,所述压缩后视频包括所述帧图像对应的所述压缩后图像。
  12. 一种电子设备,其特征在于,所述电子设备包括:
    一个或多个处理器;
    存储装置,用于存储一个或多个程序,
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1~11任一权利要求所述的方法。
  13. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如权利要求1~11任一权利要求所述的方法。
PCT/CN2021/118231 2020-09-17 2021-09-14 视频清晰度识别方法、电子设备及存储介质 WO2022057789A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010982081.5 2020-09-17
CN202010982081.5A CN112135140B (zh) 2020-09-17 2020-09-17 视频清晰度识别方法、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022057789A1 true WO2022057789A1 (zh) 2022-03-24

Family

ID=73841799

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/118231 WO2022057789A1 (zh) 2020-09-17 2021-09-14 视频清晰度识别方法、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN112135140B (zh)
WO (1) WO2022057789A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225961A (zh) * 2022-04-22 2022-10-21 上海赛连信息科技有限公司 一种无参考网络视频质量评价方法和装置
CN117041625A (zh) * 2023-08-02 2023-11-10 成都梵辰科技有限公司 一种超高清视频图像质量检测网络构建方法及系统

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112135140B (zh) * 2020-09-17 2023-11-28 上海连尚网络科技有限公司 视频清晰度识别方法、电子设备及存储介质
CN113436137A (zh) * 2021-03-12 2021-09-24 北京世纪好未来教育科技有限公司 一种图像清晰度识别方法、装置、设备及介质
CN113392241B (zh) * 2021-06-29 2023-02-03 中海油田服务股份有限公司 测井图像清晰度的识别方法、装置、介质及电子设备
CN113724225B (zh) * 2021-08-31 2024-04-09 北京达佳互联信息技术有限公司 应用程序传输质量的确定方法及装置
CN115396664A (zh) * 2022-08-19 2022-11-25 上海哔哩哔哩科技有限公司 视频质量的评价方法、装置、存储介质及计算机系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090310016A1 (en) * 2006-09-07 2009-12-17 Canon Kabushiki Kaisha Video output apparatus and control method thereof
CN103581662A (zh) * 2012-07-26 2014-02-12 腾讯科技(深圳)有限公司 视频清晰度测量方法和系统
CN107833214A (zh) * 2017-11-03 2018-03-23 北京奇虎科技有限公司 视频清晰度检测方法、装置、计算设备及计算机存储介质
CN109831680A (zh) * 2019-03-18 2019-05-31 北京奇艺世纪科技有限公司 一种视频清晰度的评价方法及装置
CN110769296A (zh) * 2019-10-30 2020-02-07 杭州叙简科技股份有限公司 一种传输时基于本地缓存的视频码率自适应调节方式
CN112135140A (zh) * 2020-09-17 2020-12-25 上海连尚网络科技有限公司 视频清晰度识别方法、电子设备及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5805001B2 (ja) * 2012-04-13 2015-11-04 三菱電機株式会社 画像鮮鋭度評価装置
CN104601999A (zh) * 2014-12-31 2015-05-06 乐视网信息技术(北京)股份有限公司 一种基于关键帧的编码方法及装置
CN105489194B (zh) * 2015-11-24 2018-09-04 小米科技有限责任公司 一种显示图像的方法和装置
CN107958455B (zh) * 2017-12-06 2019-09-20 百度在线网络技术(北京)有限公司 图像清晰度评估方法、装置、计算机设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090310016A1 (en) * 2006-09-07 2009-12-17 Canon Kabushiki Kaisha Video output apparatus and control method thereof
CN103581662A (zh) * 2012-07-26 2014-02-12 腾讯科技(深圳)有限公司 视频清晰度测量方法和系统
CN107833214A (zh) * 2017-11-03 2018-03-23 北京奇虎科技有限公司 视频清晰度检测方法、装置、计算设备及计算机存储介质
CN109831680A (zh) * 2019-03-18 2019-05-31 北京奇艺世纪科技有限公司 一种视频清晰度的评价方法及装置
CN110769296A (zh) * 2019-10-30 2020-02-07 杭州叙简科技股份有限公司 一种传输时基于本地缓存的视频码率自适应调节方式
CN112135140A (zh) * 2020-09-17 2020-12-25 上海连尚网络科技有限公司 视频清晰度识别方法、电子设备及存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225961A (zh) * 2022-04-22 2022-10-21 上海赛连信息科技有限公司 一种无参考网络视频质量评价方法和装置
CN115225961B (zh) * 2022-04-22 2024-01-16 上海赛连信息科技有限公司 一种无参考网络视频质量评价方法和装置
CN117041625A (zh) * 2023-08-02 2023-11-10 成都梵辰科技有限公司 一种超高清视频图像质量检测网络构建方法及系统
CN117041625B (zh) * 2023-08-02 2024-04-19 成都梵辰科技有限公司 一种超高清视频图像质量检测网络构建方法及系统

Also Published As

Publication number Publication date
CN112135140B (zh) 2023-11-28
CN112135140A (zh) 2020-12-25

Similar Documents

Publication Publication Date Title
WO2022057789A1 (zh) 视频清晰度识别方法、电子设备及存储介质
JP6928041B2 (ja) 動画を処理するための方法および装置
CN109844736B (zh) 概括视频内容
TW201914300A (zh) 一種影像資料的編碼、解碼方法及裝置
US9609338B2 (en) Layered video encoding and decoding
US20190026555A1 (en) Image compression using content categories
WO2021056737A1 (zh) 高频业务数据的数据压缩方法、装置、设备及存储介质
CN113542795A (zh) 视频处理方法、装置、电子设备及计算机可读存储介质
US11521025B2 (en) Selective image compression of an image stored on a device based on user preferences
US10880560B2 (en) Content-based transcoder
CN110248195B (zh) 用于输出信息的方法和装置
CN114245209B (zh) 视频分辨率确定、模型训练、视频编码方法及装置
US9053526B2 (en) Method and apparatus for encoding cloud display screen by using application programming interface information
WO2024139166A1 (zh) 视频编码方法及装置、电子设备和存储介质
US20130039429A1 (en) Computer display content coding method and system
CN114071190A (zh) 云应用视频流处理方法、相关装置及计算机程序产品
US20140009563A1 (en) Non-video codecs with video conferencing
US10764578B2 (en) Bit rate optimization system and method
US9997132B2 (en) Data transmission method, data transmission system and portable display device of transmitting compressed data
JP2022546774A (ja) イントラ予測のための補間フィルタリング方法と装置、コンピュータプログラム及び電子装置
CN113409199A (zh) 图像处理方法、装置、电子设备及计算机可读介质
CN108765503B (zh) 一种肤色检测方法、装置及终端
CN113111200B (zh) 审核图片文件的方法、装置、电子设备和存储介质
US20160105731A1 (en) Systems and methods for identifying and acquiring information regarding remotely displayed video content
CN117278765B (zh) 一种视频压缩方法、装置、设备以及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21868612

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21868612

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/09/2023)
