CN115396661A - Method and device for determining decoding performance of equipment, electronic equipment and storage medium - Google Patents

Method and device for determining decoding performance of equipment, electronic equipment and storage medium

Info

Publication number
CN115396661A
Authority
CN
China
Prior art keywords
image
determining
detection
test
decoding performance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210907125.7A
Other languages
Chinese (zh)
Inventor
周霆
王鹏
王冠雄
海小梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202210907125.7A priority Critical patent/CN115396661A/en
Publication of CN115396661A publication Critical patent/CN115396661A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N 17/004 Diagnosis, testing or measuring for television systems or their details for digital television systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention relates to a method and apparatus for determining the decoding performance of a device, an electronic device, and a storage medium. The method for determining the decoding performance of a device comprises the following steps: acquiring image detection data from a terminal device, wherein the image detection data is obtained by the terminal device detecting the playback effect of test video data; performing difference identification between a detection image in the image detection data and the corresponding test image in the test video data to obtain an identification result, wherein the encoding format of the detection image differs from that of the test image; and determining the decoding performance of the terminal device based on the identification result. The embodiments of the application improve the efficiency of detecting the decoding performance of terminal devices, solve the problem of testing the H.265 playback capability of a large number of long-tail OTT devices, and greatly increase the proportion of H.265 streams played by OTT devices, thereby significantly reducing the company's network bandwidth cost and effectively improving playback smoothness for users with low access bandwidth.

Description

Method and device for determining decoding performance of equipment, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for determining decoding performance of a device, an electronic device, and a storage medium.
Background
The OTT side has a very large user base. On one hand, because television screens have high resolutions, the playback bitrate must be higher than on other platforms to ensure viewing clarity; on the other hand, because device performance varies widely, H.264-encoded video is played by default to ensure playback compatibility. As a result, OTT devices account for a far larger share of consumed bandwidth than other platforms. Compared with H.264, H.265 encoding can reduce transmission bandwidth by at least half; if the H.265 usage coverage of OTT devices can be increased, the company's bandwidth cost can be reduced and playback smoothness for users with low access bandwidth can be improved.
However, because OTT terminals come in many brands and models, it is difficult to judge from a single dimension (such as chip type) whether a given device supports H.265 playback. The current approach is manual testing, in which the H.265 playback option is released model by model. This leaves a large number of long-tail devices untested, and to guarantee playback compatibility these devices can only play H.264-encoded video.
Disclosure of Invention
In order to solve the technical problem or at least partially solve the technical problem, the present application provides a device decoding performance determination method, apparatus, electronic device, and storage medium.
In a first aspect, the present application provides a method for determining decoding performance of a device, including:
acquiring image detection data from terminal equipment, wherein the image detection data is obtained by detecting the playing effect of test video data by the terminal equipment;
performing difference identification on a detection image in the image detection data and a corresponding test image in the test video data to obtain an identification result, wherein the coding format of the detection image is different from that of the test image;
and determining the decoding performance of the terminal device based on the identification result.
Optionally, performing difference identification on the detection image in the image detection data and the corresponding test image in the test video data to obtain an identification result, including:
identifying a first marker object in the detection image;
determining a test image corresponding to the detection image in the test video data according to the first identification object;
determining a similarity between the detection image and the test image;
and when the similarity is larger than or equal to a first threshold value, determining that the detection image passes the authentication.
Optionally, determining the similarity between the detection image and the test image comprises:
determining structural similarity between the detection image and the test image;
if the structural similarity is larger than a preset similarity threshold value, determining that the similarity between the detection image and the test image is larger than or equal to a first threshold value;
otherwise, determining that the similarity between the detection image and the test image is less than a first threshold.
Optionally, determining the similarity between the detection image and the test image comprises:
determining a perceptual hash value between the detection image and the test image;
determining that the similarity between the detected image and the test image is greater than or equal to a first threshold value when the perceptual hash value is smaller than a preset perceptual hash threshold value;
otherwise, determining that the similarity between the detection image and the test image is less than a first threshold.
Optionally, determining a similarity between the detection image and the test image comprises:
determining structural similarity between the detection image and the test image;
if the structural similarity is larger than a preset similarity threshold value, determining a perceptual hash value between the detected image and the test image;
determining that the similarity between the detected image and the test image is greater than or equal to a first threshold value when the perceptual hash value is smaller than a preset perceptual hash threshold value;
otherwise, determining that the similarity between the detection image and the test image is less than a first threshold.
Optionally, identifying a first marker object in the detection image comprises:
detecting a target area containing a marking object in the detection image;
identifying an identifying object in the target area;
verifying the text validity of the identification object;
and if the text validity of the identification object passes the verification, converting the identification object into a preset character type to obtain the first identification object.
Optionally, performing difference identification on the detection image in the image detection data and the corresponding test image in the test video data to obtain an identification result, further comprising:
acquiring an image shaking range of the detected image;
and when the image shaking range is smaller than or equal to a preset second threshold value, determining that the detected image passes the authentication.
Optionally, the method further comprises:
acquiring device description data from the terminal device;
determining the equipment type corresponding to each image detection data based on the equipment description data;
and determining the decoding performance of the terminal equipment corresponding to each equipment type based on the image detection data corresponding to each equipment type.
Optionally, determining, based on the image detection data corresponding to each device type, a decoding performance of a terminal device corresponding to each device type includes:
determining a detection image sequence according to the image detection data aiming at the image detection data corresponding to each equipment type, and determining the equipment type of each detection image in the detection image sequence;
and determining the decoding performance of the terminal equipment of the equipment type based on the identification result of the detection image corresponding to each equipment type.
Optionally, determining, based on the identification result of the detection image corresponding to each device class, the decoding performance of the terminal device of the device class includes:
determining an identification passing rate corresponding to each equipment type based on the identification result of the detection images corresponding to each equipment type, wherein the identification passing rate is determined according to the ratio of the number of the detection images passing the identification to the number of all the detection images;
if the identification passing rate corresponding to any equipment type exceeds a preset passing rate threshold value, determining that the decoding performance of the terminal equipment of the equipment type supports a preset encoding mode;
otherwise, determining that the decoding performance of the terminal equipment of the equipment type does not support the preset coding mode.
Optionally, determining, based on the identification result of each of the detection images, the decoding performance of the terminal device of the device class includes:
determining an image detection success rate corresponding to each equipment type, wherein the image detection success rate is determined according to the ratio of the number of the detection images to the number of the test images;
if the image detection success rate is larger than a preset detection threshold value, determining an identification passing rate corresponding to each equipment type, wherein the identification passing rate is determined according to the ratio of the number of the detection images passing the identification to the number of all the detection images;
if the authentication passing rate is greater than a preset authentication threshold value, determining that the decoding performance of the terminal equipment of the equipment type supports a preset encoding mode;
otherwise, determining that the decoding performance of the terminal equipment of the equipment type does not support the preset coding mode.
In a second aspect, the present application provides an apparatus for determining decoding performance of a device, including:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring image detection data from terminal equipment, and the image detection data is obtained by detecting the playing effect of test video data by the terminal equipment;
the identification module is used for carrying out difference identification on a detection image in the image detection data and a corresponding test image in the test video data to obtain an identification result, wherein the coding format of the detection image is different from that of the test image;
a first determining module, configured to determine, based on the authentication result, a decoding performance of the terminal device.
Optionally, the authentication module comprises:
a recognition unit configured to recognize a first identification object in the detection image;
a first determining unit, configured to determine, in the test video data, a test image corresponding to the detection image according to the first identification object;
a second determination unit configured to determine a similarity between the detection image and the test image;
and the third determining unit is used for determining that the detection image passes the authentication when the similarity is greater than or equal to a first threshold value.
Optionally, the second determining unit includes:
a first determining subunit configured to determine a structural similarity between the detection image and the test image;
a second determining subunit, configured to determine that, if the structural similarity is greater than a preset similarity threshold, a similarity between the detected image and the test image is greater than or equal to a first threshold; otherwise, determining that the similarity between the detection image and the test image is less than a first threshold.
Optionally, the second determining unit includes:
a third determining subunit, configured to determine a perceptual hash value between the detection image and the test image;
a fourth determining subunit, configured to determine that the similarity between the detected image and the test image is greater than or equal to a first threshold if the perceptual hash value is smaller than a preset perceptual hash threshold; otherwise, determining that the similarity between the detection image and the test image is less than a first threshold.
Optionally, the second determining unit includes:
a fifth determining subunit, configured to determine structural similarity between the detection image and the test image;
a sixth determining subunit, configured to determine a perceptual hash value between the detected image and the test image if the structural similarity is greater than a preset similarity threshold;
a seventh determining subunit, configured to determine that, if the perceptual hash value is smaller than a preset perceptual hash threshold, a similarity between the detected image and the test image is greater than or equal to a first threshold; otherwise, determining that the similarity between the detection image and the test image is less than a first threshold.
Optionally, the identification unit comprises:
a detection subunit, configured to detect a target region containing a marker object in the detection image;
the identification subunit is used for identifying the identification object in the target area;
the verification subunit is used for verifying the text validity of the identification object;
and the conversion subunit is used for converting the identification object into a preset character type to obtain the first identification object if the text validity of the identification object passes the verification.
Optionally, the authentication module further comprises:
an acquisition unit configured to acquire an image shake range of the detection image;
and the fourth determining unit is used for determining that the detected image passes the authentication when the image shaking range is smaller than or equal to a preset second threshold value.
Optionally, the apparatus further comprises:
a second obtaining module, configured to obtain device description data from the terminal device;
the second determining module is used for determining the equipment type corresponding to each image detection data based on the equipment description data;
and the third determining module is used for determining the decoding performance of the terminal equipment corresponding to each equipment type based on the image detection data corresponding to each equipment type.
Optionally, the third determining module includes:
a fifth determining unit, configured to determine, for image detection data corresponding to each device type, a detection image sequence according to the image detection data, and determine a device type to which each of the detection images in the detection image sequence belongs;
a sixth determining unit configured to determine, based on the result of the identification of the detection image corresponding to each device class, decoding performance of the terminal device of the device class.
Optionally, the third determining module includes:
a seventh determining unit configured to determine an authentication pass rate corresponding to each device category based on an authentication result of the detection images corresponding to each device category, the authentication pass rate being determined according to a ratio between the number of the detection images that have passed the authentication and the number of all the detection images;
an eighth determining unit, configured to determine that, if the authentication passing rate corresponding to any device class exceeds a preset passing rate threshold, the decoding performance of the terminal device of the device class supports a preset encoding mode; otherwise, determining that the decoding performance of the terminal equipment of the equipment type does not support the preset coding mode.
Optionally, the third determining module includes:
a ninth determining unit, configured to determine an image detection success rate corresponding to each device category, where the image detection success rate is determined according to a ratio between the number of the detection images and the number of the test images;
a tenth determining unit, configured to determine, if the image detection success rate is greater than a preset detection threshold, an authentication passing rate corresponding to each device category, where the authentication passing rate is determined according to a ratio between the number of detected images that pass authentication and the number of all detected images;
an eleventh determining unit, configured to determine that, if the qualification passing rate is greater than a preset qualification threshold, the decoding performance of the terminal device of the device class supports a preset coding mode; otherwise, determining that the decoding performance of the terminal equipment of the equipment type does not support the preset coding mode.
In a third aspect, the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor configured to implement the method for determining decoding performance of the device according to any one of the first aspect when executing a program stored in a memory.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a program of a device decoding performance determining method, which when executed by a processor, implements the steps of the device decoding performance determining method of any one of the first aspects.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
the method comprises the steps of firstly obtaining image detection data from terminal equipment, wherein the image detection data is obtained by detecting the playing effect of test video data by the terminal equipment; then, carrying out difference identification on a detection image in the image detection data and a corresponding test image in the test video data to obtain an identification result, wherein the coding format of the detection image is different from that of the test image; finally, the decoding performance of the terminal equipment can be determined based on the identification result, the efficiency of detecting the decoding performance of the terminal equipment is improved, the problem of testing the H.265 playing capacity of a large number of long-tail OTT equipment is solved, the H.265 code stream playing duty ratio of the OTT equipment is greatly improved, the network bandwidth cost of a company is obviously reduced, and the playing fluency of users with low access bandwidth is effectively improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is apparent that, for those skilled in the art, other drawings can also be obtained from these drawings without inventive effort.
Fig. 1 is a deployment framework diagram of a data statistics analysis system according to an embodiment of the present application;
Fig. 2 is a flowchart of a method for determining decoding performance of a device according to an embodiment of the present application;
Fig. 3 is a block diagram of a device decoding performance determining apparatus according to an embodiment of the present application;
Fig. 4 is a structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, because OTT terminals come in many brands and models, it is difficult to judge from a single dimension (such as chip type) whether a given device supports H.265 playback. The current approach is manual testing, in which the H.265 playback option is released model by model; as a result, a large number of long-tail devices cannot be tested, and to guarantee playback compatibility these devices can only play H.264-encoded video. Therefore, the method, apparatus, electronic device, and storage medium for determining the decoding performance of a device in the embodiments of the present application can improve the efficiency of detecting the decoding performance of terminal devices, solve the problem of testing the H.265 playback capability of a large number of long-tail OTT devices, and greatly increase the proportion of H.265 streams played by OTT devices, thereby significantly reducing the company's network bandwidth cost and effectively improving playback smoothness for users with low access bandwidth.
In order to support the daily data uploads of millions of users and dynamically adapt to weekend upload peaks, a detailed deployment framework of the data statistics and analysis system is shown in Fig. 1. As shown in Fig. 1, the system comprises a front-end proxy server, user data processing servers, a database, and a Business Intelligence (BI) data server.
The front-end proxy server performs load balancing and scheduling, distributing the picture sequence data uploaded by each user terminal to different user data processing servers; the user data processing servers write the processing results of the picture sequence data into the database; the BI data server periodically queries the database tables, classifies and aggregates the processing results by device name/type/version/manufacturer and other information, and dynamically generates a device white list of supported playback modes.
As shown in fig. 2, a method for determining decoding performance of a device according to an embodiment of the present application may be applied to a BI data server, where the method for determining decoding performance of a device includes:
step S101, image detection data from the terminal device is acquired.
In the embodiment of the application, the image detection data is obtained by the terminal device detecting the playback effect of the test video data.
Because manufacturers follow different production standards and technologies, the decoding performance of the terminal devices they produce differs. Decoding performance refers to the ability to process video data, for example the ability to decode video data encoded with a preset encoding mode, the ability to play different video data in succession, and so on.
If video data is sent directly to the terminal device for playback, the device may lack the corresponding decoding capability, causing green screens, frozen frames, or even a complete failure to play, which degrades the user experience. Therefore, in the embodiment of the present invention, before the video data is formally sent to the terminal device, the decoding performance of the terminal device may first be detected to determine whether the terminal device is capable of processing the video data.
In the embodiment of the invention, the terminal equipment can play the test video data first, and the test video data can be used for detecting the decoding performance of the terminal equipment.
As an example, the test video data may be produced in advance according to the actual detection requirements; for example, when it is necessary to detect whether the terminal device can decode video data encoded with a preset encoding method, the original video data may be encoded with the preset encoding method and the encoded video may then be sent to the terminal device.
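As an illustration only, such a test stream might be produced offline with a tool such as FFmpeg. The sketch below assumes FFmpeg is installed and built with libx265 and the drawtext filter; the file names and bitrate are hypothetical, and this is merely one way to encode original video with a preset encoding method (here H.265) while stamping a frame identifier onto each frame, as described later in this specification.

```python
import subprocess

def make_h265_test_stream(src: str = "original.mp4",
                          dst: str = "test_h265.mp4",
                          bitrate: str = "2000k") -> None:
    """Encode a source clip with H.265 and burn the running frame number into
    a fixed corner so each frame carries a unique frame identifier."""
    cmd = [
        "ffmpeg", "-y", "-i", src,
        # drawtext stamps the frame number (%{n}) at a fixed position;
        # on some systems a font file must be given explicitly.
        "-vf", "drawtext=text='%{n}':x=20:y=20:fontsize=48:fontcolor=white",
        "-c:v", "libx265", "-b:v", bitrate,
        "-an", dst,  # audio is irrelevant for the decode test
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    make_h265_test_stream()
```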
After receiving the encoded original video data, the terminal device may decode it with its decoder, thereby obtaining the test video data.
Since the test video data is only used for detecting the decoding performance of the terminal device, its image content may have no viewing value; playing it directly on the screen of the terminal device could interfere with the user's use of the device, for example if the test video data suddenly started playing while the terminal device was in standby. Therefore, in the embodiment of the invention, the test video data can be played off-screen on the terminal device. Playing the test video data off-screen means rendering it off-screen on the terminal device: off-screen rendering refers to the GPU (Graphics Processing Unit) opening a new buffer outside the current screen buffer for rendering. By playing the test video data off-screen, the whole detection process is imperceptible to the user, so the user's use of the terminal device is not affected.
As an example, the terminal device may refer to a device used to play video data, such as a high-definition player, a high-definition set-top box, a smart television, a smartphone, a tablet computer, or a smart projector.
In the embodiment of the invention, if the terminal equipment does not have the decoding performance to be detected by the test video data, the test video data cannot be normally played; if the terminal device has the decoding performance to be detected by the test video data, the test video data can be normally played.
Therefore, while playing the test video data off-screen, the terminal device can generate image detection data for the off-screen playback. The image detection data refers to data collected from the images of the test video data played off-screen by the terminal device, for example screenshots of the off-screen playback, which is not limited in the embodiment of the invention. The image detection data can be used to analyze whether anomalies (such as a black screen, a corrupted picture, playback stuttering, or a change of the rendering area before and after ABS switching) occur when the terminal device plays the test video data.
Step S102, carrying out difference identification on a detection image in the image detection data and a corresponding test image in the test video data to obtain an identification result;
In the embodiment of the application, the encoding format of the detection image is different from that of the test image. In this application, the detection image uses an encoding format that is already on the device's supported white list, and it is used to test an encoding format whose support is unknown; H.265, for example, is one such format with unknown support.
In the embodiment of the application, the image detection data includes a detection image sequence generated by periodically taking screenshots while a group of terminal devices plays the test video data. The detection image sequence includes at least one detection image, the test video data includes a test image sequence, the test image sequence includes at least one test image, and the detection images in the image detection data correspond one to one with the test images in the test image sequence.
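Purely to make the data layout concrete, the following minimal sketch models a detection image captured on the terminal and its one-to-one mapping to a test image via the frame identifier; the type names and fields are the editor's own illustrative assumptions, not structures defined in the patent.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class TestImage:
    frame_id: int          # frame identifier stamped on the test video frame
    path: str              # reference frame extracted from the test video data

@dataclass
class DetectionImage:
    capture_time_s: float  # moment of the periodic screenshot, in seconds
    path: str              # screenshot taken during off-screen playback
    frame_id: Optional[int] = None  # filled in later by OCR of the identifier

def match_detection_to_test(detections: List[DetectionImage],
                            tests: Dict[int, TestImage]) -> List[Tuple[DetectionImage, TestImage]]:
    """Pair each detection image with the test image carrying the same frame
    identifier; unmatched detections simply yield no pair."""
    return [(d, tests[d.frame_id]) for d in detections
            if d.frame_id is not None and d.frame_id in tests]
```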
In this step, a difference between the detection image and the corresponding test image may be determined, and the authentication result may be determined based on the difference.
Specifically, the similarity between the detection image and the test image may be determined, and the detection image is determined to be authenticated when the similarity is greater than or equal to a first threshold.
Further, the image shake range of the detected image may be determined when the similarity is greater than or equal to a first threshold, and the detected image may be determined to be authenticated when the image shake range is less than or equal to a preset second threshold.
Step S103, determining the decoding performance of the terminal device based on the identification result.
Since the image detection data comprises a detection image sequence, the detection image sequence comprises at least one detection image, the test video data comprises a test image sequence, and the test image sequence comprises at least one test image, the decoding performance of the terminal device can be determined based on the identification result of each detection image in the terminal device.
Specifically, an identification pass rate corresponding to the terminal device may be calculated, where the identification pass rate is the ratio between the number of detection images that pass the identification and the number of all detection images. When the identification pass rate exceeds a first preset threshold, it is determined that the decoding performance of the terminal device supports the preset encoding mode; otherwise, it is determined that the decoding performance of the terminal device does not support the preset encoding mode.
Alternatively, the image detection success rate corresponding to the terminal device may first be calculated, where the image detection success rate is the ratio between the number of detection images in the detection image sequence and the number of test images in the test image sequence. When the image detection success rate is greater than a second preset threshold, the identification pass rate corresponding to the terminal device is calculated, where the identification pass rate is the ratio between the number of detection images that pass the identification and the number of all detection images. When the identification pass rate exceeds the first preset threshold, it is determined that the decoding performance of the terminal device supports the preset encoding mode; otherwise, it is determined that the decoding performance of the terminal device does not support the preset encoding mode.
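A minimal sketch of the two-stage decision just described, assuming the per-image identification results are already available as booleans; the threshold values are placeholders for illustration, not values prescribed by the patent.

```python
from typing import List

def supports_preset_encoding(pass_flags: List[bool],
                             num_test_images: int,
                             detect_threshold: float = 0.95,
                             pass_threshold: float = 0.99) -> bool:
    """Two-stage check: first the image detection success rate
    (# detection images / # test images), then the identification pass rate
    (# passed detection images / # detection images)."""
    if not pass_flags or num_test_images == 0:
        return False
    detection_success_rate = len(pass_flags) / num_test_images
    if detection_success_rate <= detect_threshold:
        return False
    identification_pass_rate = sum(pass_flags) / len(pass_flags)
    return identification_pass_rate > pass_threshold
```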
In this way, image detection data from the terminal device is first obtained, where the image detection data is obtained by the terminal device detecting the playback effect of test video data; then difference identification is performed between a detection image in the image detection data and the corresponding test image in the test video data to obtain an identification result, where the encoding format of the detection image differs from that of the test image; finally, the decoding performance of the terminal device can be determined based on the identification result. This improves the efficiency of detecting the decoding performance of terminal devices, solves the problem of testing the H.265 playback capability of a large number of long-tail OTT devices, and greatly increases the proportion of H.265 streams played by OTT devices, thereby significantly reducing the company's network bandwidth cost and effectively improving playback smoothness for users with low access bandwidth.
In another embodiment of the present application, in step S102, performing difference identification on a detection image in the image detection data and a corresponding test image in the test video data to obtain an identification result, including:
step S201, identifying a first identification object in the detection image;
In this step, a target region containing the identification object may be detected in the detection image; the identification object in the target region is recognized; the text validity of the identification object is verified; and if the text validity of the identification object passes the verification, the identification object is converted into a preset character type to obtain the first identification object.
Specifically, to improve the success rate of digit recognition, only the text in a specific region may be recognized. A first identification object may be detected in the detection image to obtain an identification object image; the object image may then be processed with a preset image processing method (for example, the Leptonica library), such as denoising and color processing; optical character recognition (OCR, for example with the Tesseract engine) is performed on the processed object image; the recognized identification object is checked for text validity against a preset rule; and when the text validity is verified, the identification object is converted into a preset character type, which may be a numeric type, to obtain the first identification object.
In an embodiment of the application, the frame identifier in each image frame of the test video data is generated in advance and stamped onto the image frame. To maximize the success rate of the optical character recognition, screenshots of the region where the frame identifier is located in the test images can be used as training samples for the optical character recognition. Because the digit fonts are fixed and few in number, the accuracy of the OCR recognition can be greatly improved; in practice, the recognition error rate can be below 10^-6.
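The sketch below illustrates this recognition flow with the pytesseract binding, restricting Tesseract to digits and to the region assumed to hold the identifier; the crop coordinates and the validity rule are illustrative assumptions, not parameters taken from the patent.

```python
import re
from typing import Optional

import cv2
import pytesseract

def read_frame_identifier(detection_image_path: str,
                          region=(0, 0, 200, 80)) -> Optional[int]:
    """Crop the area assumed to hold the frame identifier, clean it up,
    run digit-only OCR, and validate the text before converting to an int."""
    img = cv2.imread(detection_image_path)
    x, y, w, h = region
    roi = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    _, roi = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(
        roi, config="--psm 7 -c tessedit_char_whitelist=0123456789").strip()
    # Text validity check: a non-empty run of digits of plausible length.
    if re.fullmatch(r"\d{1,6}", text):
        return int(text)  # convert to the preset (numeric) character type
    return None
```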
Step S202, determining a test image corresponding to the detection image in the test video data according to the first identification object;
in the embodiment of the application, a frame identifier is arranged at the designated position of each image frame in the test video data to uniquely identify the image frame, and the detection image is generated by detecting the playing effect of the test video data (such as screenshot), so that the normal detection image necessarily comprises a first identification object which is the same as the frame identifier.
That is, the frame identifier that is the same as the first identification object can be determined, and then the test image carrying that frame identifier can be looked up in the test video data as the test image corresponding to the detection image.
Step S203, determining the similarity between the detection image and the test image;
In one embodiment of the present application, the structural similarity between the detection image and the test image may be determined; if the structural similarity is greater than a preset similarity threshold, it is determined that the similarity between the detection image and the test image is greater than or equal to the first threshold; otherwise, it is determined that the similarity between the detection image and the test image is less than the first threshold.
Specifically, the structural similarity between the detection image and the test image may be determined by a Structural Similarity Index Measure (SSIM).
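A minimal SSIM comparison sketch using scikit-image; the similarity threshold here is illustrative only and is not a value specified by the patent.

```python
import cv2
from skimage.metrics import structural_similarity

def passes_ssim(detection_path: str, test_path: str,
                sim_threshold: float = 0.85) -> bool:
    """Return True when the structural similarity between the detection image
    and its corresponding test image exceeds the preset similarity threshold."""
    det = cv2.imread(detection_path, cv2.IMREAD_GRAYSCALE)
    ref = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)
    if det.shape != ref.shape:  # screenshots may differ in resolution
        det = cv2.resize(det, (ref.shape[1], ref.shape[0]))
    score = structural_similarity(det, ref)
    return score > sim_threshold
```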
In another embodiment of the present application, a perceptual hash value between the detected image and the test image may be determined; determining that the similarity between the detected image and the test image is greater than or equal to a first threshold value when the perceptual hash value is smaller than a preset perceptual hash threshold value; otherwise, determining that the similarity between the detection image and the test image is less than a first threshold.
Specifically, the Perceptual hash value between the detection image and the test image is determined by a Perceptual hash algorithm (PH).
In another embodiment of the present application, the structural similarity between the detection image and the test image is determined; if the structural similarity is greater than a preset similarity threshold, a perceptual hash value between the detection image and the test image is determined; when the perceptual hash value is smaller than a preset perceptual hash threshold, it is determined that the similarity between the detection image and the test image is greater than or equal to the first threshold; otherwise, it is determined that the similarity between the detection image and the test image is less than the first threshold.
Specifically, the structural similarity between the detection image and the test image may be determined by a Structural Similarity Index (SSIM) algorithm, and the Perceptual hash value between the detection image and the test image may be determined by a Perceptual hash algorithm (PH).
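The combined embodiment can be sketched as a cascade: only images that clear the SSIM check are also checked with a perceptual hash. The sketch uses the imagehash library's phash and treats a small hash distance as a small perceptual hash value; both thresholds are assumptions for illustration.

```python
import cv2
import imagehash
from PIL import Image
from skimage.metrics import structural_similarity

def similarity_at_least_first_threshold(detection_path: str, test_path: str,
                                        ssim_threshold: float = 0.85,
                                        phash_threshold: int = 8) -> bool:
    """Cascade check mirroring the combined embodiment: SSIM first, then
    perceptual-hash distance; both must pass for the similarity to count as
    greater than or equal to the first threshold."""
    det = cv2.imread(detection_path, cv2.IMREAD_GRAYSCALE)
    ref = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)
    if det.shape != ref.shape:
        det = cv2.resize(det, (ref.shape[1], ref.shape[0]))
    if structural_similarity(det, ref) <= ssim_threshold:
        return False
    # Perceptual hash distance: smaller means more similar.
    d_hash = imagehash.phash(Image.open(detection_path))
    r_hash = imagehash.phash(Image.open(test_path))
    return (d_hash - r_hash) < phash_threshold
```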
And step S204, when the similarity is greater than or equal to a first threshold value, determining that the detection image passes the identification.
When the similarity is smaller than the first threshold, it is determined that the detection image does not pass the identification.
According to the embodiment of the application, whether the detected image passes the identification can be automatically determined based on the similarity, the decoding performance of the terminal equipment can be further determined, and the detection precision and accuracy can be improved through image similarity judgment.
In another embodiment of the present application, performing difference identification on a detection image in the image detection data and the corresponding test image in the test video data to obtain an identification result further includes:
step S301, acquiring an image shaking range of the detection image;
In this step, when the similarity between the detection image and the test image is greater than the first threshold, the image shake range of the detection image may be determined, specifically the shake range of the first identification object in the detection image. For example, the shake range may be determined by the following formula:
k = abs(sampling time of the first identification object (seconds) × frame rate − frame identifier)
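Under this reading of the formula (a reconstruction of the garbled original, so an assumption), the shake range compares the frame index implied by the sampling moment with the frame identifier actually recognized in the screenshot; the sketch below reflects that reading.

```python
def image_shake_range(sampling_time_s: float,
                      frame_rate: float,
                      recognized_frame_id: int) -> float:
    """k = abs(sampling time of the first identification object (s) * frame rate
    - frame identifier). A small k means the recognized frame is close to the
    frame that should be on screen at that moment."""
    return abs(sampling_time_s * frame_rate - recognized_frame_id)

# Example: at 2.0 s into playback of a 25 fps test stream, frame 49 is on
# screen; the shake range is |2.0 * 25 - 49| = 1 frame.
assert image_shake_range(2.0, 25.0, 49) == 1.0
```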
Step S302, when the image shake range is smaller than or equal to a preset second threshold, determining that the detection image passes the identification.
According to the method and the device, whether the detected image passes the identification or not can be automatically determined based on the image shaking range, the decoding performance of the terminal equipment can be further determined, and the detection precision and accuracy can be improved through image shaking range judgment.
In an embodiment of the application, when the similarity is smaller than a first threshold, an image shaking range of the detected image may be obtained, and when the image shaking range is smaller than or equal to a preset second threshold, it may be determined that the detected image passes authentication.
According to the embodiment of the application, whether the detected image passes the identification is determined by firstly judging the similarity and then judging the image shaking range, so that the decoding performance of the terminal equipment can be determined, and the detection precision and accuracy are improved by judging the threshold value for multiple times.
In yet another embodiment of the present application, the method further comprises:
step S401, obtaining device description data from the terminal device;
in this embodiment, the device description data may refer to relevant information of the terminal device itself, for example: device vendor/brand/type/version, etc., as embodiments of the present invention are not limited in this respect.
Step S402, determining the equipment type corresponding to each image detection data based on the equipment description data;
since the device description data is the relevant information of the terminal device itself, and the device description data of different terminal devices are different, the terminal devices can be classified according to the device description data, and further, the image detection data is classified according to the device class to which each terminal device belongs, so as to obtain the device class corresponding to each image detection data.
Step S403, determining, based on the image detection data corresponding to each device type, decoding performance of the terminal device corresponding to each device type.
After obtaining the image detection data corresponding to each device class, the BI data server may determine, for the image detection data corresponding to each device class, the decoding performance of the terminal device corresponding to each device class, so as to determine whether the terminal device of each device class can normally play the test video data, and further determine whether the terminal device of each device class has the decoding performance to be detected for the test video data.
In an embodiment of the present invention, when the BI data server determines that the terminal devices of a certain device class have (or do not have) the decoding performance to be detected for the test video data, it may dynamically generate a white list of the device classes whose terminal devices have the decoding performance to be detected. When video data is subsequently formally delivered to the terminal devices, the delivery can be based on the decoding performance of each terminal device recorded in the white list, which avoids delivering video data that cannot be played normally to a terminal device.
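A minimal sketch of how the BI data server side might aggregate per-device results into such a white list, applying the same two thresholds as the earlier decision sketch; the grouping key, record layout, and threshold values are illustrative assumptions rather than details taken from the patent.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# (device_class, per-image identification results, number of test images) per
# uploaded sequence; device_class is derived from the device description data
# (vendor/type/version and so on).
Record = Tuple[str, List[bool], int]

def build_whitelist(records: List[Record],
                    detect_threshold: float = 0.95,
                    pass_threshold: float = 0.99) -> List[str]:
    """Group identification results by device class and keep the classes whose
    image detection success rate and identification pass rate both clear their
    thresholds."""
    per_class: Dict[str, List[Tuple[List[bool], int]]] = defaultdict(list)
    for device_class, flags, n_test in records:
        per_class[device_class].append((flags, n_test))

    whitelist = []
    for device_class, seqs in per_class.items():
        n_detect = sum(len(flags) for flags, _ in seqs)
        n_test = sum(n for _, n in seqs)
        n_pass = sum(sum(flags) for flags, _ in seqs)
        if (n_test and n_detect / n_test > detect_threshold
                and n_detect and n_pass / n_detect > pass_threshold):
            whitelist.append(device_class)
    return whitelist
```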
In another embodiment of the present application, the step S403 determines, based on the image detection data corresponding to each device class, decoding performance of the terminal device corresponding to each device class, including:
step S501, determining a detection image sequence according to the image detection data, and determining the equipment category of each detection image in the detection image sequence;
step S502, based on the identification result of the detection image corresponding to each equipment type, the decoding performance of the terminal equipment of the equipment type is determined.
In one embodiment of the present application, the step S502 of determining the decoding performance of the terminal device of the device class based on the identification result of each of the detection images includes:
step S601, determining an identification passing rate corresponding to each equipment type based on the identification result of the detection image corresponding to each equipment type;
In the embodiment of the present application, the identification pass rate is determined according to the ratio between the number of detection images that pass the identification and the number of all detection images; that is, identification pass rate = number of detection images that pass identification / number of all detection images.
Step S602, if the identification passing rate of the detected image in the image detection data corresponding to any equipment type exceeds a preset passing rate threshold, determining that the decoding performance of the terminal equipment of the equipment type supports a preset encoding mode;
otherwise, step S603 determines that the decoding performance of the terminal device of the device class does not support the preset encoding mode.
In the embodiment of the present application, the identification pass rate may refer to the proportion of detection images that pass the identification among all the detection images; for example, the identification pass rate may be 99%.
That is, in all image detection data corresponding to any device type, if the identification pass rate of the detected images in all the image detection data exceeds the preset pass rate threshold, it is determined that the decoding performance of the terminal device of the device type supports the preset encoding mode, otherwise, it is determined that the decoding performance of the terminal device of the device type does not support the preset encoding mode.
In the embodiment of the application, the decoding performance of the terminal equipment in each equipment category can be determined through the qualification passing rate, the method is simple, and the efficiency of detecting the decoding performance of the terminal equipment is improved.
In one embodiment of the present application, the step S502 of determining the decoding performance of the terminal device of the device class based on the identification result of each of the detection images includes:
step S701, determining an image detection success rate corresponding to each device type.
In the embodiment of the present application, the image detection success rate is determined according to a ratio between the number of the detection images and the number of the test images, that is, image detection success rate = number of detection images/number of test images.
Step S702, if the image detection success rate is greater than a preset detection threshold, determining the identification passing rate corresponding to each equipment type.
In the embodiment of the present application, the authentication pass rate is determined according to a ratio between the number of the detected images that pass the authentication and the number of all the detected images, that is, the authentication pass rate = the number of detected images that pass the authentication/the number of all the detected images.
Step S703, if the qualification passing rate is greater than a preset qualification threshold, determining that the decoding performance of the terminal equipment of the equipment type supports a preset coding mode;
otherwise, step S704 determines that the decoding performance of the terminal device of the device type does not support the preset coding mode.
According to the method and the device, the decoding performance of the terminal devices in each device type is determined according to the image detection success rate and the identification pass rate corresponding to that device type, and the detection precision and accuracy are improved through multiple threshold judgments.
In another embodiment of the present application, as shown in fig. 3, there is further provided a device decoding performance determining apparatus, including:
a first obtaining module 11, configured to obtain image detection data from a terminal device, where the image detection data is obtained by detecting, by the terminal device, a playing effect of test video data;
an identification module 12, configured to perform difference identification on a detection image in the image detection data and a corresponding test image in the test video data to obtain an identification result, where a coding format of the detection image is different from a coding format of the test image;
a first determining module 13, configured to determine, based on the authentication result, a decoding performance of the terminal device.
Optionally, the authentication module comprises:
a recognition unit configured to recognize a first identification object in the detection image;
a first determining unit, configured to determine, in the test video data, a test image corresponding to the detection image according to the first identification object;
a second determination unit configured to determine a similarity between the detection image and the test image;
and the third determining unit is used for determining that the detection image passes the identification when the similarity is greater than or equal to a first threshold value.
Optionally, the second determining unit includes:
a first determining subunit, configured to determine a structural similarity between the detection image and the test image;
a second determining subunit, configured to determine that, if the structural similarity is greater than a preset similarity threshold, a similarity between the detected image and the test image is greater than or equal to a first threshold; otherwise, determining that the similarity between the detection image and the test image is less than a first threshold.
Optionally, the second determining unit includes:
a third determining subunit, configured to determine a perceptual hash value between the detection image and the test image;
a fourth determining subunit, configured to determine that the similarity between the detected image and the test image is greater than or equal to a first threshold value if the perceptual hash value is smaller than a preset perceptual hash threshold value; otherwise, determining that the similarity between the detection image and the test image is less than a first threshold.
Optionally, the second determining unit includes:
a fifth determining subunit, configured to determine structural similarity between the detection image and the test image;
a sixth determining subunit, configured to determine a perceptual hash value between the detected image and the test image if the structural similarity is greater than a preset similarity threshold;
a seventh determining subunit, configured to determine that, if the perceptual hash value is smaller than a preset perceptual hash threshold, a similarity between the detected image and the test image is greater than or equal to a first threshold; otherwise, determining that the similarity between the detection image and the test image is less than a first threshold.
Optionally, the identification unit includes:
a detection subunit, configured to detect, in the detection image, a target region including an identification object;
the identification subunit is used for identifying the identification object in the target area;
the verification subunit is used for verifying the text validity of the identification object;
and the conversion subunit is used for converting the identification object into a preset character type to obtain the first identification object if the text validity of the identification object passes the verification.
Optionally, the authentication module further comprises:
an acquisition unit configured to acquire an image shake range of the detection image;
and the fourth determining unit is used for determining that the detected image passes the authentication when the image shaking range is smaller than or equal to a preset second threshold value.
Optionally, the apparatus further comprises:
a second obtaining module, configured to obtain device description data from the terminal device;
the second determining module is used for determining the equipment category corresponding to each image detection data based on the equipment description data;
and a third determining module, configured to determine, based on the image detection data corresponding to each device type, a decoding performance of the terminal device corresponding to each device type.
Optionally, the third determining module includes:
a fifth determining unit, configured to determine, for image detection data corresponding to each device type, a detection image sequence according to the image detection data, and determine a device type to which each of the detection images in the detection image sequence belongs;
a sixth determining unit configured to determine, based on the result of the identification of the detection image corresponding to each device class, decoding performance of the terminal device of the device class.
Optionally, the third determining module includes:
a seventh determining unit configured to determine an authentication pass rate corresponding to each device category based on an authentication result of the detection images corresponding to each device category, the authentication pass rate being determined according to a ratio between the number of the detection images that have passed the authentication and the number of all the detection images;
an eighth determining unit, configured to determine that, if the authentication passing rate corresponding to any device class exceeds a preset passing rate threshold, the decoding performance of the terminal device of the device class supports a preset encoding mode; otherwise, determining that the decoding performance of the terminal equipment of the equipment type does not support the preset coding mode.
Optionally, the third determining module includes:
a ninth determining unit, configured to determine the image detection success rate corresponding to each device category, the image detection success rate being the ratio of the number of detection images to the number of test images;
a tenth determining unit, configured to determine, if the image detection success rate is greater than a preset detection threshold, the identification pass rate corresponding to each device category, the identification pass rate being the ratio of the number of detection images that passed the identification to the number of all detection images;
an eleventh determining unit, configured to determine that the decoding performance of the terminal device of the device category supports a preset encoding mode if the identification pass rate is greater than a preset pass rate threshold, and that it does not support the preset encoding mode otherwise, as in the two-stage sketch below.
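The two-stage variant can be sketched as follows; again, both thresholds are example values rather than values from this application.

```python
# Two-stage sketch: first require that enough detection images were captured,
# then require a sufficient identification pass rate. Thresholds are examples.
def decoding_supports_encoding_mode(num_detection_images: int,
                                    num_test_images: int,
                                    num_passed_images: int,
                                    detection_threshold: float = 0.8,
                                    pass_rate_threshold: float = 0.9) -> bool:
    if num_test_images == 0 or num_detection_images == 0:
        return False
    detection_success_rate = num_detection_images / num_test_images
    if detection_success_rate <= detection_threshold:
        return False
    identification_pass_rate = num_passed_images / num_detection_images
    return identification_pass_rate > pass_rate_threshold
```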
In another embodiment of the present application, an electronic device is also provided, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
and a processor, configured to implement the method for determining decoding performance of a device in any one of the method embodiments when executing the program stored in the memory.
In the electronic device provided by this embodiment of the invention, the processor executes the program stored in the memory so as to first obtain image detection data and device description data from each terminal device, then determine the device category corresponding to each piece of image detection data based on the device description data, and finally determine automatically, based on the image detection data corresponding to each device category, the decoding performance of the terminal devices of that category. This improves the efficiency of detecting the decoding performance of terminal devices and addresses the problem of testing the H.265 playback capability of a large number of long-tail OTT devices. It substantially raises the proportion of H.265 streams played on OTT devices, which markedly reduces the company's network bandwidth cost and effectively improves playback smoothness for users with low access bandwidth.
The communication bus 1140 mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 1140 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 4, but this does not mean that there is only one bus or only one type of bus.
The communication interface 1120 is used for communication between the electronic device and other devices.
The memory 1130 may include a Random Access Memory (RAM), and may also include a non-volatile memory, such as at least one magnetic disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor 1110 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present application, a computer-readable storage medium is further provided, on which a program of the method for determining decoding performance of a device is stored; when executed by a processor, the program implements the steps of the method for determining decoding performance of a device described in any of the method embodiments.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
The above description is merely illustrative of particular embodiments of the invention that enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (14)

1. A method for determining decoding performance of a device, comprising:
acquiring image detection data from terminal equipment, wherein the image detection data is obtained by detecting the playing effect of test video data by the terminal equipment;
performing difference identification on a detection image in the image detection data and a corresponding test image in the test video data to obtain an identification result, wherein the coding format of the detection image is different from that of the test image;
determining the decoding performance of the terminal device based on the identification result.
2. The method of claim 1, wherein performing difference identification on the detection image in the image detection data and the corresponding test image in the test video data to obtain an identification result comprises:
identifying a first identification object in the detection image;
determining, in the test video data, the test image corresponding to the detection image according to the first identification object;
determining a similarity between the detection image and the test image;
and when the similarity is greater than or equal to a first threshold, determining that the detection image passes the identification.
3. The method of claim 2, wherein determining the similarity between the detection image and the test image comprises:
determining a structural similarity between the detection image and the test image;
if the structural similarity is greater than a preset similarity threshold, determining that the similarity between the detection image and the test image is greater than or equal to the first threshold;
otherwise, determining that the similarity between the detection image and the test image is less than the first threshold.
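For readers unfamiliar with structural similarity, a minimal illustrative sketch of the check in claim 3 is given below; the use of scikit-image and the 0.85 similarity threshold are assumptions, not part of the claim.

```python
# Illustrative-only sketch of the structural-similarity check; the 0.85
# threshold and the scikit-image implementation are assumptions.
import cv2
from skimage.metrics import structural_similarity

def is_similar_by_ssim(detection_path: str, test_path: str,
                       similarity_threshold: float = 0.85) -> bool:
    detection = cv2.imread(detection_path, cv2.IMREAD_GRAYSCALE)
    test = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)
    test = cv2.resize(test, (detection.shape[1], detection.shape[0]))  # align sizes before comparing
    score = structural_similarity(detection, test)
    return score > similarity_threshold
```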
4. The method of claim 2, wherein determining the similarity between the detection image and the test image comprises:
determining a perceptual hash value between the detection image and the test image;
if the perceptual hash value is smaller than a preset perceptual hash threshold, determining that the similarity between the detection image and the test image is greater than or equal to the first threshold;
otherwise, determining that the similarity between the detection image and the test image is less than the first threshold.
5. The method of claim 2, wherein determining the similarity between the detection image and the test image comprises:
determining a structural similarity between the detection image and the test image;
if the structural similarity is greater than a preset similarity threshold, determining a perceptual hash value between the detection image and the test image;
determining that the similarity between the detection image and the test image is greater than or equal to the first threshold when the perceptual hash value is smaller than a preset perceptual hash threshold;
otherwise, determining that the similarity between the detection image and the test image is less than the first threshold.
6. The method of claim 2, wherein identifying the first identification object in the detection image comprises:
detecting, in the detection image, a target region containing an identification object;
identifying the identification object in the target region;
verifying the text validity of the identification object;
and if the text validity of the identification object passes the verification, converting the identification object into a preset character type to obtain the first identification object.
7. The method of claim 1, wherein performing difference identification on the detection image in the image detection data and the corresponding test image in the test video data to obtain an identification result further comprises:
acquiring an image shake range of the detection image;
and when the image shake range is smaller than or equal to a preset second threshold, determining that the detection image passes the identification.
8. The method of claim 1, further comprising:
acquiring device description data from the terminal device;
determining the device category corresponding to each piece of image detection data based on the device description data;
and determining, based on the image detection data corresponding to each device category, the decoding performance of the terminal device corresponding to that device category.
9. The method according to claim 8, wherein determining, based on the image detection data corresponding to each device category, the decoding performance of the terminal device corresponding to that device category comprises:
determining, for the image detection data corresponding to each device category, a detection image sequence from the image detection data, and determining the device category to which each detection image in the detection image sequence belongs;
and determining, based on the identification results of the detection images corresponding to each device category, the decoding performance of the terminal device of that device category.
10. The method according to claim 9, wherein determining, based on the identification results of the detection images corresponding to each device category, the decoding performance of the terminal device of that device category comprises:
determining the identification pass rate corresponding to each device category based on the identification results of the detection images for that category, the identification pass rate being the ratio of the number of detection images that passed the identification to the number of all detection images;
if the identification pass rate corresponding to any device category exceeds a preset pass rate threshold, determining that the decoding performance of the terminal device of that device category supports a preset encoding mode;
otherwise, determining that the decoding performance of the terminal device of that device category does not support the preset encoding mode.
11. The method according to claim 9, wherein determining, based on the identification results of the detection images corresponding to each device category, the decoding performance of the terminal device of that device category comprises:
determining the image detection success rate corresponding to each device category, the image detection success rate being the ratio of the number of detection images to the number of test images;
if the image detection success rate is greater than a preset detection threshold, determining the identification pass rate corresponding to each device category, the identification pass rate being the ratio of the number of detection images that passed the identification to the number of all detection images;
if the identification pass rate is greater than a preset pass rate threshold, determining that the decoding performance of the terminal device of that device category supports a preset encoding mode;
otherwise, determining that the decoding performance of the terminal device of that device category does not support the preset encoding mode.
12. An apparatus for determining decoding performance of a device, comprising:
a first acquisition module, configured to acquire image detection data from a terminal device, wherein the image detection data is obtained by the terminal device detecting the playing effect of test video data;
an identification module, configured to perform difference identification on a detection image in the image detection data and a corresponding test image in the test video data to obtain an identification result, wherein the coding format of the detection image is different from that of the test image;
and a first determining module, configured to determine the decoding performance of the terminal device based on the identification result.
13. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the method for determining decoding performance of a device according to any one of claims 1 to 11 when executing a program stored in the memory.
14. A computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the steps of the method for determining decoding performance of a device according to any one of claims 1 to 11.
CN202210907125.7A 2022-07-29 2022-07-29 Method and device for determining decoding performance of equipment, electronic equipment and storage medium Pending CN115396661A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210907125.7A CN115396661A (en) 2022-07-29 2022-07-29 Method and device for determining decoding performance of equipment, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210907125.7A CN115396661A (en) 2022-07-29 2022-07-29 Method and device for determining decoding performance of equipment, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115396661A true CN115396661A (en) 2022-11-25

Family

ID=84119177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210907125.7A Pending CN115396661A (en) 2022-07-29 2022-07-29 Method and device for determining decoding performance of equipment, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115396661A (en)

Similar Documents

Publication Publication Date Title
US9076071B2 (en) Logo recognition
CN113613075A (en) Video recommendation method and device and cloud server
CN115396705A (en) Screen projection operation verification method, platform and system
CN110248235B (en) Software teaching method, device, terminal equipment and medium
CN114840286B (en) Service processing method and server based on big data
CN108696713B (en) Code stream safety test method, device and test equipment
CN110324707B (en) Video playing time consumption testing method and device
CN115396661A (en) Method and device for determining decoding performance of equipment, electronic equipment and storage medium
CN112560552A (en) Video classification method and device
CN115860827A (en) Mobile terminal advertisement testing method and system
CN110381308B (en) System for testing live video processing effect
CN113068021B (en) Delay testing method, device, equipment and storage medium
CN115761567A (en) Video processing method and device, electronic equipment and computer readable storage medium
CN113923443A (en) Network video recorder testing method and device and computer readable storage medium
CN110958448B (en) Video quality evaluation method, device, medium and terminal
CN113742152A (en) Screen projection test method, device, equipment and storage medium
CN114630134B (en) Processing method and system for newly added code stream
CN113297065A (en) Data processing method, game-based processing method and device and electronic equipment
CN110740347B (en) Video content detection system, method, device, server and storage medium
CN113905272B (en) Control method of set top box, electronic equipment and storage medium
CN114025240B (en) Method and device for determining television equipment capability, storage medium and electronic device
CN117475013B (en) Computer equipment and video data processing method
CN114996143A (en) Pressure testing method, device, equipment and storage medium
CN113923450A (en) Automatic image detection method, device, equipment and storage medium
CN115249309A (en) Visual verification method and system for multimedia file

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination