CN117076226B - Graphics system rendering correctness verification method based on image texture difference - Google Patents


Info

Publication number
CN117076226B
CN117076226B (application CN202311335208.4A)
Authority
CN
China
Prior art keywords
rendering
frame
difference
value
output result
Prior art date
Legal status
Active
Application number
CN202311335208.4A
Other languages
Chinese (zh)
Other versions
CN117076226A (en)
Inventor
Zhou Shunqi (周顺奇)
Wen Yan (温研)
Feng Youpeng (冯酉鹏)
Yang Lingyun (杨凌云)
Dong Shuo (董硕)
Current Assignee
Beijing Linzhuo Information Technology Co Ltd
Original Assignee
Beijing Linzhuo Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Linzhuo Information Technology Co Ltd filed Critical Beijing Linzhuo Information Technology Co Ltd
Priority to CN202311335208.4A priority Critical patent/CN117076226B/en
Publication of CN117076226A publication Critical patent/CN117076226A/en
Application granted granted Critical
Publication of CN117076226B publication Critical patent/CN117076226B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/26Functional testing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2205Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using arrangements specific to the hardware being tested
    • G06F11/2221Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using arrangements specific to the hardware being tested to test input/output devices or peripheral units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/22Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2273Test methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a method for verifying the rendering correctness of a graphics system based on image texture differences. The method executes the same rendering test program on the same platform with both a graphics card under test and a reference graphics card to obtain rendering-result video files with identical frame counts, then compares the two files frame by frame, judging whether differences exist, whether textures are present, and whether differences appear within compressed textures, to obtain per-frame output results and, from these, a file output result. The correctness of the rendering process is judged by whether the file output result exceeds a threshold. The method eliminates errors introduced by the decompression of compressed textures, reduces the influence of environmental factors such as the platform, and effectively improves the accuracy of rendering correctness verification.

Description

Graphics system rendering correctness verification method based on image texture difference
Technical Field
The invention belongs to the technical field of computer application development, and particularly relates to a graphic system rendering correctness verification method based on image texture differences.
Background
When performing a graphics system rendering correctness verification test, the basic approach is to compare rendering results: the rendering result of the graphics card under test is compared with a standard rendering result, and if the difference is small or absent, the rendering is correct.
To verify rendering correctness, rendering results are saved as lossless video files, and the output video files produced by different graphics cards in the same scene are compared. Comparing two video files with an appropriate algorithm requires that they contain the same number of frames: when the frame counts of the two videos differ, the rendering results may diverge widely and the videos lose comparability. In addition, the image and video evaluation algorithms currently available are PSNR, SSIM, and VMAF. PSNR and SSIM are based on objective mathematical models, while VMAF is based on the subjective perception of video quality by the human eye; each method has limitations and cannot provide an accurate measure of rendering correctness.
In summary, completing a rendering correctness verification test faces two problems: first, frame-aligned video files cannot be guaranteed; second, a rendering correctness result cannot be computed accurately.
Disclosure of Invention
In view of the above, the invention provides a graphics system rendering correctness verification method based on image texture differences, which realizes accurate verification of graphics system rendering correctness.
The invention provides a graphics system rendering correctness verification method based on image texture differences, which comprises the following steps:
step 1, writing a rendering test program according to the determined graphics card under test, reference graphics card, and test scene, and setting the rendering mode of the rendering test program to fixed frame rate rendering;
step 2, executing the rendering test program in the set test scene with the tested graphics card and the reference graphics card respectively to obtain rendering results, and storing the fixed-frame-count rendering results as a tested video file and a reference video file respectively;
step 3, comparing and analyzing the tested video file and the reference video file frame by frame to obtain the calculated result values under the different difference cases as frame output results;
step 4, repeating step 3 to complete the frame-by-frame comparative analysis of the tested video file and the reference video file, obtaining the frame output results of all frames, and taking the average of the frame output results as the file output result of the comparison between the tested video file and the reference video file; if the file output result is larger than the threshold, the graphics system rendering is considered incorrect, otherwise it is considered correct.
Further, the reference graphics card is an NVIDIA graphics card.
Further, in step 1, the rendering mode of the rendering test program is set to fixed frame rate rendering as follows: implemented using a graphics API such as OpenGL, DirectX, or Metal.
Further, in step 2, the rendering results with a fixed number of frames are stored as the tested video file and the reference video file respectively as follows: a development library provided by the graphics card vendor or the operating system converts the image sequence generated during rendering into a video file.
Further, in step 3, the tested video file and the reference video file are compared and analyzed frame by frame, and the calculated result values under the different difference cases are obtained as frame output results as follows, with the initial values of the output results E1 to E11 all set to 0:
step 3.1, if neither the tested frame nor the reference frame contains textures, setting the value of output result E1 to a set value and then executing step 3.2, otherwise executing step 3.3;
step 3.2, if a difference exists between the tested frame and the reference frame and the difference is larger than a threshold, taking the ratio of differing bytes to total bytes in the frame as the value of output result E2 and executing step 3.6; if no difference exists or the difference is not larger than the threshold, leaving the value of output result E3 unchanged and executing step 3.6;
step 3.3, if no difference exists between the tested frame and the reference frame, leaving the value of output result E4 unchanged and executing step 3.6; otherwise, scaling the tested frame and the reference frame several times according to set scaling ratios to obtain several scaled tested frames and scaled reference frames, and comparing the difference points between the scaled tested frame and the scaled reference frame at the same scale: when a difference point lies inside the region where the texture is located, executing step 3.4; when it lies outside that region and the difference is larger than a threshold, taking the difference ratio as the value of output result E5 and executing step 3.4; when it lies outside that region and the difference is not larger than the threshold, leaving the value of output result E6 unchanged and executing step 3.4;
step 3.4, if the texture is a compressed texture, executing step 3.5; otherwise, when the difference is larger than the threshold, taking the difference ratio as the value of output result E7 and executing step 3.6, and when the difference is not larger than the threshold, leaving the value of output result E8 unchanged and executing step 3.6;
step 3.5, if the difference points are caused by a decompression operation, taking the difference ratio as the value of output result E9 when the difference is larger than a threshold, and leaving the value of output result E10 unchanged when it is not; if the difference points are not caused by a decompression operation, setting the value of output result E11 to a set maximum value;
step 3.6, computing the frame output result of each frame from all output results, the value of the frame output result being the set maximum value minus the sum of the current values of all output results, and then resetting all output results to 0.
Further, the set maximum value is 100.
Further, whether a difference exists between the tested frame and the reference frame is judged based on structural similarity.
Further, the texture type in step 3.4 is determined as follows: perform a Fourier transform on the image containing the texture to obtain its frequency-domain representation, a set of complex numbers in which each complex number represents one frequency component; compute the magnitude spectrum of the frequency-domain coefficients from the modulus of each complex number, and square the magnitudes to obtain the energy of each coefficient. If the energy is uniformly distributed, the texture is an uncompressed texture; otherwise it is a compressed texture.
Further, whether a difference point in step 3.5 is caused by a decompression operation is judged as follows: if the difference points are continuous, they are caused by the decompression operation; otherwise they are caused by a rendering error.
Advantageous effects
According to the invention, rendering-result video files with identical frame counts are obtained by executing the same rendering test program on the same platform with the graphics card under test and a reference graphics card. The files are compared frame by frame, judging whether differences exist, whether textures are present, and whether differences appear within compressed textures, to obtain a file output result; the correctness of the rendering process is then judged by whether the file output result exceeds a threshold. This eliminates errors introduced by the decompression of compressed textures, reduces the influence of environmental factors such as the platform, and effectively improves the accuracy of rendering correctness verification.
Drawings
Fig. 1 is a schematic diagram of a video file contrast analysis flow chart of a graphics system rendering correctness verification method based on image texture differences.
Detailed Description
The present invention will be described in detail with reference to the following examples.
The current approach to rendering correctness verification testing is to compare rendering results: the rendering result of the graphics card under test is compared with a standard rendering result, and if the difference is small or absent, the rendering is correct. An intuitive rendering test flow is therefore: first, save the rendering results as video output so they can be compared frame by frame; then compare the videos obtained by different graphics cards executing the same test program in the same scene, and judge according to the comparison result.
The invention provides a graphics system rendering correctness verification method based on image texture differences, whose core idea is as follows. The frame rate is set to a fixed value so that the rendering-result videos have the same number of frames and the rendering results are comparable; different graphics cards then render on the same platform to obtain rendering-result video files, which are compared frame by frame to reach a conclusion. During comparison, it is first judged whether the current frame contains textures, and frames with and without textures are handled separately. For frames without textures, it is judged whether a difference exists; if so, the difference is computed and compared with a threshold. For frames with textures, it is first judged whether a difference exists at all; if not, the result is computed directly. If a difference exists, it is judged whether the difference originates from the texture: when the difference lies outside the texture, the result is obtained by computation and threshold comparison; when it lies inside the texture, it must be determined whether the texture is a compressed texture, and compressed and uncompressed textures are each processed by computation and threshold comparison to obtain the corresponding results. Finally, the difference calculation results are combined to obtain an overall correctness conclusion for the rendering result.
The invention provides a graphic system rendering correctness verification method based on image texture differences, which specifically comprises the following steps:
Step 1: determine the graphics card under test, the reference graphics card, and the test scene; write a rendering test program and set its rendering mode to fixed frame rate rendering. The reference graphics card is a card used as the reference standard, for example an NVIDIA graphics card.
Rendering with the tested graphics card and the reference graphics card in the same test scene and then comparing the rendering results eliminates the influence of platform differences on the rendering results and ensures the validity of the rendering correctness verification.
If the same rendering test program renders different numbers of frames on the two cards, the rendering results may diverge widely and lose comparability; therefore the rendering correctness of a graphics card can only be verified by comparing test-program results with the same number of frames.
Fixed frame rate rendering can be achieved using graphics APIs such as OpenGL, DirectX, and Metal. These three APIs respectively provide cross-platform support, low-level hardware access, and Apple device support, and their implementation flows differ. The invention fixes the frame rate of the rendering process using any of these three graphics APIs, thereby obtaining rendering results with the same number of frames.
In the invention, the graphics card under test and the reference graphics card each execute the same rendering test program on the same operating platform under the same test scene. This guarantees platform consistency and an identical total frame count in the rendering results, which is an essential basis for the subsequent rendering correctness verification.
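Fixed frame rate rendering is what guarantees equal frame counts on both cards. The patent implements it through OpenGL, DirectX, or Metal; as a language-neutral illustration only, the sketch below paces an arbitrary render callback to a fixed rate (the callback and timing scheme are placeholders, not taken from the patent):

```python
import time

def render_fixed_fps(render_frame, total_frames, fps):
    """Render exactly total_frames frames, pacing each one to a 1/fps time slot."""
    frame_period = 1.0 / fps
    frames = []
    next_deadline = time.monotonic()
    for i in range(total_frames):
        frames.append(render_frame(i))      # produce frame i
        next_deadline += frame_period       # end of this frame's time slot
        delay = next_deadline - time.monotonic()
        if delay > 0:                       # sleep off any remaining slot time
            time.sleep(delay)
    return frames
```

Because the loop counts frames rather than elapsed time, two runs of the same test program always yield the same frame count, which is the property the comparison in step 3 depends on.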
Step 2: execute the rendering test program in the set test scene with the tested graphics card and the reference graphics card respectively to obtain rendering results, and store the fixed-frame-count rendering results as the tested video file and the reference video file respectively.
Because each graphics card has its own hardware architecture and performance characteristics, the images generated by different cards during rendering may differ; to make comparative analysis of the rendering results efficient, the rendering results are stored as video files. Specifically, a development library provided by the graphics card vendor or the operating system converts the image sequence generated during rendering into a video file, which is convenient for subsequent playing, comparison, and analysis.
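The patent defers the encoding to a vendor- or OS-provided development library without naming one. Purely for illustration, the sketch below packs a sequence of raw RGB frames into a single lossless byte stream with an invented 12-byte header; a real implementation would use an actual lossless codec:

```python
import struct

def pack_video(frames, width, height):
    """Pack raw RGB frames (bytes of length width*height*3) into one stream."""
    for frame in frames:
        assert len(frame) == width * height * 3
    header = struct.pack("<III", len(frames), width, height)  # count + dimensions
    return header + b"".join(frames)

def unpack_video(blob):
    """Recover the frame list and dimensions written by pack_video."""
    count, width, height = struct.unpack("<III", blob[:12])
    size = width * height * 3
    frames = [blob[12 + i * size : 12 + (i + 1) * size] for i in range(count)]
    return frames, width, height
```

Storing the frame count in the header makes the frame-alignment check of step 3 trivial: two files are comparable only when their counts match.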
Step 3: compare and analyze the tested video file and the reference video file frame by frame; the comparison flow is shown in Fig. 1, and the calculated result values under the different difference cases are obtained as the frame output results. The initial values of the output results E1 to E11 are all 0, and the per-frame comparison of the tested video file and the reference video file comprises the following steps:
and 3.1, if the detected frame and the reference frame do not contain textures, setting the value of the output result E1 as a set value, and then executing the step 3.2, otherwise, executing the step 3.3. The measured frame is one frame of data in the measured video file, and the reference frame is one frame of data in the reference video file.
Specifically, texture-related information in an image usually appears as high-frequency components in the frequency domain. By applying a Fourier transform and spectral analysis, the image is converted from the spatial domain to the frequency domain, and whether the image contains texture can be determined by examining the high-frequency energy in the spectrum. The set value may be chosen empirically or determined from experimental results.
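As a one-dimensional simplification of this idea (the patent operates on full images; the cutoff and threshold below are assumed values, not from the patent), the fraction of spectral energy in the high-frequency band can flag the presence of texture in a pixel row:

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform, O(n^2); adequate for short rows."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def high_freq_energy_ratio(row, cutoff=0.25):
    """Share of non-DC spectral energy lying in the middle (high-frequency) band."""
    energy = [abs(c) ** 2 for c in dft(row)]
    total = sum(energy[1:])                   # exclude the DC component
    if total < 1e-9:                          # flat row: no texture energy at all
        return 0.0
    n = len(energy)
    lo, hi = int(cutoff * n), n - int(cutoff * n)  # spectrum of real input is symmetric
    return sum(energy[lo:hi]) / total

def contains_texture(row, threshold=0.2):
    return high_freq_energy_ratio(row) > threshold
```

A constant row has no high-frequency energy and is classified as texture-free, while a rapidly alternating row concentrates its energy near the Nyquist frequency and is flagged as textured.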
Step 3.2: judge, based on structural similarity, whether a difference exists between the tested frame and the reference frame. If a difference exists and is larger than a threshold, take the ratio of differing bytes to total bytes in the frame as the value of output result E2 and execute step 3.6; if no difference exists or the difference is not larger than the threshold, set the value of output result E3 to 0 and execute step 3.6.
The existence of a difference is judged first, using structural similarity; a result is then computed for each case, whether the difference is present or absent.
Step 3.3: judge, based on structural similarity, whether a difference exists between the tested frame and the reference frame. If no difference exists, set the value of output result E4 to 0 and execute step 3.6. Otherwise, scale the tested frame and the reference frame several times according to set scaling ratios to obtain several scaled tested frames and scaled reference frames, and compare the difference points between the scaled tested frame and the scaled reference frame at the same scale. If a difference point lies inside the region where the texture is located, execute step 3.4; if it lies outside that region and the difference is larger than a threshold, take the difference ratio as the value of output result E5 and execute step 3.4; if it lies outside that region and the difference is not larger than the threshold, set the value of output result E6 to 0 and then execute step 3.4.
Step 3.4: obtain the energy distribution of the Fourier coefficients of the texture. If the energy distribution is not uniform, judge the texture to be a compressed texture and execute step 3.5; otherwise judge it to be uncompressed, and if the difference is larger than the threshold, take the difference ratio as the value of output result E7 and execute step 3.6, while if the difference is not larger than the threshold, set the value of output result E8 to 0 and execute step 3.6.
Specifically, judging whether a texture is compressed by energy analysis of its Fourier coefficients proceeds as follows: perform a Fourier transform on the image containing the texture to obtain its frequency-domain representation, a set of complex numbers in which each complex number represents one frequency component; compute the magnitude spectrum of the frequency-domain coefficients by taking the modulus of each complex number (magnitude = |complex number|), which gives the amplitude of each Fourier coefficient; square the magnitudes to obtain the energy of each frequency-domain coefficient. If the energy distribution of the frequency-domain coefficients is uniform, the texture is uncompressed; otherwise it is a compressed texture.
For compressed textures, the energy of the high-frequency Fourier coefficients may be reduced or lost, resulting in an uneven energy distribution of the Fourier coefficients. Therefore, whether a texture has undergone compression can be judged by analyzing the energy distribution of the frequency-domain coefficients, in particular the changes in the high-frequency part.
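The patent does not name a uniformity metric; one plausible choice is the coefficient of variation of the non-DC coefficient energies. The sketch below is a one-dimensional illustration with an assumed limit, not the patent's implementation:

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform, O(n^2)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def is_compressed_texture(row, cv_limit=1.0):
    """Uneven (concentrated) non-DC energy -> compressed texture, per the
    patent's criterion; uniformity is scored by the coefficient of variation."""
    energy = [abs(c) ** 2 for c in dft(row)][1:]   # drop the DC component
    mean = sum(energy) / len(energy)
    if mean < 1e-9:                                # no energy at all: treat as uniform
        return False
    var = sum((e - mean) ** 2 for e in energy) / len(energy)
    return (var ** 0.5) / mean > cv_limit
```

An impulse has a perfectly flat spectrum (coefficient of variation 0, "uncompressed"), while a signal whose energy sits in a single frequency bin scores far above the limit.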
Step 3.5: if the difference points in the region corresponding to the compressed texture are continuous, judge that they are caused by the decompression operation and perform the comparative analysis against the threshold used for uncompressed textures: when the difference is larger than the threshold, take the difference ratio as the value of output result E9; otherwise, when the difference is not larger than the threshold, set the value of output result E10 to 0. If the difference points are discontinuous, indicating that they are caused by a rendering error, set the value of output result E11 to 100.
Specifically, if the texture has undergone compression, it must be further determined whether the difference in the compressed texture is caused by the decompression operation. A difference caused by decompression should be continuous rather than discrete, because decompression restores the continuous texture that existed before compression, leaving no apparent separation between difference points. Observing whether the difference points are continuous therefore allows errors caused by the decompression operation to be excluded. In addition, since any genuine difference in a compressed texture strongly affects the current frame, and a compressed-texture difference is a disqualifying item in the correctness check, the output result for this case is set to a large value.
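A one-dimensional sketch of this continuity test (the patent's difference points are pixel positions in an image; the adjacency rule and gap size here are assumed simplifications):

```python
def is_continuous(points, max_gap=1):
    """True when the sorted difference positions form one unbroken run."""
    if len(points) < 2:
        return True
    ordered = sorted(points)
    return all(b - a <= max_gap for a, b in zip(ordered, ordered[1:]))

def classify_difference(points):
    """Continuous run -> attributed to decompression; scattered -> rendering error."""
    return "decompression" if is_continuous(points) else "rendering error"
```

In two dimensions the same idea would check pixel adjacency, e.g. via a connected-component pass over the difference mask.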
Step 3.6: compute the analysis result of each per-frame comparison from the output results obtained above to yield the frame output result, whose value is the set maximum value M minus the sum of all output results, i.e. M - E1 - E2 - E3 - E4 - E5 - E6 - E7 - E8 - E9 - E10 - E11; then reset each output result E1 to E11 to 0, and set the frame output result to 0 when the difference is negative. The set maximum value may be set to 100 or chosen empirically.
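The step 3.6 arithmetic, taken literally (M and the E values share whatever common scale the implementer chooses; the patent leaves the scales implicit):

```python
def frame_output(e_values, max_value=100):
    """Frame output = max_value minus the sum of E1..E11, floored at 0."""
    return max(0, max_value - sum(e_values))
```

A frame with no recorded differences thus scores the full maximum, while a frame whose E11 was set to the maximum value scores 0 regardless of the other outputs.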
Step 4: repeat step 3 to complete the frame-by-frame comparative analysis of the tested video file and the reference video file, obtaining a frame output result for each frame; take the average of all frame output results as the file output result of the comparison between the tested video file and the reference video file. If the file output result is larger than the threshold, the graphics system rendering is considered incorrect; otherwise it is considered correct.
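Step 4 then reduces the per-frame scores to a single verdict; a direct sketch of the rule as stated (threshold value is the implementer's choice):

```python
def verify_rendering(frame_scores, threshold):
    """Average the frame outputs; a file output above the threshold means the
    graphics system rendering is judged incorrect, as stated in step 4."""
    file_output = sum(frame_scores) / len(frame_scores)
    verdict = "incorrect" if file_output > threshold else "correct"
    return verdict, file_output
```
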
In summary, the above embodiments are only preferred embodiments of the present invention, and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. The graphic system rendering correctness verification method based on the image texture difference is characterized by comprising the following steps of:
step 1, writing a rendering test program according to a determined tested display card, a reference display card and a test scene, and setting a rendering mode of the rendering test program to be fixed frame rate rendering;
step 2, respectively using the tested display card and the reference display card to execute a rendering test program under a set test scene to obtain a rendering result, and respectively storing the rendering result with a fixed frame number as a tested video file and a reference video file;
step 3, comparing and analyzing the tested video file and the reference video file frame by frame to obtain calculation result values under different differences as frame output results;
step 4, repeatedly executing the step 3 to complete the frame-by-frame comparison analysis of the tested video file and the reference video file to obtain frame output results of all frames, and taking the average value of the frame output results as the file output result of the comparison analysis of the tested video file and the reference video file; if the file output result is greater than the threshold value, the graphic system rendering is considered to be wrong, otherwise, the graphic system rendering is considered to be correct;
in the step 3, the frame-by-frame comparison analysis is performed on the measured video file and the reference video file, and the implementation manner of obtaining the calculated result values under different differences as the frame output results is that the initial values of the output results E1 to E11 are all 0:
step 3.1, if neither the tested frame nor the reference frame contains textures, setting the value of the output result E1 to a set value and then executing step 3.2; otherwise executing step 3.3;
step 3.2, if the tested frame and the reference frame differ and the difference is greater than a threshold, taking the ratio of differing bytes to total bytes in the frame as the value of the output result E2 and executing step 3.6; if no difference exists, or the difference is not greater than the threshold, leaving the value of the output result E3 unchanged and executing step 3.6;
step 3.3, if no difference exists between the tested frame and the reference frame, leaving the value of the output result E4 unchanged and executing step 3.6; otherwise, scaling the tested frame and the reference frame several times according to a set scaling ratio to obtain multiple scaled tested frames and scaled reference frames, and comparing the difference points between the scaled tested frame and the scaled reference frame at the same scaling ratio: when a difference point lies within the region containing the texture, executing step 3.4; when a difference point lies outside the texture region and the difference is greater than a threshold, taking the difference ratio as the value of the output result E5 and executing step 3.4; when a difference point lies outside the texture region and the difference is not greater than the threshold, leaving the value of the output result E6 unchanged and executing step 3.4;
step 3.4, if the texture is a compressed texture, executing step 3.5; otherwise, when the difference is greater than the threshold, taking the difference ratio as the value of the output result E7 and executing step 3.6, and when the difference is not greater than the threshold, leaving the value of the output result E8 unchanged and executing step 3.6;
step 3.5, if the difference points are caused by the decompression operation, taking the difference ratio as the value of the output result E9 when the difference is greater than a threshold, and leaving the value of the output result E10 unchanged when the difference is not greater than the threshold; if the difference points are not caused by the decompression operation, setting the value of the output result E11 to a set maximum value;
and 3.6, calculating a frame output result for each frame from all of the output results, the value of the frame output result being the difference between the set maximum value and the sum of the current values of all output results; all output results are then reset to 0.
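The decision flow of steps 3.1 through 3.6 can be sketched as follows. This is a minimal reading of the claim, not the patented implementation: the threshold value, the way texture regions and decompression artifacts are flagged, and the omission of the multi-scale comparison in step 3.3 are all simplifying assumptions.

```python
import numpy as np

MAX_SCORE = 100.0  # the "set maximum value" (claim 5)

def diff_ratio(a, b):
    """Ratio of differing bytes to total bytes (the claim's difference ratio)."""
    return float(np.count_nonzero(a != b)) / a.size

def frame_score(tested, reference, texture_mask=None, compressed=False,
                decompression_artifact=False, threshold=0.01, set_value=0.0):
    """Per-frame score following steps 3.1-3.6; 100 means nothing was penalized."""
    outputs = []
    if texture_mask is None:                       # step 3.1: no texture in frames
        outputs.append(set_value)                  # E1 <- set value
        r = diff_ratio(tested, reference)          # step 3.2
        if r > threshold:
            outputs.append(r)                      # E2 <- difference ratio
    elif not np.array_equal(tested, reference):    # step 3.3: frames differ
        outside = ~texture_mask
        if outside.any():
            r_out = diff_ratio(tested[outside], reference[outside])
            if r_out > threshold:
                outputs.append(r_out)              # E5 <- ratio outside texture region
        if np.any((tested != reference) & texture_mask):
            if compressed:                         # step 3.4 -> step 3.5
                if decompression_artifact:
                    r_in = diff_ratio(tested[texture_mask], reference[texture_mask])
                    if r_in > threshold:
                        outputs.append(r_in)       # E9 <- difference ratio
                else:
                    outputs.append(MAX_SCORE)      # E11 <- set maximum value
            else:                                  # non-compressed texture
                r_in = diff_ratio(tested[texture_mask], reference[texture_mask])
                if r_in > threshold:
                    outputs.append(r_in)           # E7 <- difference ratio
    return MAX_SCORE - sum(outputs)                # step 3.6
```

An identical frame pair thus scores 100, a rendering error inside a compressed texture scores 0, and small byte-level differences subtract their difference ratio from the maximum.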
2. The method for verifying rendering correctness of a graphics system according to claim 1, wherein the reference graphics card is an NVIDIA graphics card.
3. The method for verifying rendering correctness of a graphics system according to claim 1, wherein the rendering mode of the rendering test program in step 1 is set to a fixed-frame-rate rendering mode, implemented using a graphics API such as OpenGL, DirectX, or Metal.
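Fixing the frame rate keeps the tested run and the reference run frame-aligned so frame N can be compared against frame N. A minimal, API-agnostic pacing loop could look like this; in a real harness `render_frame` would be replaced by the OpenGL/DirectX/Metal draw-and-readback call, which this sketch only stands in for:

```python
import time

def run_fixed_frame_rate(frame_count, fps, render_frame):
    """Drive render_frame at a fixed frame rate so both runs stay frame-aligned."""
    interval = 1.0 / fps
    start = time.perf_counter()
    frames = []
    for i in range(frame_count):
        frames.append(render_frame(i))         # stand-in for the actual draw call
        deadline = start + (i + 1) * interval  # absolute deadline avoids drift
        delay = deadline - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
    return frames
```

Scheduling against absolute deadlines (rather than sleeping a fixed interval after each frame) prevents per-frame timing error from accumulating over a long capture.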
4. The method for verifying rendering correctness of a graphics system according to claim 1, wherein in step 2 the rendering results of the fixed number of frames are stored as the tested video file and the reference video file as follows: the image sequence generated during rendering is converted into a video file using a development library provided by the graphics card or the operating system.
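The claim leaves the container and encoder to a vendor library. As a self-contained stand-in, the sketch below serializes a grayscale frame sequence into an uncompressed YUV4MPEG2 (`.y4m`) file, a trivially simple video container; an actual implementation would instead call the GPU- or OS-provided encoding API, and lossless storage matters here because the frames are later compared byte-for-byte:

```python
import numpy as np

def write_y4m(path, frames, fps=30):
    """Serialize a grayscale frame sequence into an uncompressed YUV4MPEG2 file."""
    height, width = frames[0].shape
    with open(path, "wb") as f:
        # Stream header: geometry, frame rate, progressive scan, mono colorspace
        f.write(f"YUV4MPEG2 W{width} H{height} F{fps}:1 Ip A1:1 Cmono\n".encode())
        for frame in frames:
            f.write(b"FRAME\n")                        # per-frame marker
            f.write(frame.astype(np.uint8).tobytes())  # raw luma plane
```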
5. The method for verifying rendering correctness of a graphics system according to claim 1, wherein the set maximum value is 100.
6. The method for verifying rendering correctness of a graphics system according to claim 1, wherein whether a difference exists between the tested frame and the reference frame is judged based on structural similarity.
7. The method for verifying rendering correctness of a graphics system according to claim 1, wherein the texture type in step 3.4 is determined as follows: performing a Fourier transform on the image containing the texture to obtain a frequency-domain representation of the image, the frequency-domain representation being a set of complex numbers, each complex number representing one frequency component; computing the magnitude spectrum of the frequency-domain coefficients and squaring each magnitude to obtain the energy of each frequency-domain coefficient; if the energy is uniformly distributed, the texture is a non-compressed texture, otherwise it is a compressed texture.
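Claim 7's frequency-domain test can be sketched with NumPy. Two choices here are assumptions the claim does not pin down: the image mean is removed first so the DC coefficient cannot dominate the energy, and normalized spectral entropy (with a 0.5 cutoff) serves as the "uniformly distributed" criterion:

```python
import numpy as np

def is_compressed_texture(image, uniformity_threshold=0.5):
    """Classify a texture by how evenly its Fourier energy is distributed."""
    img = image.astype(np.float64)
    spectrum = np.fft.fft2(img - img.mean())    # remove DC so it cannot dominate
    energy = np.abs(spectrum) ** 2              # squared magnitude = coefficient energy
    total = energy.sum()
    if total == 0.0:                            # flat image: no frequency content
        return True
    p = energy / total
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p))            # spectral entropy of the energy
    uniformity = entropy / np.log(energy.size)  # 1.0 = perfectly uniform energy
    return uniformity < uniformity_threshold    # concentrated energy -> compressed
```

White noise spreads its energy almost evenly across all coefficients (high entropy, classified non-compressed), while a smooth gradient concentrates energy in a handful of low frequencies (low entropy, classified compressed).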
8. The method for verifying rendering correctness of a graphics system according to claim 1, wherein whether the difference points in step 3.5 are caused by the decompression operation is determined as follows: if the difference points are contiguous, they are caused by the decompression operation; otherwise they are caused by a rendering error.
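One way to read claim 8's continuity test is as a single-connected-region check over the difference mask. Both the 4-connectivity and the "one connected component = contiguous" interpretation are assumptions; the claim only says "continuous":

```python
import numpy as np

def caused_by_decompression(diff_mask):
    """True if all difference points form one 4-connected region."""
    points = set(zip(*np.nonzero(diff_mask)))
    if not points:
        return False                      # no difference points at all
    # Flood fill from an arbitrary difference point
    stack = [next(iter(points))]
    seen = set()
    while stack:
        y, x = stack.pop()
        if (y, x) in seen:
            continue
        seen.add((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbor = (y + dy, x + dx)
            if neighbor in points and neighbor not in seen:
                stack.append(neighbor)
    return seen == points                 # one component -> contiguous
```

Decompression artifacts tend to corrupt whole blocks of texels, hence one contiguous region, whereas rendering faults typically scatter isolated wrong pixels across the frame.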
CN202311335208.4A 2023-10-16 2023-10-16 Graphics system rendering correctness verification method based on image texture difference Active CN117076226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311335208.4A CN117076226B (en) 2023-10-16 2023-10-16 Graphics system rendering correctness verification method based on image texture difference

Publications (2)

Publication Number Publication Date
CN117076226A CN117076226A (en) 2023-11-17
CN117076226B true CN117076226B (en) 2023-12-29

Family

ID=88713752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311335208.4A Active CN117076226B (en) 2023-10-16 2023-10-16 Graphics system rendering correctness verification method based on image texture difference

Country Status (1)

Country Link
CN (1) CN117076226B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012140360A1 (en) * 2011-04-12 2012-10-18 Real Fusio France Method and system for rendering a virtual scene in three dimensions
CN114708370A (en) * 2022-03-29 2022-07-05 北京麟卓信息科技有限公司 Method for detecting graphics rendering mode of Linux platform
CN115955590A (en) * 2022-12-30 2023-04-11 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and medium
CN116185743A (en) * 2023-04-24 2023-05-30 芯瞳半导体技术(山东)有限公司 Dual graphics card contrast debugging method, device and medium of OpenGL interface
CN116302764A (en) * 2023-05-22 2023-06-23 北京麟卓信息科技有限公司 Texture filling rate testing method based on minimum data filling

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Quantitative Verification of Cloud Rendering Task Scheduling Based on Probabilistic Model Checking; Gao Honghao, Miao Huaikou, Liu Haoyu, Xu Huahu, Yu Zhiruo; Journal of Software (06); full text *
Rendering of Video Images in 3D Models; Zhao Kai, Quan Chunlai, Ai Fei, Zhou Xiang, Wang Ge; Computer Engineering and Design (22); full text *

Also Published As

Publication number Publication date
CN117076226A (en) 2023-11-17

Similar Documents

Publication Publication Date Title
US5940124A (en) Attentional maps in objective measurement of video quality degradation
CN109447154B (en) Picture similarity detection method, device, medium and electronic equipment
US6888564B2 (en) Method and system for estimating sharpness metrics based on local edge kurtosis
JP6961139B2 (en) An image processing system for reducing an image using a perceptual reduction method
Herzog et al. NoRM: No‐reference image quality metric for realistic image synthesis
Chen et al. Effects of compression on remote sensing image classification based on fractal analysis
US20210073675A1 (en) System and method to improve accuracy of regression models trained with imbalanced data
CN112037223B (en) Image defect detection method and device and electronic equipment
CN114374760A (en) Image testing method and device, computer equipment and computer readable storage medium
CN114968743A (en) Abnormal event monitoring method, device, equipment and medium
CN117076226B (en) Graphics system rendering correctness verification method based on image texture difference
JP3520087B2 (en) Spatial filtering method and means
CN108764112A (en) A kind of Remote Sensing Target object detecting method and equipment
CN115205163B (en) Method, device and equipment for processing identification image and storage medium
CN116484184A (en) Method and device for enhancing partial discharge defect sample of power equipment
CN108416770B (en) Image quality evaluation method based on visual saliency
CN116309364A (en) Transformer substation abnormal inspection method and device, storage medium and computer equipment
CN111597957B (en) Transformer winding fault diagnosis method based on morphological image processing
Feng et al. A new mesh visual quality metric using saliency weighting-based pooling strategy
CN114764949A (en) Living body detection method and device
CN110473183B (en) Evaluation method, device, electronic equipment and medium for visible light full-link simulation image
US9953393B2 (en) Analyzing method and analyzing system for graphics process of graphic application program
Noonan et al. Temporal Coherence Predictor for Time Varying Volume Data Based on Perceptual Functions.
Dong et al. Objective visual quality assessment for 3D meshes
CN117437178A (en) Image definition measuring method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant