CN114418972A - Picture quality detection method, device, equipment and storage medium - Google Patents

Picture quality detection method, device, equipment and storage medium

Info

Publication number
CN114418972A
Authority
CN
China
Prior art keywords
image
target
application
picture quality
target image
Prior art date
Legal status
Pending
Application number
CN202210010451.8A
Other languages
Chinese (zh)
Inventor
黄超
梅维一
周洪斌
严明
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210010451.8A
Publication of CN114418972A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30168: Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiment of the application discloses a picture quality detection method, device, equipment and storage medium, belonging to the technical field of computers. The method comprises the following steps: acquiring running indication information; running a reference application and a target application respectively according to the running indication information to obtain a reference image and a target image, wherein the reference image is generated by the reference application during running, the target image is generated by the target application during running, and the reference application and the target application are different versions of the same application; and comparing the target image with the reference image to obtain a picture quality detection result of the target image, wherein the picture quality detection result represents the picture quality of the target application. By running the reference application and the target application according to the same running indication information, a reference image and a target image are generated automatically, and the reference image serves as the comparison standard from which the picture quality detection result of the target image is obtained.

Description

Picture quality detection method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a picture quality detection method, a picture quality detection device, picture quality detection equipment and a storage medium.
Background
With the development of computer technology, a variety of applications, such as game applications, social applications, video applications, and the like, have gradually emerged. In order to ensure that the application has good display performance, the picture quality of the application needs to be checked.
In the related art, a tester evaluates the picture quality of an application by visually checking the pictures it displays, completing picture quality detection manually. However, this method depends heavily on subjective human perception, resulting in low accuracy of picture quality detection.
Disclosure of Invention
The embodiment of the application provides a picture quality detection method, a picture quality detection device, picture quality detection equipment and a storage medium, and can improve the accuracy of picture quality detection. The technical scheme is as follows:
in one aspect, a method for detecting picture quality is provided, the method including:
acquiring operation indication information;
respectively operating a reference application and a target application according to the operation indication information to obtain a reference image and a target image, wherein the reference image is generated in the operation process of the reference application, the target image is generated in the operation process of the target application, and the reference application and the target application are different versions of the same application;
and comparing the target image with the reference image to obtain a picture quality detection result of the target image, wherein the picture quality detection result is used for representing the picture quality of the target application.
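The three steps above form a simple pipeline: acquire the indication information, run both versions, compare the resulting images. The Python sketch below is purely illustrative (the patent specifies no implementation language); `run_app` and `compare` are hypothetical stand-ins for the application-running and image-comparison steps described in the optional refinements.

```python
def detect_picture_quality(run_indication, run_app, compare):
    """Run both application versions with the same indication
    information and compare the resulting screenshots.

    run_app(version, run_indication) -> screenshot (any comparable object)
    compare(target_img, reference_img) -> picture quality detection result
    """
    reference_image = run_app("reference", run_indication)
    target_image = run_app("target", run_indication)
    # The reference image is the comparison standard; the result
    # characterizes the picture quality of the target application.
    return compare(target_image, reference_image)

# Toy usage: "screenshots" are pixel lists, comparison is mean absolute error.
run_app = lambda version, info: [10, 20, 30] if version == "reference" else [12, 20, 27]
compare = lambda t, r: sum(abs(a - b) for a, b in zip(t, r)) / len(r)
assert abs(detect_picture_quality(None, run_app, compare) - 5 / 3) < 1e-9
```

In practice the comparison step is the trained picture quality detection model described later in the disclosure; the mean-absolute-error stand-in here only illustrates the data flow.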
Optionally, the picture quality detection result includes a plurality of quality levels and a score corresponding to each quality level, where the score corresponding to a quality level indicates the possibility that the picture quality of the target image belongs to that quality level; after the target image is compared with the reference image to obtain the picture quality detection result of the target image, the method further includes:
and determining the quality grade corresponding to the highest score in the image quality detection result as the quality grade to which the image quality of the target image belongs.
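Selecting the quality grade with the highest score is a simple argmax over the detection result. A minimal illustration (the grade names are hypothetical; the patent does not enumerate them):

```python
def quality_grade(detection_result):
    """Pick the quality grade with the highest score.

    detection_result maps each quality grade to a score expressing the
    possibility that the target image's picture quality belongs to it.
    """
    return max(detection_result, key=detection_result.get)

scores = {"excellent": 0.1, "good": 0.7, "poor": 0.2}  # hypothetical grades
assert quality_grade(scores) == "good"
```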
In another aspect, there is provided a picture quality detection apparatus, the apparatus comprising:
the information acquisition module is used for acquiring operation indication information;
the image generation module is used for respectively operating a reference application and a target application according to the operation indication information to obtain a reference image and a target image, wherein the reference image is generated in the operation process of the reference application, the target image is generated in the operation process of the target application, and the reference application and the target application are different versions of the same application;
and the picture quality detection module is used for comparing the target image with the reference image to obtain a picture quality detection result of the target image, and the picture quality detection result is used for representing the picture quality of the target application.
Optionally, the operation instruction information includes an operation identifier, and the image generation module includes:
a first generating unit, configured to execute the operation indicated by the operation identifier in the reference application, and perform screenshot on the obtained picture to obtain the reference image;
and the second generating unit is used for executing the operation indicated by the operation identifier in the target application and carrying out screenshot on the obtained picture to obtain the target image.
Optionally, the reference application and the target application include a virtual scene, the operation identifier is used to indicate an operation performed by a virtual object in the virtual scene, and the first generating unit is configured to, in the reference application, control the virtual object in the virtual scene to perform the operation indicated by the operation identifier, and capture an obtained screen to obtain the reference image;
and the second generating unit is configured to, in the target application, control the virtual object in the virtual scene to execute the operation indicated by the operation identifier, and capture an obtained screen to obtain the target image.
Optionally, the operation indication information includes a plurality of operation time points and an operation identifier corresponding to at least one operation time point;
the first generating unit is configured to execute, in the reference application, an operation indicated by the operation identifier corresponding to the operation time point each time the operation time point is reached, and capture an obtained picture to obtain a reference image corresponding to the operation time point;
and the second generating unit is used for executing the operation indicated by the operation identifier corresponding to the operation time point every time one operation time point is reached in the target application, and capturing the obtained picture to obtain the target image corresponding to the operation time point.
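The time-point-driven replay performed by the two generating units can be sketched as follows. `FakeApp`, `perform`, and `screenshot` are hypothetical stand-ins for the real application hooks; the point is that replaying the same (time point, operation identifier) list in both versions yields a reference image and a target image per time point.

```python
def replay_and_capture(app, run_indication):
    """Replay the recorded operations in one application version and
    capture a screenshot at every operation time point.

    run_indication: list of (time_point, operation_id) pairs.
    """
    captures = {}
    for time_point, op_id in sorted(run_indication):
        app.perform(op_id)                       # execute the indicated operation
        captures[time_point] = app.screenshot()  # image for this time point
    return captures

class FakeApp:
    """Stand-in application: each 'screenshot' records the operations so far."""
    def __init__(self):
        self.ops = []
    def perform(self, op_id):
        self.ops.append(op_id)
    def screenshot(self):
        return tuple(self.ops)

script = [(1, "move"), (2, "jump")]
ref = replay_and_capture(FakeApp(), script)  # reference application
tgt = replay_and_capture(FakeApp(), script)  # target application
# Identical replays yield matching images at every operation time point.
assert ref == tgt
```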
Optionally, the apparatus further comprises:
and the information generating module is used for responding to an operation instruction in the reference application, executing an operation corresponding to the operation instruction in the reference application and generating the running indication information, wherein the running indication information comprises an operation identifier corresponding to the operation.
Optionally, the reference application and the target application include a virtual scene, the running indication information includes a position of a virtual object in the virtual scene, and the image generation module includes:
the first generation unit is used for controlling the virtual object in the virtual scene to be displayed at the position in the reference application, and capturing the obtained picture to obtain the reference image;
and the second generation unit is used for controlling the virtual object in the virtual scene to be displayed at the position in the target application, and capturing the obtained picture to obtain the target image.
Optionally, the operation instruction information includes a plurality of display time points and a position corresponding to each display time point;
the first generating unit is configured to control the virtual object to be displayed at a position corresponding to the display time point each time when the display time point is reached in the reference application, and capture an obtained picture to obtain a reference image corresponding to the display time point;
and the second generating unit is used for controlling the virtual object to be displayed at a position corresponding to the display time point each time one display time point is reached in the target application, and capturing the obtained picture to obtain a target image corresponding to the display time point.
Optionally, the running indication information further includes a target operation identifier, where the target operation identifier is used to indicate an operation performed by the virtual object at the location;
the first generating unit is configured to control the virtual object to be displayed at the position in the reference application, control the virtual object to execute the operation indicated by the target operation identifier at the position, and capture an obtained picture to obtain the reference image;
and the second generating unit is used for controlling the virtual object to be displayed at the position in the target application, controlling the virtual object to execute the operation indicated by the target operation identifier at the position, and capturing the obtained picture to obtain the target image.
Optionally, the apparatus further comprises:
the information generation module is used for responding to an operation instruction in the reference application, and controlling the virtual object in the virtual scene to execute an operation corresponding to the operation instruction in the reference application;
the information generating module is further configured to acquire a position of the virtual object in the virtual scene, and generate the operation instruction information, where the operation instruction information includes the position.
Optionally, the image generation module includes:
the image pair generating unit is used for respectively operating the reference application and the target application according to the operation indication information to obtain image pairs respectively corresponding to a plurality of time points, wherein the image pairs comprise candidate reference images and candidate target images, the candidate reference images are generated by the reference application at the time points, and the candidate target images are generated by the target application at the time points;
a similarity determination unit for determining a similarity between the candidate reference image and the candidate target image in each of the image pairs;
the image pair screening unit is used for screening out image pairs with corresponding similarity meeting target conditions in a plurality of image pairs;
and the image acquisition unit is used for acquiring the reference image and the target image from the screened image pair.
Optionally, the similarity determining unit is configured to:
dividing the candidate reference image into a plurality of reference image blocks, dividing the candidate target image into a plurality of target image blocks, wherein the reference image blocks correspond to the target image blocks one to one;
determining the similarity between each reference image block and the corresponding target image block;
determining a minimum similarity among the plurality of similarities as a similarity between the candidate reference image and the candidate target image.
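The block-wise minimum described by the similarity determining unit can be sketched directly: split both candidate images into aligned blocks, score each pair of blocks, and take the minimum, so a single badly-matching region drags the whole similarity down. The per-block score here (1 / (1 + mean absolute difference)) is an illustrative choice, not one specified by the disclosure.

```python
def block_similarity(ref_img, tgt_img, block_size):
    """Similarity between a candidate reference image and a candidate
    target image, via the minimum over one-to-one block similarities.

    Images are flat lists of gray values of equal length.
    """
    def score(a, b):
        # Illustrative per-block similarity in (0, 1].
        return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)) / len(a))
    sims = []
    for i in range(0, len(ref_img), block_size):
        sims.append(score(ref_img[i:i + block_size], tgt_img[i:i + block_size]))
    return min(sims)

ref = [10, 10, 200, 200]
tgt = [10, 10, 100, 100]       # second block differs sharply
assert block_similarity(ref, tgt, 2) == 1.0 / 101  # minimum from the bad block
```

The image pair screening unit would then keep only pairs whose similarity meets the target condition (for example, exceeds a threshold).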
Optionally, the picture quality detection module includes:
the image dividing unit is used for dividing the reference image into a plurality of reference image blocks and dividing the target image into a plurality of target image blocks, wherein the target image blocks correspond to the reference image blocks one by one;
the image block comparison unit is used for comparing the plurality of target image blocks with the corresponding reference image blocks to obtain the image quality detection results of the plurality of target image blocks;
and the fusion unit is used for fusing the image quality detection results of the target image blocks to obtain the image quality detection result of the target image.
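The divide-detect-fuse flow of the picture quality detection module can be sketched as below. The `detect_block` and `fuse` hooks are hypothetical; the disclosure does not fix a particular per-block detector or fusion rule, so an exact-match verdict and an average are used purely for illustration.

```python
def blockwise_quality(ref_img, tgt_img, block_size, detect_block, fuse):
    """Divide both images into one-to-one corresponding blocks, detect
    the picture quality of each target block against its reference
    block, then fuse the per-block results into one image-level result.
    """
    results = []
    for i in range(0, len(ref_img), block_size):
        results.append(detect_block(tgt_img[i:i + block_size],
                                    ref_img[i:i + block_size]))
    return fuse(results)

detect = lambda t, r: 1.0 if t == r else 0.0   # toy per-block verdict
fuse = lambda rs: sum(rs) / len(rs)            # fusion by averaging
# First block matches, second differs: half the blocks pass.
assert blockwise_quality([1, 2, 3, 4], [1, 2, 9, 9], 2, detect, fuse) == 0.5
```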
Optionally, the picture quality detection module includes:
and the model calling unit is used for calling a picture quality detection model, comparing the target image with the reference image and obtaining the picture quality detection result.
Optionally, the picture quality detection model includes a feature extraction network, a feature fusion network, and a feature detection network, and the model invoking unit is configured to:
calling the feature extraction network, and performing feature extraction on the reference image and the target image to obtain a first picture quality feature of the reference image and a second picture quality feature of the target image;
calling the feature fusion network to fuse the first picture quality feature, the second picture quality feature and the difference feature between the first picture quality feature and the second picture quality feature to obtain a fusion feature;
and calling the feature detection network to detect the fusion feature to obtain the picture quality detection result.
Optionally, the feature extraction network includes a first extraction layer and a second extraction layer, and the model invoking unit is configured to:
calling the first extraction layer to extract the features of the reference image to obtain the first picture quality features;
and calling the second extraction layer to extract the features of the target image to obtain the second picture quality features.
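The extract-fuse-detect structure of the picture quality detection model can be sketched with plain Python lists standing in for feature maps. A real implementation would use trained neural network layers; here the extraction layers, the choice of concatenation as the fusion, and the thresholding detection head are all illustrative assumptions.

```python
def detect_with_model(ref_img, tgt_img, extract1, extract2, detect):
    """Two-branch model sketch: separate extraction layers produce one
    picture quality feature per image, the fusion step combines both
    features with their element-wise difference feature, and a
    detection head maps the fused feature to a detection result.
    """
    f1 = extract1(ref_img)            # first picture quality feature
    f2 = extract2(tgt_img)            # second picture quality feature
    diff = [a - b for a, b in zip(f1, f2)]
    fused = f1 + f2 + diff            # fusion by concatenation (illustrative)
    return detect(fused)

identity = lambda img: list(img)      # stand-in extraction layer
# Toy detection head: a small difference feature means "good" quality.
head = lambda fused: "good" if sum(abs(v) for v in fused[-2:]) < 5 else "poor"
assert detect_with_model([10, 20], [11, 19], identity, identity, head) == "good"
```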
Optionally, the apparatus further comprises a model training module configured to:
acquiring a sample reference image, a sample target image and a sample picture quality detection result of the sample target image;
calling the picture quality detection model, and comparing the sample target image with the sample reference image to obtain a picture quality detection result of the sample target image;
and training the picture quality detection model based on the picture quality detection result of the sample target image and the sample picture quality detection result.
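Training compares the model's detection result for the sample target image against the annotated sample picture quality detection result. The disclosure does not name a loss function; cross-entropy over the per-grade scores is a common choice and is sketched here as an assumption, with hypothetical grade names.

```python
import math

def cross_entropy(predicted_scores, sample_label, labels):
    """Training-signal sketch: softmax the model's per-grade scores and
    take the negative log-probability of the annotated sample grade.
    Minimizing this loss pushes the model's detection result toward
    the sample picture quality detection result.
    """
    exps = [math.exp(s) for s in predicted_scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -math.log(probs[labels.index(sample_label)])

labels = ["excellent", "good", "poor"]   # hypothetical quality grades
loss = cross_entropy([2.0, 0.1, 0.1], "excellent", labels)
better = cross_entropy([4.0, 0.1, 0.1], "excellent", labels)
assert better < loss  # a more confident correct prediction gives a lower loss
```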
Optionally, the picture quality detection result includes a plurality of quality levels and a score corresponding to each quality level, where the score corresponding to a quality level indicates the possibility that the picture quality of the target image belongs to that quality level, and the apparatus further includes:
and the quality grade determining module is used for determining the quality grade corresponding to the highest score in the image quality detection result as the quality grade to which the image quality of the target image belongs.
In another aspect, a computer device is provided, which includes a processor and a memory, wherein at least one computer program is stored in the memory, and the at least one computer program is loaded by the processor and executed to implement the operations performed by the picture quality detection method according to the above aspect.
In another aspect, a computer-readable storage medium is provided, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to implement the operations performed by the picture quality detection method according to the above aspect.
In another aspect, a computer program product is provided, which comprises a computer program that is loaded and executed by a processor to perform the operations performed by the picture quality detection method according to the above aspect.
According to the method, device, equipment and storage medium provided by the embodiment of the application, in order to detect the picture quality of a target application, a reference application is introduced that is a different version of the same application as the target application. The reference application and the target application are run separately according to the same running indication information, so that a reference image and a target image are generated automatically; the reference image is then used as the comparison standard against which the picture quality of the target image is detected, yielding the picture quality detection result of the target image. Because the detection no longer depends on subjective human perception, the accuracy of picture quality detection is improved.
Drawings
For a clearer description of the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of a picture quality detection method provided in an embodiment of the present application;
fig. 3 is a flowchart of another picture quality detection method provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a reference image provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of another reference image provided by an embodiment of the present application;
fig. 6 is a flowchart of another picture quality detection method provided in the embodiment of the present application;
fig. 7 is a flowchart of another picture quality detection method provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a reference image and a target image provided by an embodiment of the present application;
fig. 9 is a flowchart of a further picture quality detection method provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of a picture quality detection model provided in an embodiment of the present application;
fig. 11 is a flowchart of a further picture quality detection method provided in an embodiment of the present application;
fig. 12 is a flowchart of a method for training a picture quality detection model according to an embodiment of the present application;
fig. 13 is a flowchart of a further picture quality detection method provided in an embodiment of the present application;
fig. 14 is a schematic structural diagram of a picture quality detection apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of another picture quality detection apparatus according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
It will be understood that the terms "first," "second," and the like as used herein may be used herein to describe various concepts, which are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, a first picture quality feature may be referred to as a second picture quality feature, and similarly, a second picture quality feature may be referred to as a first picture quality feature, without departing from the scope of the present application.
In this application, "at least one" means one or more; for example, at least one time point may be any integer number of time points greater than or equal to one, such as one, two, or three time points. "A plurality of" means two or more; for example, a plurality of time points may be any integer number of time points greater than or equal to two, such as two or three time points. "Each" refers to every one of at least one; for example, each time point refers to every one of a plurality of time points, and if the plurality of time points is three time points, each time point refers to each of those three time points.
It is understood that the embodiments of the present application involve related data such as user information and operation instruction information. When the above embodiments are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use and processing of related data need to comply with the relevant laws, regulations and standards of the relevant countries and regions.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, machine learning/deep learning, automatic driving, and intelligent transportation.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, and algorithmic complexity theory. It studies how computers simulate or implement human learning behaviors to acquire new knowledge or skills, and how they reorganize existing knowledge structures to continuously improve their own performance.
Computer Vision technology (CV) is a science that studies how to make machines "see": using cameras and computers in place of human eyes to recognize and measure targets, and performing further image processing so that the computer produces images better suited to human observation or to transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can obtain information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR (Optical Character Recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D (3-Dimension) technology, virtual reality, augmented reality, simultaneous localization and mapping, automatic driving and intelligent transportation, as well as common biometric technologies such as face recognition and fingerprint recognition.
Based on the artificial intelligence and computer vision techniques above, the picture quality detection method provided by the embodiment of the present application is described below.
The picture quality detection method provided by the embodiment of the application is executed by computer equipment. The computer device can have installed thereon a reference application or a target application, the reference application and the target application being different versions of the same application. The computer equipment is used for respectively operating the reference application and the target application according to the operation indication information to obtain a reference image and a target image, and comparing the target image with the reference image to obtain a picture quality detection result of the target image. Since the target image is an image generated by a target application, the picture quality detection result can represent the picture quality of the target application.
In one possible implementation, the computer device is a terminal, for example, the terminal is a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, or a vehicle-mounted terminal. Or, the computer device is a server, which may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, Network service, cloud communication, middleware service, domain name service, security service, CDN (Content Delivery Network), big data, and an artificial intelligence platform.
In one possible implementation, the reference application and the target application are game applications, video applications, content sharing applications or other types of applications, and the like, and the reference application and the target application can provide game functions, video playing functions, comment functions, shopping functions or navigation functions, and the like. Optionally, the reference application and the target application are applications in an operating system of the computer device, or applications provided by a third party.
In one possible implementation, as shown in fig. 1, the computer device includes a first device 101 and a second device 102. The first device 101 and the second device 102 may be directly or indirectly connected through a wired or wireless communication manner; for example, the first device 101 is a terminal and the second device 102 is a server, or both the first device 101 and the second device 102 are terminals, which is not limited in this embodiment of the application.
The first device 101 can install a reference application and a target application, and the first device 101 is configured to run the reference application and the target application according to the running instruction information, obtain a reference image and a target image, and send the reference image and the target image to the second device 102. The second device 102 is configured to compare the received target image with the reference image to obtain a picture quality detection result of the target image.
The picture quality detection method provided by the embodiment of the application can be applied to scenes for detecting the picture quality of any application.
For example, the method is applied in a scenario where the picture quality of a game application is detected. The same game application has a plurality of different versions: the reference game application is a qualified version that has passed picture quality detection, and the target game application is a newly developed version to be tested that has not yet undergone picture quality detection. To detect the picture quality of the target game application, game instruction information is acquired, and a game match is played in the reference game application and in the target game application respectively according to the game instruction information, obtaining a reference game image and a target game image that include game pictures. The target game image is then compared with the reference game image to obtain the picture quality detection result of the target game image. Since the target game image is generated by the target game application, this result represents the picture quality of the target game application, thereby realizing picture quality detection of the target game application.
As another example, the method is applied in a scenario where the picture quality of a video application is detected. The same video application has a plurality of different versions: the reference video application is a qualified version that has passed picture quality detection, and the target video application is a newly developed version to be tested that has not yet undergone picture quality detection. To detect the picture quality of the target video application, playing indication information is acquired, and the same video is played in the reference video application and in the target video application respectively according to the playing indication information, obtaining a reference video image and a target video image that include video pictures. The target video image is then compared with the reference video image to obtain the picture quality detection result of the target video image.
In addition, the picture quality detection method provided in the embodiment of the present application may also be applied to any other scene in which the picture quality of an application needs to be detected, for example, to a scene in which picture quality detection is performed on other types of applications such as a content sharing application or a navigation application, which is not limited in the embodiment of the present application.
Fig. 2 is a flowchart of a picture quality detection method according to an embodiment of the present application. The embodiment of the application is executed by a computer device, and referring to fig. 2, the method comprises the following steps:
201. The computer device obtains running indication information.
In the embodiment of the application, a reference application or a target application can be installed in the computer equipment, and the reference application and the target application are different versions of the same application. For example, the reference application is a historical version that has passed picture quality detection, the picture quality of the reference application may be used as a reference standard, and the target application is a newly developed version that has not passed picture quality detection, that is, the target application is a version to be subjected to picture quality detection. The reference application and the target application may be various types of applications, for example, the reference application and the target application are a game application, a video application, or a content sharing application.
The computer device acquires running indication information, which is used for indicating how to run the reference application or the target application. The running indication information may include various types of information. For example, if the reference application and the target application are game applications, the running indication information may include information for indicating how to control a virtual object; if the reference application and the target application are video applications, the running indication information may include information for indicating that a certain video is to be played, and the like. The running indication information may be generated in multiple manners, for example, generated according to a historical operation record, or written by a tester as an operation script, and the like, which is not limited in this embodiment of the application.
202. And the computer equipment respectively runs the reference application and the target application according to the running indication information to obtain a reference image and a target image.
After the computer equipment acquires the operation indication information, the reference application is operated according to the operation indication information to obtain a reference image, and the reference image is generated by the reference application in the operation process. And the computer equipment runs the target application according to the running indication information to obtain a target image, wherein the target image is generated by the target application in the running process.
Wherein, because the reference image is generated by the reference application during the running process, the picture quality of the reference image can reflect the picture quality of the reference application, and the reference image can be used as a reference standard for picture quality detection. Since the target image is generated by the target application during running, the picture quality of the target image can reflect the picture quality of the target application.
In the embodiment of the application, the reference image used as the reference standard can be automatically generated by running the indication information, the target image used for detecting the picture quality can be automatically generated, and an automatic mode for generating the reference image and the target image is provided.
It should be noted that, because the reference application and the target application are different versions of the same application, in a possible implementation manner, the computer device first installs the reference application, runs the reference application according to the running indication information to obtain the reference image, then uninstalls the reference application, then installs the target application, and runs the target application according to the running indication information to obtain the target image. Or, the computer device may instead install the target application first, run the target application according to the running indication information to obtain the target image, then uninstall the target application, then install the reference application, and run the reference application according to the running indication information to obtain the reference image. Or, in another possible implementation manner, in a case that the computer device is capable of installing the reference application and the target application at the same time, both the reference application and the target application may be installed in the computer device, so that the computer device runs the reference application and the target application respectively according to the running indication information, to obtain the reference image and the target image.
203. And the computer equipment compares the target image with the reference image to obtain a picture quality detection result of the target image.
Since the reference image generated by the reference application can be used as a reference standard for picture quality detection, the computer device can obtain a picture quality detection result of the target image by comparing the target image with the reference image. For example, the picture quality of the reference image is taken as a standard picture quality, and the computer device determines the difference between the target image and the reference image by comparing the two. The larger the difference between the target image and the reference image, the larger the difference between the picture quality of the target image and the standard picture quality; the smaller the difference between the target image and the reference image, the smaller the difference between the picture quality of the target image and the standard picture quality.
Wherein, since the target image is generated by the target application during the running process, the picture quality detection result of the target image is also used for representing the picture quality of the target application.
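The comparison in step 203 can be sketched minimally as follows. The embodiment does not fix a particular comparison metric, so the mean absolute pixel difference used here, and the representation of images as 2D lists of grayscale values, are illustrative assumptions only:

```python
def compare_images(target, reference):
    """Compare a target image to a reference image of the same size.

    Images are 2D lists of grayscale pixel values (0-255). Returns the
    mean absolute pixel difference: 0 means identical to the standard,
    and larger values mean the target deviates more from it.
    """
    assert len(target) == len(reference) and len(target[0]) == len(reference[0])
    total = 0
    count = 0
    for t_row, r_row in zip(target, reference):
        for t_px, r_px in zip(t_row, r_row):
            total += abs(t_px - r_px)
            count += 1
    return total / count

# A target identical to the reference shows zero difference;
# a slightly brighter target shows a positive difference.
ref = [[100, 100], [100, 100]]
same = [[100, 100], [100, 100]]
off = [[110, 100], [100, 100]]
print(compare_images(same, ref))  # 0.0
print(compare_images(off, ref))   # 2.5
```

In practice the per-pixel loop would be replaced by a vectorized or learned comparison, but the direction of the result is the same: a larger value represents a larger picture quality gap from the standard.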
In the method provided by the embodiment of the application, in order to detect the picture quality of the target application, reference applications which belong to the same application as the target application but belong to different versions are introduced, the reference applications and the target application are respectively operated according to the same operation indication information, so that a reference image and a target image are automatically generated, the reference image is used as a comparison standard, the picture quality of the target image is detected, and a picture quality detection result of the target image is obtained.
Fig. 3 is a flowchart of another picture quality detection method according to an embodiment of the present application. The embodiment of the application is executed by a computer device, and referring to fig. 3, the method comprises the following steps:
301. The computer device, in response to an operation instruction in the reference application, executes the operation corresponding to the operation instruction in the reference application and generates running indication information, where the running indication information includes an operation identifier corresponding to the operation.
The computer device may be installed with the reference application or the target application, and the running indication information is used for indicating how to run the reference application or the target application. In order to generate the running indication information, the reference application can be run under the operation of a tester, so that the operations executed during the running of the reference application are recorded as the running indication information.
The tester executes an operation in the reference application on the computer device. After detecting the tester's operation, the computer device generates the operation instruction corresponding to the operation, and in response to the operation instruction, executes the corresponding operation, thereby generating running indication information including the operation identifier corresponding to the operation. The reference application or the target application can then be run according to the operations indicated by the operation identifiers in the running indication information. Optionally, the operation identifier may be an identifier of the operation function corresponding to the operation, where the operation function is used to control the operation to be performed in the reference application. For example, the operation identifier is the name of the operation function corresponding to the operation.
For example, the reference application is a social application, the tester executes a click operation in the reference application, and the operation instruction corresponding to the click operation is a message sending instruction. The computer device, in response to the message sending instruction, sends a message in the reference application and generates running indication information including a message sending identifier, where the message sending identifier is used for indicating the operation of sending the message. For another example, the reference application is a game application, the tester performs a sliding operation in the reference application, and the operation instruction corresponding to the sliding operation is a movement instruction. The computer device controls the movement of the virtual object in the reference application in response to the movement instruction, and generates running indication information including a movement identifier, where the movement identifier is used for indicating the operation of controlling the virtual object to move.
In a possible implementation manner, after the computer device generates the running indication information, the running indication information is stored locally, so that it can be directly acquired when it needs to be used later. In another possible implementation manner, after the computer device generates the running indication information, the computer device may further send the running indication information to a server, so that when other devices subsequently need to use the running indication information, they request it from the server.
It should be noted that, in the embodiment of the present application, a process of generating operation indication information including an operation identifier corresponding to an operation is described as an example only with one operation, and in another embodiment, a computer device may sequentially respond to a plurality of operation instructions in a reference application, and each time an operation corresponding to an operation instruction is executed in the reference application, the operation identifier corresponding to the operation is added to the operation indication information, so that the operation indication information includes a plurality of operation identifiers arranged in sequence.
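The recording described above, where each executed operation appends its identifier to the running indication information in sequence, can be sketched as follows. The class and field names are assumptions for illustration; the embodiment only requires that identifiers be recorded in order:

```python
class OperationRecorder:
    """Record operations performed in the reference application as
    running indication information: a sequence of operation identifiers.

    Following the embodiment's suggestion, the identifier used here is
    the name of the operation function; the entry structure itself
    (a dict with "op" and "params") is an illustrative assumption.
    """

    def __init__(self):
        self.indication_info = []

    def on_operation(self, op_name, **params):
        # Each time an operation is executed in the reference application,
        # append its identifier (and any parameters) in order.
        self.indication_info.append({"op": op_name, "params": params})

recorder = OperationRecorder()
recorder.on_operation("send_message", text="hello")
recorder.on_operation("move_virtual_object", dx=5, dy=0)
print([entry["op"] for entry in recorder.indication_info])
# ['send_message', 'move_virtual_object']
```

Replaying the recorded identifiers in the same order against the reference or target application then reproduces the original session.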
It should be noted that, in the embodiment of the present application, only the process of generating the operation indication information is described by taking the reference application as an example. In another embodiment, since it is only necessary to ensure that the generated operation indication information can indicate how the reference application and the target application operate, the computer device may also generate the operation indication information by operating other versions of applications belonging to the same application as the reference application, for example, the computer device generates the operation indication information by operating the target application.
It should be noted that, in the embodiment of the present application, only the case where the computer device automatically generates the running indication information is described as an example. In another embodiment, the running indication information may also be an operation script written by a tester, and the tester uploads the operation script to the computer device, so that the computer device does not need to execute the above step 301.
302. The computer device obtains running indication information.
When the reference application or the target application needs to be run according to the running indication information, the computer device acquires the running indication information. Optionally, the running indication information is stored in the computer device, and the computer device can directly obtain it locally. Optionally, the running indication information is stored in another device, and the computer device requests the running indication information from that device.
In one possible implementation, the computer device is installed with the reference application, and in response to an automatic running instruction in the reference application, the computer device acquires the running indication information so as to automatically run the reference application according to the running indication information. Or, the computer device is installed with the target application, and in response to an automatic running instruction in the target application, the computer device acquires the running indication information so as to automatically run the target application according to the running indication information.
303. And the computer equipment executes the operation indicated by the operation identification in the running indication information in the reference application, and captures the obtained picture to obtain a reference image.
The reference application is installed on the computer device. After acquiring the running indication information, the computer device obtains an operation identifier in the running indication information, determines the operation indicated by the operation identifier, and executes the operation in the reference application, so that the picture displayed when the operation is executed is shown in the reference application. A screenshot of the picture is captured to obtain the reference image.
In the embodiment of the application, the operation indicated by the operation identifier can be automatically executed in the reference application by running the operation identifier in the indication information, so that the reference image used as the reference standard is automatically generated, the reference application does not need to be manually operated by a user, and an automatic mode for generating the reference image is provided.
For example, the reference application is a social application, the operation indicated by the operation identifier is an operation of sending a message, and the computer device sends the message in the reference application, so that a picture when the message is sent is displayed in the reference application, and a screenshot is performed on the picture to obtain the reference image.
In another possible implementation, the reference application includes a virtual scene, and the operation identifier is used for indicating an operation to be performed by a virtual object in the virtual scene. The computer device controls the virtual object in the virtual scene to perform the operation indicated by the operation identifier in the reference application, and captures a screenshot of the obtained picture to obtain the reference image.
For example, the reference application is a game application, the virtual scene in the reference application is a game scene, the virtual object in the virtual scene is a game character, and the operation identifier is used for instructing the game character in the game scene to perform a shooting operation. The computer device controls the game character in the game scene to execute a shooting operation in the game application, so that a game picture of the game character when shooting is displayed in the game application, and the screenshot is performed on the game picture to obtain the reference image.
In another possible implementation manner, the operation indication information includes a plurality of operation time points and an operation identifier corresponding to at least one operation time point. And executing the operation indicated by the operation identifier corresponding to the operation time point each time the computer equipment reaches one operation time point in the reference application, and capturing the obtained picture to obtain the reference image corresponding to the operation time point.
The operation time point is a time point at which an operation is performed in the reference application. For example, the number of operation time points is 10000, namely the 0.1th second, the 0.2th second, the 0.3th second, and so on up to the 1000th second, that is, the total time for running the reference application is 1000 seconds. Among the multiple operation time points, at least one operation time point corresponds to an operation identifier, and the operation identifier corresponding to an operation time point is used for indicating the operation to be executed at that operation time point. Therefore, in the reference application, the computer device executes the operation indicated by the operation identifier when the currently reached operation time point corresponds to an operation identifier, and executes no operation when the operation time point does not correspond to an operation identifier. The picture corresponding to each operation time point is displayed in the reference application and captured as a screenshot, so that a reference image corresponding to each operation time point is obtained, thereby yielding multiple reference images.
For example, when the reference application is a game application, the 3 rd operation time point corresponds to an operation identifier, the operation identifier is used for indicating a virtual object in a virtual scene to perform a jump operation, the 4 th to 9 th operation time points do not correspond to the operation identifier, the computer device controls the virtual object to perform the jump operation when the 3 rd operation time point is reached, the 4 th operation time point does not correspond to the operation identifier when the 4 th operation time point is reached, and the jump operation is not completed yet, the computer device continues to control the virtual object to perform the jump operation, and so on until the virtual object completes the jump operation. For example, when the 9 th operation time point is reached, the 9 th operation time point is not corresponding to the operation identifier, and the jump operation is completed, the computer device may control the virtual object to maintain the current state without changing.
Fig. 4 is a schematic diagram of a reference image provided in an embodiment of the present application, and as shown in fig. 4, the reference image is a reference image corresponding to an nth operation time point, n is a positive integer, the reference image includes a virtual object 401, and the virtual object 401 is located at a first position in a virtual scene. When the operation identifier corresponding to the (n + 1) th operation time point is reached, the operation identifier corresponding to the (n + 1) th operation time point indicates a moving operation, and the computer device controls the virtual object 401 to move, so that the virtual object 401 moves from the first position to the second position, and captures an obtained picture to obtain a reference image corresponding to the (n + 1) th operation time point. The reference image corresponding to the (n + 1) th operation time point is shown in fig. 5, where the virtual object 401 in the reference image is located at the second position in the virtual scene.
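The per-time-point replay described above can be sketched as follows: an operation is executed only at time points that carry an identifier, while a screenshot is captured at every time point. The `execute_op` and `capture_screen` callables stand in for the application's control and screenshot interfaces, which are not specified in the embodiment:

```python
def replay(indication_info, execute_op, capture_screen):
    """Replay running indication information given as a dict mapping each
    operation time point to an operation identifier, or to None when the
    time point has no identifier attached.

    At every time point the operation is executed only if an identifier
    is present, and the current picture is captured regardless, yielding
    one image per operation time point.
    """
    images = {}
    for time_point in sorted(indication_info):
        op_id = indication_info[time_point]
        if op_id is not None:
            execute_op(op_id)                  # time point carries an identifier
        images[time_point] = capture_screen()  # screenshot at every time point
    return images

# Toy stand-ins: the "screen" simply shows the last executed operation,
# so an operation with no follow-up identifier persists across later
# time points, as in the jump example above.
state = {"last_op": "idle"}
def execute_op(op_id):
    state["last_op"] = op_id
def capture_screen():
    return state["last_op"]

frames = replay({0.1: "jump", 0.2: None, 0.3: None}, execute_op, capture_screen)
print(frames)  # {0.1: 'jump', 0.2: 'jump', 0.3: 'jump'}
```

Running the same `indication_info` against the reference application and then the target application yields the one-to-one reference and target images used in the comparison steps below.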
304. And the computer equipment executes the operation indicated by the operation identification in the operation indication information in the target application, and captures the obtained picture to obtain a target image.
The target application is installed on the computer device. After acquiring the running indication information, the computer device obtains an operation identifier in the running indication information, determines the operation indicated by the operation identifier, and executes the operation in the target application, so that the picture displayed when the operation is executed is shown in the target application. A screenshot of the picture is captured to obtain the target image.
In the embodiment of the application, the operation indicated by the operation identifier can be automatically executed in the target application by running the operation identifier in the indication information, so that the target image for detecting the picture quality is automatically generated, a user does not need to manually operate the target application, and an automatic mode for generating the target image is provided.
And because the reference application and the target application belong to different versions of the same application, directly running the reference application and the target application respectively according to the same running indication information to generate the reference image and the target image makes the content of the reference image similar to that of the target image. This ensures the consistency of the content of the two images, so that any difference between the target image and the reference image lies in picture quality, and the picture quality of the target image can therefore be detected by taking the reference image as a reference standard.
In one possible implementation, the reference application and the target application include a virtual scene, and the operation identification is used for indicating an operation executed by a virtual object in the virtual scene. And the computer equipment controls the virtual object in the virtual scene to execute the operation indicated by the operation identifier in the target application, and captures the obtained picture to obtain the target image.
In another possible implementation manner, the operation indication information includes a plurality of operation time points and an operation identifier corresponding to at least one operation time point. And the computer equipment executes the operation indicated by the operation identifier corresponding to the operation time point when reaching one operation time point each time in the target application, and captures the obtained picture to obtain a target image corresponding to the operation time point.
In the embodiment of the application, under the condition that the operation indication information includes a plurality of operation time points, the computer device generates a reference image corresponding to each operation time point and a target image corresponding to each operation time point respectively, so that the reference images and the target images are in one-to-one correspondence, and the image quality of the target images can be detected respectively according to the one-to-one correspondence reference images.
The specific process of generating the target image in step 304 is the same as the specific process of generating the reference image in step 303, and is not described in detail herein.
305. The computer device divides the reference image into a plurality of reference image blocks, divides the target image into a plurality of target image blocks, and the target image blocks correspond to the reference image blocks one to one.
The computer device divides the reference image into a plurality of reference image blocks, and divides the target image into the same number of target image blocks, for example, 100 or 200 image blocks, and the like. The target image block at a certain position in the target image corresponds to the reference image block at the same position in the reference image.
For example, the computer device divides the reference image into 10 x 10 grid regions, each grid region being determined as one reference image block. The computer device divides the target image into 10 x 10 grid regions, each grid region being determined as one target image block.
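The grid division in step 305 can be sketched as follows, assuming the image is a 2D list of pixels whose dimensions divide evenly by the grid size (as in the 10 x 10 example); handling of ragged edges is left out of this sketch:

```python
def split_into_blocks(image, rows, cols):
    """Divide an image (2D list of pixels) into rows x cols grid regions,
    each region becoming one image block, returned in row-major order.

    Blocks produced from the target image and the reference image with
    the same grid therefore correspond one to one by index (position).
    """
    h, w = len(image), len(image[0])
    bh, bw = h // rows, w // cols  # height and width of each grid region
    blocks = []
    for r in range(rows):
        for c in range(cols):
            block = [row[c * bw:(c + 1) * bw]
                     for row in image[r * bh:(r + 1) * bh]]
            blocks.append(block)
    return blocks

# A 4x4 toy image split into a 2x2 grid yields four 2x2 blocks.
img = [[x + 4 * y for x in range(4)] for y in range(4)]
blocks = split_into_blocks(img, 2, 2)
print(len(blocks))   # 4
print(blocks[0])     # [[0, 1], [4, 5]]
```

Because the same function is applied to both images, block `i` of the target image always occupies the same position as block `i` of the reference image.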
306. And the computer equipment compares the plurality of target image blocks with the corresponding reference image blocks to obtain the image quality detection results of the plurality of target image blocks.
The position of a target image block in the target image is the same as the position of its corresponding reference image block in the reference image, and the content of the target image is similar to the content of the reference image, so the content of a target image block is similar to the content of its corresponding reference image block, and the difference between them lies in picture quality. Therefore, in order to detect the picture quality of a target image block, the corresponding reference image block can be used as the reference standard for picture quality detection, and the target image block is compared with it. The picture quality detection result of the target image block is then determined according to the difference between the target image block and the reference image block, and this detection result is used for representing the picture quality of the target image block.
In one possible implementation manner, the computer device selects a target number of target image blocks from a plurality of target image blocks, and compares the target number of target image blocks with corresponding reference image blocks to obtain a picture quality detection result of the target number of target image blocks. For example, the total number of target image blocks is 100, and the computer device randomly selects 20 image blocks from the total number of target image blocks for picture quality detection.
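The random selection of a target number of blocks, keeping each sampled target block paired with its corresponding reference block, can be sketched as follows (the function name and the seeded generator are illustrative assumptions):

```python
import random

def sample_block_pairs(target_blocks, reference_blocks, target_number, seed=None):
    """Select a target number of target image blocks, each paired with
    its one-to-one corresponding reference image block, so that picture
    quality detection runs on a subset instead of every block."""
    rng = random.Random(seed)  # seeded for reproducible sampling
    indices = rng.sample(range(len(target_blocks)), target_number)
    return [(target_blocks[i], reference_blocks[i]) for i in indices]

# e.g. detect picture quality on 20 of 100 blocks, as in the example above
tgt = [f"t{i}" for i in range(100)]
ref = [f"r{i}" for i in range(100)]
pairs = sample_block_pairs(tgt, ref, 20, seed=0)
print(len(pairs))  # 20
```

Sampling indices rather than blocks guarantees each selected target block is still compared against the reference block at the same position.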
In the embodiment of the application, the reference image and the target image are respectively divided into image blocks, and the image quality detection is performed by taking the image blocks as units.
307. And the computer equipment fuses the image quality detection results of the plurality of target image blocks to obtain the image quality detection result of the target image.
After obtaining the picture quality detection results of the target image blocks, the computer device fuses them to obtain the picture quality detection result of the target image. Since the target image is generated by the target application during running, the picture quality detection result of the target image is also used for representing the picture quality of the target application. In the embodiment of the application, considering that application picture quality is an important factor influencing user experience, the reference image generated by the reference application is used as the reference standard of picture quality detection to detect the picture quality of the target image generated by the target application. A referenced method for detecting application picture quality is thereby provided, which is beneficial to improving the accuracy of picture quality detection.
In a possible implementation manner, the picture quality detection result of a target image block is a quality detection value, which is used for representing the picture quality of the target image block: the larger the quality detection value, the better the picture quality of the target image block; the smaller the quality detection value, the worse the picture quality of the target image block. The computer device fuses the quality detection values of the plurality of target image blocks to obtain the quality detection value of the target image. For example, the computer device determines the average value of the quality detection values of the plurality of target image blocks as the quality detection value of the target image, or determines the median of those quality detection values as the quality detection value of the target image, or determines the minimum of those quality detection values as the quality detection value of the target image, and the like, which is not limited in the embodiment of the present application.
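The three fusion options named above (average, median, minimum) can be sketched in one helper; the `strategy` parameter is an illustrative assumption, since the embodiment leaves the choice open:

```python
import statistics

def fuse_quality_values(block_values, strategy="mean"):
    """Fuse per-block quality detection values into a single quality
    detection value for the target image. The embodiment allows the
    average, the median, or the minimum; which one to use is a design
    choice (the minimum is the most conservative, flagging an image as
    bad if any single block is bad)."""
    if strategy == "mean":
        return statistics.mean(block_values)
    if strategy == "median":
        return statistics.median(block_values)
    if strategy == "min":
        return min(block_values)
    raise ValueError(f"unknown strategy: {strategy}")

values = [1.0, 1.0, 1.0, 0.0]  # one badly degraded block
print(fuse_quality_values(values, "mean"))    # 0.75
print(fuse_quality_values(values, "median"))  # 1.0
print(fuse_quality_values(values, "min"))     # 0.0
```

The example shows why the choice matters: with one badly degraded block, the median hides the defect, the mean dilutes it, and the minimum surfaces it directly.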
In another possible implementation, the picture quality detection result includes a plurality of quality levels and a score corresponding to each quality level, and the score corresponding to a quality level indicates a likelihood that the picture quality of the target image belongs to the quality level. The computer device determines the quality grade corresponding to the highest score in the picture quality detection results as the quality grade to which the picture quality of the target image belongs after obtaining the picture quality detection results of the target image.
For example, the quality grades are divided into 5 points (very good picture quality), 4 points (good picture quality), 3 points (average picture quality), 2 points (poor picture quality), and 1 point (very poor picture quality). The picture quality detection result includes a score corresponding to each quality grade; for example, the scores corresponding to the above five quality grades are 0.02, 0.9, 0.05, 0.02, and 0.01, respectively. The computer device then takes the quality grade of 4 points, which has the largest corresponding score, as the quality grade to which the picture quality of the target image belongs, that is, the picture quality of the target image is good.
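Selecting the quality grade with the highest score, as in the example above, is a simple argmax over the per-grade scores (the dict representation is an illustrative assumption):

```python
def quality_grade(grade_scores):
    """Pick the quality grade with the highest score as the grade to
    which the target image's picture quality belongs. `grade_scores`
    maps each grade to the likelihood that the image belongs to it."""
    return max(grade_scores, key=grade_scores.get)

# Scores from the example above: grade 4 (good picture quality) wins.
scores = {5: 0.02, 4: 0.9, 3: 0.05, 2: 0.02, 1: 0.01}
print(quality_grade(scores))  # 4
```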
It should be noted that, in the embodiments of the present application, only the picture quality detection is performed on one target image according to the corresponding reference image as an example. In another embodiment, the computer device obtains a target image and a reference image corresponding to the multiple operation time points, performs picture quality detection on the target image according to the corresponding reference image for each target image, thereby obtaining a picture quality detection result of the target image corresponding to each operation time point, and subsequently fuses the picture quality detection results of the multiple target images to obtain a picture quality detection result of the target application.
Because a picture quality detection result is obtained for the target image corresponding to each operation time point, when the picture quality detection results of a plurality of consecutive target images are abnormal, the operation time points corresponding to those consecutive target images can be directly determined, and the cause of the abnormality can be investigated at those operation time points, which facilitates accurately locating the picture quality problem of the target application.
In the method provided by this embodiment of the application, in order to detect the picture quality of the target application, a reference application that belongs to the same application as the target application but to a different version is introduced. The reference application and the target application are each run according to the same operation indication information, so that a reference image and a target image are generated automatically; the reference image is used as a comparison standard to detect the picture quality of the target image, yielding a picture quality detection result of the target image.
Moreover, through the operation identifier in the running indication information, the operation indicated by the operation identifier can be executed automatically in the reference application, so that the reference image used as the reference standard is generated automatically without the user manually operating the reference application, providing an automated way of generating the reference image.
Furthermore, through the operation identifier in the running indication information, the operation indicated by the operation identifier can be executed automatically in the target application, so that the target image used for picture quality detection is generated automatically without the user manually operating the target application, providing an automated way of generating the target image.
Moreover, because the reference application and the target application are different versions of the same application, running them directly according to the same running indication information to generate the reference image and the target image makes the content of the two images similar. This ensures consistency between the content of the reference image and that of the target image, so that the remaining difference between them lies in picture quality, allowing the picture quality of the target image to be detected with the reference image as the reference standard.
And under the condition that the operation indication information comprises a plurality of operation time points, the computer equipment respectively generates a reference image corresponding to each operation time point and a target image corresponding to each operation time point, so that the reference images and the target images are in one-to-one correspondence, and the image quality of the target images can be respectively detected according to the one-to-one correspondence reference images.
In addition, the reference image and the target image are respectively divided into image blocks, and the image quality detection is performed by taking the image blocks as units.
In addition, because a picture quality detection result is obtained for the target image at each operation time point, when the detection results of a plurality of consecutive target images are abnormal, the operation time points corresponding to those images can be determined directly, so that the cause of the abnormality can be investigated at those time points, which facilitates accurately locating the picture quality problem of the target application.
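The localization step above can be sketched as follows, assuming per-time-point results are reduced to quality levels and a level below 3 counts as abnormal (both assumptions; the embodiment does not fix these details):

```python
def abnormal_runs(levels, threshold=3, min_run=2):
    """Return (start, end) index ranges where at least `min_run` consecutive
    frames have a quality level below `threshold`."""
    runs, start = [], None
    for i, level in enumerate(levels):
        if level < threshold:
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_run:
                runs.append((start, i - 1))
            start = None
    if start is not None and len(levels) - start >= min_run:
        runs.append((start, len(levels) - 1))
    return runs

# Quality levels of target images at consecutive operation time points.
print(abnormal_runs([4, 4, 2, 1, 2, 4, 5]))  # [(2, 4)]
```

The returned index ranges map directly back to operation time points, which is the anomaly-localization behavior described above.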
The embodiment of fig. 3 takes the case in which the operation indication information includes an operation identifier as an example to illustrate the process of running the reference application and the target application according to the operation indication information. In another embodiment, the reference application and the target application include a virtual scene, the virtual scene includes a virtual object, and the operation indication information includes a position of the virtual object in the virtual scene; the process of running the reference application and the target application according to such operation indication information is described in detail in the embodiment of fig. 6 below.
Fig. 6 is a flowchart of another picture quality detection method according to an embodiment of the present application. The embodiment of the application is executed by a computer device, and referring to fig. 6, the method comprises the following steps:
601. In response to an operation instruction in the reference application, the computer device controls, in the reference application, the virtual object in the virtual scene to execute the operation corresponding to the operation instruction.
In order to generate the running instruction information, the virtual object in the reference application may be controlled, under the operations of a tester, to execute corresponding operations, so that the positions of the virtual object in the virtual scene can be recorded as the running instruction information.
Accordingly, the tester performs an operation for controlling the virtual object in the reference application on the computer device; upon detecting the tester's operation, the computer device generates an operation instruction corresponding to the operation and, in response to the operation instruction, controls the virtual object in the virtual scene to execute the corresponding operation in the reference application.
For example, the reference application is a game application; the tester performs an operation of controlling the virtual object to jump in the reference application, the operation instruction corresponding to the operation is a jump instruction, and in response to the jump instruction the computer device controls the virtual object to jump in the reference application.
602. The computer device acquires a position of the virtual object in the virtual scene and generates operation instruction information, wherein the operation instruction information comprises the position.
After the computer equipment controls the virtual object according to the operation instruction, the position of the virtual object in the virtual scene is obtained, the operation indication information comprising the position is generated, and then the reference application or the target application can be operated according to the position in the operation indication information. Optionally, the position may be coordinate information of the virtual object in the virtual scene, and the like, which is not limited in this application embodiment.
In a possible implementation manner, after the computer device generates the operation indication information, the operation indication information is stored locally, so that it can be acquired directly when it needs to be used later. In another possible implementation manner, after generating the operation instruction information, the computer device may further send the operation instruction information to a server, so that when another device subsequently needs to use the operation instruction information, it requests the operation instruction information from the server.
It should be noted that the above description takes obtaining the position once as an example to explain the process of generating the operation instruction information including the position. In another embodiment, during the process of controlling the virtual object in the virtual scene, a display time point is determined every target time length, and each time a display time point is reached, the position of the virtual object in the virtual scene is obtained and added to the operation instruction information, so that the operation instruction information includes a plurality of positions arranged in sequence.
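The sampling loop above can be sketched minimally as follows; the 0.1-second target time length and the get_position() helper are assumptions for illustration, neither is specified by the embodiment:

```python
# Hypothetical stand-in for querying the virtual object's scene coordinates.
def get_position(step):
    return (float(step), 0.0)

target_time_length = 0.1  # assumed sampling interval in seconds
run_info = []             # the operation instruction information being built
for step in range(3):
    time_point = round((step + 1) * target_time_length, 1)  # display time point
    run_info.append({"time": time_point, "position": get_position(step)})

print(run_info[0])  # {'time': 0.1, 'position': (0.0, 0.0)}
```

The resulting list of (time point, position) entries is the sequence of positions arranged in order that the text describes.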
It should be noted that, in the embodiment of the present application, only the process of generating the operation indication information is described by taking the reference application as an example. In another embodiment, since it is only necessary to ensure that the generated operation indication information can indicate how the reference application and the target application operate, the computer device may also generate the operation indication information by operating other versions of applications belonging to the same application as the reference application, for example, the computer device generates the operation indication information by operating the target application.
It should be noted that the above takes the case in which the computer device automatically generates the operation instruction information as an example. In another embodiment, the operation instruction information may also be an operation script written by a tester; the tester uploads the operation script to the computer device, so that the computer device does not need to execute steps 601 and 602.
603. The computer device obtains operation instruction information.
The process of obtaining the operation indication information in step 603 is the same as the process of obtaining the operation indication information in step 302, and is not described herein again.
604. And in the reference application, the computer equipment controls the virtual object in the virtual scene to be displayed at the position in the operation instruction information, and captures the obtained picture to obtain a reference image.
The computer equipment is provided with a reference application, after the running instruction information is obtained, the computer equipment obtains the position in the running instruction information, and controls the virtual object in the virtual scene to be displayed at the position in the reference application, so that the picture comprising the virtual scene is displayed in the reference application, and the picture is subjected to screenshot to obtain a reference image.
In the embodiment of the application, the virtual object can be automatically controlled to be displayed at the position in the reference application by operating the position in the indication information, so that the reference image used as the reference standard is automatically generated, the reference application does not need to be manually operated by a user, and an automatic mode for generating the reference image is provided.
In one possible implementation, the operation indication information includes a plurality of display time points and a position corresponding to each display time point. And the computer equipment controls the virtual object to be displayed at a position corresponding to the display time point when reaching one display time point each time in the reference application, and captures the obtained picture to obtain a reference image corresponding to the display time point.
The display time points are time points during the running of the reference application. For example, there are 10000 display time points, at 0.1 seconds, 0.2 seconds, 0.3 seconds, and so on up to 1000 seconds, that is, the total time for running the reference application is 1000 seconds. Each display time point corresponds to a position, which indicates where the virtual object is displayed at that time point. Therefore, for each display time point that arrives, the computer device controls the virtual object to be displayed at the corresponding position in the reference application and takes a screenshot of the displayed picture to obtain the reference image corresponding to that display time point. The computer device thus finally obtains a reference image for each display time point, that is, a plurality of reference images.
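The per-time-point replay described above can be sketched as follows; display_at() and screenshot() are hypothetical stand-ins for the application's engine interfaces, and the timing loop is simplified (no real clock):

```python
def replay(run_info, display_at, screenshot):
    """Replay recorded positions and capture one reference image per time point."""
    images = {}
    for entry in run_info:
        display_at(entry["position"])         # show the virtual object here
        images[entry["time"]] = screenshot()  # capture the displayed picture
    return images

# Toy usage with dummy engine callbacks.
frames = replay(
    [{"time": 0.1, "position": (0, 0)}, {"time": 0.2, "position": (1, 0)}],
    display_at=lambda p: None,
    screenshot=lambda: "frame",
)
print(sorted(frames))  # [0.1, 0.2]
```

The same loop run against the target application produces the per-time-point target images, giving the one-to-one correspondence the embodiment relies on.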
In another possible implementation manner, the running indication information further includes a target operation identifier, and the target operation identifier is used for indicating an operation performed by the virtual object at the position. And the computer equipment controls the virtual object to be displayed at the position in the reference application, controls the virtual object to execute the operation indicated by the target operation identifier at the position, and captures the obtained picture to obtain a reference image.
That is, the operation indication information is used to indicate that the operation indicated by the target operation identifier is to be executed while the control virtual object is displayed at the position. For example, the operation indicated by the target operation identifier is a shooting operation, a picking operation, a dancing operation, an attacking operation, or the like, which is not limited in this embodiment of the application.
605. And in the target application, the computer equipment controls the virtual object in the virtual scene to be displayed at the position in the operation instruction information, and captures the obtained picture to obtain a target image.
The target application is installed on the computer device. After obtaining the operation instruction information, the computer device obtains the position in the operation instruction information and controls the virtual object in the virtual scene to be displayed at that position in the target application, so that a picture including the virtual scene is displayed in the target application; the computer device takes a screenshot of the picture to obtain the target image. Because the target image is obtained by capturing the picture of the target application, the picture quality of the target image can represent the picture quality of the target application, and the computer device can therefore detect the picture quality of the target image.
In the embodiment of the application, the virtual object can be automatically controlled to be displayed at the position in the target application by operating the position in the indication information, so that the target image for detecting the picture quality is automatically generated, a user does not need to manually operate the target application, and an automatic mode for generating the target image is provided.
And because the reference application and the target application belong to different versions of the same application, the reference application and the target application are respectively run directly according to the same running indication information to generate a reference image and a target image, so that the content of the reference image is similar to that of the target image, thereby ensuring the consistency of the content of the reference image and the content of the target image, and ensuring the difference between the target image and the reference image in the picture quality so as to detect the picture quality of the target image by taking the reference image as a reference standard.
In one possible implementation, the operation indication information includes a plurality of display time points and a position corresponding to each display time point. And the computer equipment controls the virtual object to be displayed at a position corresponding to the display time point when reaching one display time point each time in the target application, and captures the obtained picture to obtain a target image corresponding to the display time point.
In another possible implementation manner, the running indication information further includes a target operation identifier, and the target operation identifier is used for indicating an operation performed by the virtual object at the position. And the computer equipment controls the virtual object to be displayed at the position in the target application, controls the virtual object to execute the operation indicated by the target operation identifier at the position, and captures the obtained picture to obtain a target image.
In the embodiment of the application, under the condition that the operation indication information includes a plurality of display time points, the computer device generates a reference image corresponding to each display time point and a target image corresponding to each display time point respectively, so that the reference images correspond to the target images one to one, and the image quality of the target images is detected respectively according to the one-to-one corresponding reference images.
The specific process of generating the target image in step 605 is the same as the specific process of generating the reference image in step 604, and is not described in detail herein.
606. The computer device divides the reference image into a plurality of reference image blocks, divides the target image into a plurality of target image blocks, and the target image blocks correspond to the reference image blocks one to one.
607. And the computer equipment compares the plurality of target image blocks with the corresponding reference image blocks to obtain the image quality detection results of the plurality of target image blocks.
608. And the computer equipment fuses the image quality detection results of the plurality of target image blocks to obtain the image quality detection result of the target image.
The process of obtaining the picture quality detection result of the target image in steps 606-608 is the same as the corresponding process described above, and is not described herein again.
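Steps 606-608 can be sketched minimally as follows, assuming a 2x2 block grid, a mean-absolute-difference per-block comparison, and mean fusion of the per-block results (all three are assumptions; the embodiment does not fix the grid size, the comparison metric, or the fusion rule):

```python
import numpy as np

def split_blocks(img, n=2):
    """Split an image into an n x n grid of equal blocks."""
    h, w = img.shape[:2]
    bh, bw = h // n, w // n
    return [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(n) for c in range(n)]

def block_score(target_block, ref_block):
    """Stand-in per-block comparison: mean absolute pixel difference."""
    return float(np.mean(np.abs(target_block.astype(np.float64)
                                - ref_block.astype(np.float64))))

ref = np.zeros((4, 4), dtype=np.uint8)  # reference image
tgt = ref.copy()
tgt[0, 0] = 8                           # one degraded pixel in the first block

# Compare corresponding blocks, then fuse the per-block results.
scores = [block_score(t, r) for t, r in zip(split_blocks(tgt), split_blocks(ref))]
print(len(scores), np.mean(scores))  # 4 0.5
```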
In the method provided by this embodiment of the application, in order to detect the picture quality of the target application, a reference application that belongs to the same application as the target application but to a different version is introduced. The reference application and the target application are each run according to the same operation indication information, so that a reference image and a target image are generated automatically; the reference image is used as a comparison standard to detect the picture quality of the target image, yielding a picture quality detection result of the target image.
Moreover, through the position in the running instruction information, the virtual object can be automatically controlled to be displayed at that position in the reference application, so that the reference image used as the reference standard is generated automatically without the user manually operating the reference application, providing an automated way of generating the reference image.
Furthermore, through the position in the running instruction information, the virtual object can be automatically controlled to be displayed at that position in the target application, so that the target image used for picture quality detection is generated automatically without the user manually operating the target application, providing an automated way of generating the target image.
Moreover, because the reference application and the target application are different versions of the same application, running them directly according to the same running instruction information to generate the reference image and the target image makes the content of the two images similar; this ensures consistency between the content of the reference image and that of the target image, so that the remaining difference between them lies in picture quality, allowing the picture quality of the target image to be detected with the reference image as the reference standard.
And under the condition that the operation indication information comprises a plurality of display time points, the computer equipment respectively generates a reference image corresponding to each display time point and a target image corresponding to each display time point, so that the reference images and the target images are in one-to-one correspondence, and the image quality of the target images can be respectively detected according to the one-to-one correspondence reference images.
The above-described embodiments of fig. 3 and 6 illustrate the process of acquiring the reference image and the target image, respectively. In another embodiment, the reference image and the target image are selected from the candidate reference image and the candidate target image, so as to ensure consistency between the reference image and the target image, and the selecting process of the reference image and the target image is described in detail in the embodiment of fig. 7 below.
Fig. 7 is a flowchart of another picture quality detection method according to an embodiment of the present application. The embodiment of the application is executed by a computer device, and referring to fig. 7, the method comprises the following steps:
701. the computer device obtains operation instruction information.
The process of obtaining the operation indication information in step 701 is the same as the process of obtaining the operation indication information in step 302, and is not described herein again.
702. And the computer equipment respectively runs the reference application and the target application according to the running indication information to obtain image pairs respectively corresponding to the multiple time points, wherein the image pairs comprise candidate reference images and candidate target images.
And the computer equipment runs the reference application according to the running indication information to obtain candidate reference images corresponding to a plurality of time points, wherein the candidate reference images corresponding to the time points are generated by the reference application at the time points. And the computer equipment runs the target application according to the running indication information to obtain candidate target images corresponding to the multiple time points, wherein the candidate target images corresponding to the time points are generated by the target application at the time points. And the candidate reference image and the candidate target image corresponding to each time point form an image pair corresponding to the time point. The content of the candidate reference image and the content of the candidate target image in the image pair are similar.
The process of obtaining the candidate reference image and the candidate target image by the computer device is described in detail in the above embodiment of fig. 3 or fig. 6, and is not described herein again.
703. The computer device determines a similarity between the candidate reference image and the candidate target image in each image pair.
For each image pair, the computer device determines a similarity between the candidate reference image and the candidate target image in the image pair, resulting in a similarity between the candidate reference image and the candidate target image in each image pair.
In one possible implementation, the computer device divides the candidate reference image into a plurality of reference image blocks, divides the candidate target image into a plurality of target image blocks, and the reference image blocks are in one-to-one correspondence with the target image blocks. The computer device determines a similarity between each reference image block and the corresponding target image block, and determines a minimum similarity among the plurality of similarities as a similarity between the candidate reference image and the candidate target image.
The purpose of determining the similarity between the candidate reference image and the candidate target image in this embodiment is to screen out image pairs with higher consistency according to the similarity, and the consistency of an image pair is limited by the minimum similarity among the similarities of its image blocks. For example, if the candidate reference image and the candidate target image each include 10 image blocks and the similarity for 1 image block is very low, then even if the similarities for the other 9 image blocks are all very high, the consistency between the candidate reference image and the candidate target image is still very low. Accordingly, the computer device determines the minimum similarity among the plurality of similarities as the similarity between the candidate reference image and the candidate target image.
Optionally, the computer device determines the similarity between the reference image block and the target image block using the following formulas:

PSNR = 10 × log10(MAX² / MSE)

MSE = (1 / (H × W)) × Σ_{i=1}^{H} Σ_{j=1}^{W} (X(i, j) − Y(i, j))²

where PSNR (Peak Signal to Noise Ratio) represents the similarity between the reference image block and the target image block, log10(·) represents the logarithm with base 10, MSE (Mean Square Error) represents the difference between the content of the reference image block and the content of the target image block, MAX represents the maximum possible pixel value (for example, 255 for 8-bit images), H represents the number of pixels of the reference image block/target image block in the length direction, W represents the number of pixels of the reference image block/target image block in the width direction, i is a positive integer not greater than H, and j is a positive integer not greater than W. X(i, j) represents the pixel value of the pixel point in the i-th row and j-th column of the reference image block, and Y(i, j) represents the pixel value of the pixel point in the i-th row and j-th column of the target image block.
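The block-level PSNR and the minimum-over-blocks rule can be sketched as follows; MAX = 255 and the toy 4x4 blocks are assumptions for illustration:

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-sized image blocks."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical blocks
    return 10.0 * np.log10(max_val ** 2 / mse)

x = np.full((4, 4), 100, dtype=np.uint8)  # reference block
y = x.copy()
y[0, 0] = 110                             # one differing pixel
print(round(psnr(x, y), 2))  # 40.17

# The pair-level similarity is the minimum block-level similarity.
block_similarities = [psnr(x, x), psnr(x, y)]
pair_similarity = min(block_similarities)
```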
704. The computer device screens out, from the plurality of image pairs, the image pairs whose similarity meets the target condition, and obtains the reference image and the target image from the screened image pairs.
The higher the similarity between the candidate reference image and the candidate target image, the higher the consistency between their contents, and the larger the proportion of the difference between the two images that is attributable to picture quality; therefore, the accuracy of detecting the picture quality of the candidate target image according to the candidate reference image is higher. Conversely, the lower the similarity, the lower the consistency between their contents and the smaller the proportion of the difference attributable to picture quality, so the accuracy of the detection is lower. Therefore, the computer device screens out the image pairs whose similarity meets the target condition, obtains the reference image and the target image from the screened image pairs, uses the reference image as the reference standard for picture quality detection, and uses the target image as the image on which picture quality detection is performed.
In one possible implementation, the computer device determines the median of the similarities of the plurality of image pairs, and a similarity meeting the target condition is a similarity higher than the median. Alternatively, a similarity meeting the target condition is a similarity higher than a preset threshold, or one of the target number of highest similarities when the similarities are sorted in descending order, and the like.
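The median-based screening can be sketched minimally as follows (the similarity values are hypothetical, e.g. block-minimum PSNR values):

```python
import statistics

# Hypothetical pair-level similarities for five image pairs.
sims = [35.1, 12.4, 33.8, 9.7, 34.5]

median = statistics.median(sims)        # 33.8
kept = [s for s in sims if s > median]  # pairs meeting the target condition
print(kept)  # [35.1, 34.5]
```

Screening against the median keeps roughly the more consistent half of the image pairs, which matches the intent described above.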
In this embodiment of the application, because the running processes of the reference application and the target application can hardly be made completely consistent, some candidate reference images differ from their corresponding candidate target images. Therefore, the candidate reference images and candidate target images with such differences are excluded according to the similarity between them, which guarantees the consistency of the screened reference images and target images and, in turn, the accuracy of the picture quality detection.
For example, the reference application and the target application are game applications, the reference image and the target image are images generated during a game, and when the computer device runs the reference application or the target application according to the running instruction information, NPC (Non-Player Character) may randomly appear in a virtual scene of the reference application or the target application, resulting in inconsistency between content of the reference image generated by the reference application and content of the target image generated by the target application. Fig. 8 is a schematic diagram of a reference image and a target image according to an embodiment of the present application, and as shown in fig. 8, a reference image 801 and a target image 802 correspond to a same point in time, where the reference image 801 includes a virtual object 811, and the target image 802 includes a virtual object 811 and a virtual object 812, and the virtual object 812 is a randomly-appearing NPC.
705. The computer device divides the reference image into a plurality of reference image blocks, divides the target image into a plurality of target image blocks, and the target image blocks correspond to the reference image blocks one to one.
706. And the computer equipment compares the plurality of target image blocks with the corresponding reference image blocks to obtain the image quality detection results of the plurality of target image blocks.
707. And the computer equipment fuses the image quality detection results of the plurality of target image blocks to obtain the image quality detection result of the target image.
The process of obtaining the picture quality detection result of the target image in steps 705-707 is the same as the corresponding process described above, and is not described herein again.
In the method provided by the embodiment of the application, in order to detect the picture quality of the target application, reference applications which belong to the same application as the target application but belong to different versions are introduced, the reference applications and the target application are respectively operated according to the same operation indication information, so that a reference image and a target image are automatically generated, the reference image is used as a comparison standard, the picture quality of the target image is detected, and a picture quality detection result of the target image is obtained.
In addition, considering that the running processes of the reference application and the target application can hardly be made completely consistent, so that some candidate reference images differ from their corresponding candidate target images, the candidate reference images and candidate target images with such differences are excluded according to the similarity between them, which guarantees the consistency of the screened reference images and target images and, in turn, the accuracy of the picture quality detection.
Fig. 9 is a flowchart of still another picture quality detection method provided in an embodiment of the present application. This embodiment takes the picture quality detection of a side-scrolling action game as an example. The reference game application and the target game application are different versions of the same side-scrolling action game application; both include a virtual scene, and the virtual scene includes a virtual object for playing the action game. The reference game application is a qualified version that has passed picture quality detection, and the target game application is a version to be tested that has not yet undergone picture quality detection. Referring to fig. 9, the method includes the following steps.
901. The terminal installs a reference game application, responds to a game operation instruction in the reference game application, controls a virtual object in a virtual scene to execute game operation corresponding to the game operation instruction in the reference game application, obtains a game picture, and generates corresponding game running instruction information.
Optionally, the game running instruction information includes an operation identifier corresponding to the game operation. Optionally, the game running instruction information includes a position of the virtual object in the virtual scene. For example, the game operation command is a command for controlling the virtual object to shoot, hit, jump, or release a skill.
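As an illustration of how such game running instruction information might be structured, the following is a minimal sketch assuming a list of timestamped records, each holding the virtual object's position and an optional operation identifier. All field and function names here are hypothetical and not taken from the embodiment.

```python
# Hypothetical structure for game running instruction information: a list of
# timestamped records with the virtual object's position and, optionally, an
# operation identifier (field names are illustrative assumptions).
def record_step(log, time_point, position, operation_id=None):
    """Append one frame's record to the running instruction log."""
    entry = {"time": time_point, "position": position}
    if operation_id is not None:
        entry["operation"] = operation_id  # e.g. "jump", "shoot", "release_skill"
    log.append(entry)
    return log

instruction_log = []
record_step(instruction_log, 0.0, (10, 5))
record_step(instruction_log, 0.5, (12, 5), operation_id="jump")
```

Replaying such a log would amount to iterating over the records, displaying the virtual object at each recorded position, and re-executing any recorded operation at its time point.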
902. And the terminal runs the reference game application according to the game running instruction information to obtain a game picture in the reference game application, and captures the game picture to obtain a reference game image.
903. And the terminal installs the target game application, operates the target game application according to the game operation instruction information to obtain a game picture in the target game application, and captures the game picture to obtain a target game image.
904. And the terminal compares the target game image with the reference game image to obtain the picture quality detection result of the target game image.
Since the reference game image is obtained by capturing the game screen in the reference game application, the target game image is obtained by capturing the game screen in the target game application, and both game screens are reproductions of the game play recorded in step 901, the screen content of the reference game image and that of the target game image are the same; any difference lies in the screen quality. Therefore, the picture quality detection result of the target game image can be obtained by comparison with the reference game image as the reference standard. The picture quality detection result can represent the picture quality of the target game application, so that reference-based picture quality detection of the target game application is realized.
In another embodiment, the computer device may call a picture quality detection model and compare the target image with the reference image to obtain the picture quality detection result. The picture quality detection model is used for detecting the picture quality of an image. Optionally, the picture quality detection model is formed by a lightweight deep network, which has a small number of parameters and is a small-sized deep network suitable for a Central Processing Unit (CPU) and embedded devices. The process of training the picture quality detection model is described in detail in the embodiment of fig. 12 below.
In one possible implementation, as shown in fig. 10, the picture quality detection model includes a feature extraction network 1001, a feature fusion network 1002, and a feature detection network 1003. The feature extraction network 1001 is connected to the feature fusion network 1002, and the feature fusion network 1002 is connected to the feature detection network 1003. The feature extraction network 1001 is used for extracting the picture quality features of the reference image and the target image, the feature fusion network 1002 is used for fusing the picture quality features of the images, and the feature detection network 1003 is used for detecting the picture quality of the fused features output by the feature fusion network 1002.
Alternatively, as shown in fig. 10, the feature extraction network 1001 includes a first extraction layer 1011 and a second extraction layer 1021, whose weights are shared. The first extraction layer 1011 is used for extracting the picture quality feature of the reference image, and the second extraction layer 1021 is used for extracting the picture quality feature of the target image. Alternatively, as shown in fig. 10, the first extraction layer 1011 is a CNN (Convolutional Neural Network) composed of convolutional layers. The first extraction layer 1011 includes 3 convolutional layers and a fully-connected layer: the first convolutional layer has a kernel size of 4, a stride of 2, and an output dimension of 16; the second convolutional layer has a kernel size of 4, a stride of 2, and an output dimension of 32; the third convolutional layer has a kernel size of 4, a stride of 2, and an output dimension of 64; and the fully-connected layer has an output dimension of 512. The network structure of the second extraction layer 1021 is the same as that of the first extraction layer.
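The tensor shapes implied by the extraction layer above can be checked with a short sketch. The 64×64 RGB input resolution is an assumption for illustration only, since the embodiment does not fix the input size.

```python
# Sketch tracing the shapes through the first extraction layer: three
# convolutions (kernel 4, stride 2, output channels 16/32/64) followed by a
# 512-dimensional fully-connected layer. The 64x64 RGB input is an assumption.
def conv_out(size, kernel=4, stride=2):
    """Spatial output size of a valid (unpadded) convolution."""
    return (size - kernel) // stride + 1

size, channels = 64, 3  # assumed 64x64 RGB input
for out_channels in (16, 32, 64):  # the three convolutional layers
    size = conv_out(size)
    channels = out_channels

flattened = channels * size * size  # features entering the 512-dim FC layer
```

Under these assumptions the spatial size shrinks 64 → 31 → 14 → 6, so the fully-connected layer maps 64·6·6 = 2304 features to the 512-dimensional picture quality feature.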
Alternatively, as shown in fig. 10, the feature detection network 1003 includes two fully-connected layers; the first fully-connected layer has an output dimension of 512, and the second has an output dimension of 5.
The process of the computer device invoking the picture quality detection model for picture quality detection is described in detail below with respect to the embodiment of fig. 11. Fig. 11 is a flowchart of another picture quality detection method according to an embodiment of the present application. The embodiment of the application is executed by a computer device, and referring to fig. 11, the method includes:
1101. the computer device obtains operation instruction information.
1102. And the computer equipment respectively runs the reference application and the target application according to the running indication information to obtain a reference image and a target image.
The process of acquiring the operation instruction information in step 1101, and the process of generating the reference image and the target image in step 1102 are described in detail in the above embodiments, and are not described herein again.
1103. And calling a feature extraction network in the picture quality detection model by the computer equipment, and performing feature extraction on the reference image and the target image to obtain a first picture quality feature of the reference image and a second picture quality feature of the target image.
The computer equipment inputs the reference image and the target image into the feature extraction network, the feature extraction network performs feature extraction on the reference image to obtain a first picture quality feature, and the feature extraction network performs feature extraction on the target image to obtain a second picture quality feature. Wherein the first picture quality characteristic is used for representing the picture quality of the reference image, and the second picture quality characteristic is used for representing the picture quality of the target image.
In one possible implementation, the feature extraction network includes a first extraction layer and a second extraction layer. The computer device calls the first extraction layer to extract the features of the reference image to obtain the first picture quality feature, and calls the second extraction layer to extract the features of the target image to obtain the second picture quality feature. That is, the computer device inputs the reference image into the first extraction layer, which outputs the first picture quality feature, and inputs the target image into the second extraction layer, which outputs the second picture quality feature.
1104. And calling a feature fusion network in the picture quality detection model by the computer equipment, and fusing the first picture quality feature, the second picture quality feature and the difference feature between the first picture quality feature and the second picture quality feature to obtain a fusion feature.
The computer equipment inputs the first picture quality characteristic and the second picture quality characteristic into a characteristic fusion network, and the characteristic fusion network fuses the first picture quality characteristic, the second picture quality characteristic and the difference characteristic between the first picture quality characteristic and the second picture quality characteristic to obtain the fusion characteristic.
In one possible implementation, the difference feature between the first picture quality feature and the second picture quality feature is a difference between the second picture quality feature and the first picture quality feature.
In one possible implementation, the computer device invokes a feature fusion network to cascade the first picture quality feature, the second picture quality feature, and the difference feature between the first picture quality feature and the second picture quality feature to obtain the fusion feature.
For example, the computer device uses the following formula to obtain the fusion features.
f = [f1, f2, f1 - f2];
where f represents the fusion feature, f1 represents the second picture quality feature, f2 represents the first picture quality feature, f1 - f2 represents the difference feature between the second picture quality feature and the first picture quality feature, and [·] represents the cascade (concatenation) operation.
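A minimal sketch of this cascade fusion, assuming 512-dimensional picture quality features as produced by the extraction network of fig. 10:

```python
import numpy as np

# Sketch of the cascade (concatenation) fusion f = [f1, f2, f1 - f2],
# assuming 512-dimensional picture quality features. Random vectors stand
# in for the features output by the two extraction layers.
f1 = np.random.rand(512)  # second picture quality feature (target image)
f2 = np.random.rand(512)  # first picture quality feature (reference image)

fused = np.concatenate([f1, f2, f1 - f2])  # 3 * 512 = 1536 dimensions
```

The fused feature is three times the length of a single picture quality feature, which matches the 512-dimensional output of the feature detection network's first fully-connected layer only after that layer projects it back down.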
1105. And calling a feature detection network in the picture quality detection model by the computer equipment, and detecting the fusion features to obtain a picture quality detection result of the target image.
And the computer equipment inputs the fusion characteristics into the characteristic detection network, and the characteristic detection network detects the fusion characteristics to obtain a picture quality detection result, wherein the picture quality detection result is the picture quality detection result of the target image.
The above description takes the picture quality detection result obtained by directly detecting one whole target image as an example. In another embodiment, the computer device divides the reference image and the target image into a plurality of image blocks and invokes the picture quality detection model to perform picture quality detection in units of image blocks; the specific process is described in detail in the embodiment of fig. 3 or fig. 6 and is not described herein again.
In the method provided by the embodiment of the application, in order to detect the picture quality of the target application, a reference application is introduced that is the same application as the target application but a different version. The reference application and the target application are respectively run according to the same operation indication information, so that a reference image and a target image are automatically generated. The reference image is then used as a comparison standard to detect the picture quality of the target image, and a picture quality detection result of the target image is obtained.
In addition, in the embodiment of the application, the picture quality detection model is called to extract the picture quality characteristics of the image, and the picture quality detection is carried out based on the picture quality characteristics, so that the accuracy of the picture quality detection is improved, and the automation of the picture quality detection is realized.
In the process of training the picture quality detection model, a plurality of iterative processes need to be performed, and the embodiment of the application takes any iterative process as an example to explain the training process of the model. Fig. 12 is a flowchart of a method for training a picture quality detection model according to an embodiment of the present application. An execution subject of the embodiment of the present application is a computer device, and referring to fig. 12, the method includes:
1201. the computer device obtains a sample reference image, a sample target image, and a sample picture quality detection result of the sample target image.
The process of acquiring the sample reference image and the corresponding sample target image by the computer device is the same as the process of acquiring the reference image and the corresponding target image in the above embodiments, and is not described in detail herein.
The sample picture quality detection result of the sample target image is used to represent the picture quality of the sample target image, and in one possible implementation, the sample picture quality detection result is obtained by a tester by comparing the picture quality of the sample reference image and the sample target image. The sample picture quality detection result is the same as the picture quality detection result in the embodiment of fig. 3, and is not repeated herein.
1202. And calling a picture quality detection model by the computer equipment, and comparing the sample target image with the sample reference image to obtain a picture quality detection result of the sample target image.
The process of obtaining the picture quality detection result of the sample target image in step 1202 is the same as the process of obtaining the picture quality detection result of the target image in the embodiment of fig. 11 above, and is not repeated here.
1203. The computer device trains a picture quality detection model based on the picture quality detection result of the sample target image and the sample picture quality detection result.
Since the sample picture quality detection result is the true detection result of the sample target image, and the picture quality detection result obtained in step 1202 is the detection result predicted by the picture quality detection model, the computer device can determine, from the sample picture quality detection result, whether the predicted picture quality detection result is accurate, and thus whether the picture quality detection model is accurate. The computer device then updates the model parameters of the picture quality detection model based on the picture quality detection result of the sample target image and the sample picture quality detection result, so that the picture quality detection result predicted by the updated picture quality detection model is more accurate.
In a possible implementation manner, the closer the picture quality detection result is to the sample picture quality detection result, the more accurate the picture quality detection result is, that is, the more accurate the picture quality detection model is. Therefore, the computer device updates the model parameters of the picture quality detection model based on the difference between the picture quality detection result of the sample target image and the sample picture quality detection result, to obtain an updated picture quality detection model, so that this difference becomes smaller and a more accurate picture quality detection model is obtained. Alternatively, the computer device determines a loss value between the picture quality detection result of the sample target image and the sample picture quality detection result, the loss value representing the difference between the two, and trains the picture quality detection model based on the loss value.
The above steps 1201 to 1203 describe the process of training the picture quality detection model based on the picture quality detection result of one sample target image and the corresponding sample picture quality detection result. In another embodiment, the computer device repeats the above steps 1201 to 1203 with different sample reference images, sample target images, and sample picture quality detection results, iteratively updating the picture quality detection model. The computer device stops training the picture quality detection model in response to the iteration round reaching a first threshold, or in response to the loss value obtained in the current iteration round being not larger than a second threshold. The first threshold and the second threshold may be set to any suitable values; for example, the first threshold is 100 or 200, and the second threshold is 0.002 or 0.003.
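The two stopping rules just described can be sketched as follows. The threshold values follow the examples given in the text, while the decaying loss sequence is synthetic, for illustration only.

```python
# Sketch of the two stopping rules: stop when the iteration round reaches a
# first threshold, or when the loss of the current round is not larger than a
# second threshold. Thresholds follow the example values in the text.
def should_stop(iteration, loss, max_rounds=100, loss_threshold=0.002):
    return iteration >= max_rounds or loss <= loss_threshold

# Synthetic decaying losses standing in for per-round training losses.
losses = [0.5 / (i + 1) for i in range(300)]
stopped_at = next(i for i, loss in enumerate(losses, start=1)
                  if should_stop(i, loss))
```

With this synthetic sequence the loss never drops below 0.002 within 100 rounds, so the round-count rule fires first and training stops at round 100.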
In one possible implementation, the picture quality detection result includes a plurality of quality levels and a score corresponding to each quality level, the score corresponding to a quality level indicating a likelihood that the picture quality of the sample target image belongs to the quality level. The computer device determines picture quality detection results of the plurality of sample target images and a loss value between the sample picture quality detection results using the following formula.
Loss = -(1/N) · Σ_{n=1}^{N} Σ_{c=1}^{5} y_{n,c} · log(D_c(x_n))
where N represents the number of sample target images, n is a positive integer not greater than N, the picture quality detection result includes 5 quality levels, c is a positive integer not greater than 5, x_n represents the n-th sample target image, and D_c(x_n) represents the score corresponding to the c-th quality level of the n-th sample target image in the picture quality detection result. y_{n,c} represents the score corresponding to the c-th quality level of the n-th sample target image in the sample detection result; for example, y_{n,c} is equal to 1 when the picture quality of the sample target image belongs to the c-th quality level, and y_{n,c} is equal to 0 when it does not.
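A minimal sketch of this loss, assuming the predicted scores D_c(x_n) are probabilities and the sample labels y_{n,c} are one-hot, with N = 2 synthetic samples:

```python
import numpy as np

# Sketch of the averaged cross-entropy loss over N sample target images and
# 5 quality levels; scores are assumed to be probabilities, labels one-hot.
def quality_loss(scores, labels):
    """scores, labels: arrays of shape (N, 5)."""
    n = scores.shape[0]
    return -np.sum(labels * np.log(scores)) / n

scores = np.array([[0.7, 0.1, 0.1, 0.05, 0.05],   # sample 1: level 1 likely
                   [0.2, 0.6, 0.1, 0.05, 0.05]])  # sample 2: level 2 likely
labels = np.array([[1, 0, 0, 0, 0],
                   [0, 1, 0, 0, 0]])
loss = quality_loss(scores, labels)  # -(log 0.7 + log 0.6) / 2
```

Because the labels are one-hot, only the score of the true quality level of each sample contributes, and the loss shrinks as those scores approach 1.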
According to the method provided by the embodiment of the application, by training the picture quality detection model, the model automatically learns the depth features in the image that are related to picture quality detection, so that picture quality detection is performed according to the learned depth features, which improves the accuracy of the picture quality detection model.
Fig. 13 is a flowchart of still another picture quality detection method provided in an embodiment of the present application, where the reference application and the target application are game applications, for example, the game applications are a horizontal action game application, a 2D game application, or a 3D game application, and the game applications include virtual scenes, and the virtual scenes include virtual objects, referring to fig. 13, the method includes the following steps.
1301. The computer device records the game. In the embodiment of the present application, recording a game means recording the position of the virtual object and the operations performed by the player during game play, and generating operation instruction information that includes the position of the virtual object and the player operations.
The tester selects a game stage from the game application (the reference application or the target application) and enters game play. In order to conveniently re-run the reference application and the target application later, directly according to the positions of the virtual object and the player operations, the game stage may be a stage in a single-player mode. During game play, the computer device records the position and operation of the virtual object in each frame of the game picture.
1302. The computer device plays back the game in the reference application and the target application, respectively, generating a game image pair comprising the reference game image and the target game image.
In the embodiment of the present application, playing back a game means running the reference application and the target application respectively according to the running instruction information generated in step 1301, so that the game play is reproduced in the reference application and the target application. Playback can be performed in two modes: the first mode plays back the player operations using a game function, for example the player's joystick operations; the second mode directly plays back the position of the virtual object, that is, directly displays the virtual object at the corresponding position according to the position in the running instruction information. In addition, when playback is performed in the second mode, the operations performed by the player on the virtual object at that position, such as a shooting operation, a jumping operation, or a skill releasing operation, may also be played back simultaneously using the game function.
1303. The computer device removes inconsistent game image pairs. Because NPCs (non-player characters) may randomly appear in the virtual scene during game play, in order to ensure the consistency of the two images in each game image pair, the similarity between the images in each game image pair is determined, and the game image pairs with lower similarity are removed.
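This filtering step can be sketched as follows: split each image of a pair into blocks, compute a per-block similarity, take the minimum as the pair similarity (as in the similarity determining unit described later), and drop pairs below a threshold. The similarity measure here, a normalized inverse mean absolute difference, and the threshold value are assumptions for illustration.

```python
import numpy as np

# Sketch of removing inconsistent game image pairs: the pair similarity is
# the minimum block similarity, so one occluded block (e.g. a random NPC)
# is enough to reject the pair. Similarity measure and threshold are assumed.
def block_similarity(a, b):
    return 1.0 - np.mean(np.abs(a - b))  # pixel values assumed in [0, 1]

def pair_similarity(ref, tgt, block=4):
    sims = []
    for i in range(0, ref.shape[0], block):
        for j in range(0, ref.shape[1], block):
            sims.append(block_similarity(ref[i:i+block, j:j+block],
                                         tgt[i:i+block, j:j+block]))
    return min(sims)  # the worst block dominates

ref = np.zeros((8, 8))
tgt = np.zeros((8, 8))
tgt[0:4, 0:4] = 1.0  # one block differs, e.g. a randomly appearing NPC
sim = pair_similarity(ref, tgt)
keep = sim >= 0.5    # an assumed target condition on the similarity
```

Taking the minimum rather than the mean makes the filter sensitive to localized content differences, which is exactly what a randomly appearing NPC produces.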
1304. And constructing sample data of model training. And the computer equipment takes the screened game image pairs as sample data, and the tester performs manual picture quality detection on the target game image in each game image pair to obtain a sample picture quality detection result corresponding to each target game image.
1305. The computer device trains a picture quality detection model using the game image pairs and the corresponding sample picture quality detection results.
1306. And calling the trained picture quality detection model by the computer equipment, comparing any target image with the corresponding reference image, and outputting the picture quality detection result of the target image.
Fig. 14 is a schematic structural diagram of a picture quality detection apparatus according to an embodiment of the present application. Referring to fig. 14, the apparatus includes:
an information obtaining module 1401, configured to obtain operation instruction information;
an image generating module 1402, configured to run a reference application and a target application respectively according to the running indication information to obtain a reference image and a target image, where the reference image is generated by the reference application in a running process, the target image is generated by the target application in the running process, and the reference application and the target application are different versions of the same application;
a picture quality detection module 1403, configured to compare the target image with the reference image to obtain a picture quality detection result of the target image, where the picture quality detection result is used to indicate the picture quality of the target application.
In the picture quality detection device provided by the embodiment of the application, in order to detect the picture quality of the target application, a reference application is introduced that is the same application as the target application but a different version. The reference application and the target application are respectively run according to the same operation indication information, so that a reference image and a target image are automatically generated. The reference image is then used as a comparison standard to detect the picture quality of the target image, and a picture quality detection result of the target image is obtained.
Optionally, referring to fig. 15, the operation instruction information includes an operation identifier, and the image generation module 1402 includes:
a first generating unit 1412, configured to execute an operation indicated by the operation identifier in the reference application, and perform screenshot on the obtained picture to obtain a reference image;
a second generating unit 1422, configured to execute an operation indicated by the operation identifier in the target application, and perform screenshot on the obtained screen to obtain a target image.
Optionally, referring to fig. 15, the reference application and the target application include a virtual scene, the operation identifier is used to indicate an operation performed by a virtual object in the virtual scene, and the first generating unit 1412 is configured to control the virtual object in the virtual scene to perform the operation indicated by the operation identifier in the reference application, and capture a screenshot of a resulting screen to obtain a reference image;
a second generating unit 1422, configured to, in the target application, control a virtual object in the virtual scene to execute an operation indicated by the operation identifier, and capture a screenshot of the obtained screen to obtain a target image.
Optionally, referring to fig. 15, the operation indication information includes a plurality of operation time points and an operation identifier corresponding to at least one operation time point;
a first generating unit 1412, configured to execute an operation indicated by the operation identifier corresponding to the operation time point each time when the operation time point is reached in the reference application, and capture an obtained picture to obtain a reference image corresponding to the operation time point;
the second generating unit 1422 is configured to, in the target application, execute an operation indicated by the operation identifier corresponding to an operation time point each time when the operation time point arrives, and capture an obtained picture to obtain a target image corresponding to the operation time point.
Optionally, referring to fig. 15, the apparatus further comprises:
the information generating module 1404 is configured to, in response to the operation instruction in the reference application, execute an operation corresponding to the operation instruction in the reference application, and generate operation instruction information, where the operation instruction information includes an operation identifier corresponding to the operation.
Alternatively, referring to fig. 15, the reference application and the target application include a virtual scene, the running instruction information includes a position of the virtual object in the virtual scene, and the image generation module 1402 includes:
a first generating unit 1412, configured to control a virtual object in a virtual scene to be displayed at a position in a reference application, and capture a screenshot of an obtained picture to obtain a reference image;
the second generating unit 1422 is configured to, in the target application, control a virtual object in a virtual scene to be displayed at a position, and capture a screenshot of an obtained screen to obtain a target image.
Alternatively, referring to fig. 15, the operation instruction information includes a plurality of display time points and a position corresponding to each display time point;
the first generating unit 1412 is configured to control the virtual object to be displayed at a position corresponding to a display time point each time when the display time point is reached in the reference application, and capture an obtained picture to obtain a reference image corresponding to the display time point;
the second generating unit 1422 is configured to, in the target application, control the virtual object to be displayed at a position corresponding to a display time point each time when a display time point is reached, and capture a screenshot of the obtained screen to obtain a target image corresponding to the display time point.
Optionally, referring to fig. 15, the running indication information further includes a target operation identifier, where the target operation identifier is used to indicate an operation performed by the virtual object at the location;
a first generating unit 1412, configured to control the virtual object to be displayed at the position in the reference application, control the virtual object to perform an operation indicated by the target operation identifier at the position, and capture an obtained picture to obtain a reference image;
the second generating unit 1422 is configured to, in the target application, control the virtual object to be displayed at the position, control the virtual object to perform an operation indicated by the target operation identifier at the position, and capture a screenshot of the obtained screen to obtain a target image.
Optionally, referring to fig. 15, the apparatus further comprises:
an information generating module 1404, configured to respond to an operation instruction in a reference application, and in the reference application, control a virtual object in a virtual scene to execute an operation corresponding to the operation instruction;
the information generating module 1404 is further configured to obtain a position of the virtual object in the virtual scene, and generate operation instruction information, where the operation instruction information includes the position.
Optionally, referring to fig. 15, the image generation module 1402 includes:
an image pair generating unit 1432, configured to run the reference application and the target application respectively according to the running instruction information, to obtain image pairs corresponding to the multiple time points respectively, where the image pairs include a candidate reference image and a candidate target image, the candidate reference image is generated at a time point by the reference application, and the candidate target image is generated at a time point by the target application;
a similarity determining unit 1442 for determining a similarity between the candidate reference image and the candidate target image in each image pair;
an image pair screening unit 1452 for screening out an image pair having a corresponding similarity satisfying a target condition among the plurality of image pairs;
an image acquisition unit 1462 for acquiring a reference image and a target image from the screened image pair.
Optionally, referring to fig. 15, a similarity determining unit 1442 is configured to:
dividing the candidate reference image into a plurality of reference image blocks, dividing the candidate target image into a plurality of target image blocks, wherein the reference image blocks correspond to the target image blocks one to one;
determining the similarity between each reference image block and the corresponding target image block;
and determining the minimum similarity in the plurality of similarities as the similarity between the candidate reference image and the candidate target image.
Alternatively, referring to fig. 15, the picture quality detection module 1403 includes:
an image dividing unit 1413, configured to divide the reference image into a plurality of reference image blocks, and divide the target image into a plurality of target image blocks, where the target image blocks correspond to the reference image blocks one to one;
an image block comparison unit 1423, configured to compare the multiple target image blocks with corresponding reference image blocks, to obtain image quality detection results of the multiple target image blocks;
the fusing unit 1433 is configured to fuse the picture quality detection results of the multiple target image blocks to obtain a picture quality detection result of the target image.
Alternatively, referring to fig. 15, the picture quality detection module 1403 includes:
the model calling unit 1443 is configured to call a picture quality detection model, compare the target image with the reference image, and obtain a picture quality detection result.
Optionally, referring to fig. 15, the picture quality detection model includes a feature extraction network, a feature fusion network, and a feature detection network, and the model call unit 1443 is configured to:
calling a feature extraction network, and performing feature extraction on the reference image and the target image to obtain a first picture quality feature of the reference image and a second picture quality feature of the target image;
calling a feature fusion network, and fusing the first picture quality feature, the second picture quality feature and the difference feature between the first picture quality feature and the second picture quality feature to obtain a fusion feature;
and calling a feature detection network to detect the fusion features to obtain a picture quality detection result.
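The three-stage model above (feature extraction, fusion of the two features and their difference, then detection) can be sketched with toy linear layers; the dimensions, random weights, concatenation-based fusion, and softmax output are illustrative assumptions, and a real implementation would use trained convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, weights):
    """Toy stand-in for the feature extraction network: flatten and
    project. A real implementation would use a convolutional backbone."""
    return np.tanh(image.reshape(-1) @ weights)

def detect_quality(reference, target, w_extract, w_detect):
    f_ref = extract_features(reference, w_extract)  # first picture quality feature
    f_tgt = extract_features(target, w_extract)     # second picture quality feature
    diff = f_ref - f_tgt                            # difference feature
    fused = np.concatenate([f_ref, f_tgt, diff])    # feature fusion (concatenation assumed)
    logits = fused @ w_detect                       # feature detection network
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                          # one score per quality level

# toy dimensions: 8x8 images, 16-dim features, 3 quality levels
w_extract = rng.normal(size=(64, 16))
w_detect = rng.normal(size=(48, 3))
reference = rng.normal(size=(8, 8))
scores = detect_quality(reference, reference, w_extract, w_detect)
```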
Optionally, referring to fig. 15, the feature extraction network includes a first extraction layer and a second extraction layer, and the model call unit 1443 is configured to:
calling a first extraction layer, and performing feature extraction on the reference image to obtain a first picture quality feature;
and calling a second extraction layer to extract the features of the target image to obtain a second picture quality feature.
Optionally, referring to fig. 15, the apparatus further comprises a model training module 1405 for:
acquiring a sample reference image, a sample target image and a sample picture quality detection result of the sample target image;
calling a picture quality detection model, and comparing the sample target image with the sample reference image to obtain a picture quality detection result of the sample target image;
and training a picture quality detection model based on the picture quality detection result of the sample target image and the sample picture quality detection result.
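A sketch of the supervised objective implied by the training module: cross-entropy between the predicted scores and the labeled sample result is an assumed choice of loss, as the embodiment does not name one.

```python
import numpy as np

def cross_entropy(predicted_scores, sample_level):
    """Loss between the model's detection result (one score per quality
    level) and the labeled sample result (a quality level index)."""
    return -np.log(predicted_scores[sample_level] + 1e-12)

# one illustrative training step (gradient computation omitted):
#   scores = picture_quality_model(sample_target, sample_reference)
#   loss = cross_entropy(scores, sample_level)
#   model parameters are then updated to reduce the loss
scores = np.array([0.1, 0.7, 0.2])
loss_correct = cross_entropy(scores, 1)  # true level scored high -> low loss
loss_wrong = cross_entropy(scores, 0)    # true level scored low -> high loss
```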
Alternatively, referring to fig. 15, the picture quality detection result includes a plurality of quality levels and a score corresponding to each quality level, where the score corresponding to a quality level indicates the possibility that the picture quality of the target image belongs to that quality level, and the apparatus further includes:
the quality level determining module 1406 is configured to determine, as the quality level to which the picture quality of the target image belongs, the quality level corresponding to the highest score in the picture quality detection result.
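The selection performed by the quality level determining module 1406 reduces to an argmax over the per-level scores; the level names below are illustrative, since the embodiment only specifies a plurality of quality levels.

```python
import numpy as np

def quality_level(detection_result, levels=("poor", "fair", "good")):
    """Return the quality level with the highest score. The level names
    are illustrative; the patent only specifies a plurality of levels."""
    scores = np.asarray(detection_result)
    return levels[int(scores.argmax())]

level = quality_level([0.1, 0.2, 0.7])  # -> "good"
```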
It should be noted that: the division into the above functional modules is merely an example used to describe picture quality detection by the apparatus of the above embodiment; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the computer device may be divided into different functional modules to complete all or part of the functions described above. In addition, the picture quality detection apparatus and the picture quality detection method provided by the above embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which is not repeated here.
The embodiment of the present application further provides a computer device, where the computer device includes a processor and a memory, where the memory stores at least one computer program, and the at least one computer program is loaded and executed by the processor, so as to implement the operations executed in the picture quality detection method of the foregoing embodiment.
Optionally, the computer device is provided as a terminal. Fig. 16 shows a schematic structural diagram of a terminal 1600 provided in an exemplary embodiment of the present application.
The terminal 1600 includes: a processor 1601, and a memory 1602.
Processor 1601 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 1601 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). Processor 1601 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1601 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 1602 may include one or more computer-readable storage media, which may be non-transitory. The memory 1602 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1602 is used to store at least one computer program to be loaded and executed by the processor 1601 to implement the picture quality detection methods provided by the method embodiments herein.
In some embodiments, the terminal 1600 may also optionally include: peripheral interface 1603 and at least one peripheral. Processor 1601, memory 1602 and peripheral interface 1603 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1603 via buses, signal lines, or circuit boards. Optionally, the peripheral device comprises: at least one of radio frequency circuitry 1604, a display 1605, a camera assembly 1606, audio circuitry 1607, and a power supply 1608.
Peripheral interface 1603 can be used to connect at least one I/O (Input/Output) related peripheral to processor 1601 and memory 1602. In some embodiments, processor 1601, memory 1602, and peripheral interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602 and the peripheral device interface 1603 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 1604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1604 communicates with communication networks and other communication devices via electromagnetic signals; it converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1604 may communicate with other devices via at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1604 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display 1605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1605 is a touch display screen, it also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 1601 as a control signal for processing. In this case, the display 1605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1605, disposed on the front panel of the terminal 1600; in other embodiments, there may be at least two displays 1605, respectively disposed on different surfaces of the terminal 1600 or in a folded design; in still other embodiments, the display 1605 may be a flexible display disposed on a curved or folded surface of the terminal 1600. The display 1605 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display 1605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1606 is used to capture images or video. Optionally, the camera assembly 1606 includes a front camera and a rear camera. The front camera is disposed on the front panel of the terminal 1600, and the rear camera is disposed on the rear side of the terminal 1600. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuitry 1607 may include a microphone and a speaker. The microphone is used for collecting sound waves from the user and the environment, converting the sound waves into electrical signals, and inputting them to the processor 1601 for processing or to the radio frequency circuit 1604 for voice communication. For stereo acquisition or noise reduction purposes, there may be multiple microphones disposed at different locations of the terminal 1600. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can not only convert an electrical signal into sound waves audible to humans, but also convert an electrical signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 1607 may also include a headphone jack.
The power supply 1608 is used to supply power to the various components in the terminal 1600. The power supply 1608 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1608 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
Those skilled in the art will appreciate that the configuration shown in fig. 16 is not intended to be limiting of terminal 1600, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
Optionally, the computer device is provided as a server. Fig. 17 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1700 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 1701 and one or more memories 1702, where the memory 1702 stores at least one computer program that is loaded and executed by the processor 1701 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing device functions, which are not described herein again.
The embodiment of the present application further provides a computer-readable storage medium, where at least one computer program is stored in the computer-readable storage medium, and the at least one computer program is loaded and executed by a processor to implement the operations executed in the picture quality detection method of the foregoing embodiment.
The embodiment of the present application further provides a computer program product, which includes a computer program that is loaded and executed by a processor to implement the operations performed in the picture quality detection method of the foregoing embodiment. In some embodiments, the computer program according to the embodiments of the present application may be deployed and executed on one computer device, on multiple computer devices located at one site, or on multiple computer devices distributed at multiple sites and interconnected by a communication network; the multiple computer devices distributed at multiple sites and interconnected by a communication network may constitute a blockchain system.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only an alternative embodiment of the present application and should not be construed as limiting the present application, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (20)

1. A picture quality detection method, characterized in that the method comprises:
acquiring operation indication information;
respectively operating a reference application and a target application according to the operation indication information to obtain a reference image and a target image, wherein the reference image is generated in the operation process of the reference application, the target image is generated in the operation process of the target application, and the reference application and the target application are different versions of the same application;
and comparing the target image with the reference image to obtain a picture quality detection result of the target image, wherein the picture quality detection result is used for representing the picture quality of the target application.
2. The method according to claim 1, wherein the running indication information includes an operation identifier, and the running of the reference application and the target application according to the running indication information to obtain the reference image and the target image respectively includes:
executing the operation indicated by the operation identifier in the reference application, and performing screenshot on the obtained picture to obtain the reference image;
and executing the operation indicated by the operation identifier in the target application, and performing screenshot on the obtained picture to obtain the target image.
3. The method according to claim 2, wherein the reference application and the target application comprise a virtual scene, the operation identifier is used to indicate an operation performed by a virtual object in the virtual scene, the operation indicated by the operation identifier is performed in the reference application, and a screenshot is performed on a resulting screen to obtain the reference image, and the method comprises:
in the reference application, controlling the virtual object in the virtual scene to execute the operation indicated by the operation identifier, and performing screenshot on the obtained picture to obtain the reference image;
the executing the operation indicated by the operation identifier in the target application, and capturing the obtained picture to obtain the target image includes:
and in the target application, controlling the virtual object in the virtual scene to execute the operation indicated by the operation identifier, and capturing the obtained picture to obtain the target image.
4. The method according to claim 2, wherein the operation indication information comprises a plurality of operation time points and an operation identifier corresponding to at least one operation time point;
the executing the operation indicated by the operation identifier in the reference application, and capturing an obtained picture to obtain the reference image includes:
in the reference application, each time one operation time point is reached, executing the operation indicated by the operation identifier corresponding to the operation time point, and capturing an obtained picture to obtain a reference image corresponding to the operation time point;
the executing the operation indicated by the operation identifier in the target application, and capturing the obtained picture to obtain the target image includes:
in the target application, each time one operation time point is reached, the operation indicated by the operation identifier corresponding to the operation time point is executed, and the obtained picture is subjected to screenshot to obtain a target image corresponding to the operation time point.
5. The method of claim 2, wherein prior to obtaining the operation indication information, the method further comprises:
responding to an operation instruction in the reference application, executing an operation corresponding to the operation instruction in the reference application, and generating the running indication information, wherein the running indication information comprises an operation identifier corresponding to the operation.
6. The method according to claim 1, wherein the reference application and the target application comprise a virtual scene, the running indication information comprises a position of a virtual object in the virtual scene, and the running the reference application and the target application respectively according to the running indication information to obtain a reference image and a target image comprises:
in the reference application, controlling the virtual object in the virtual scene to be displayed at the position, and carrying out screenshot on the obtained picture to obtain the reference image;
and in the target application, controlling the virtual object in the virtual scene to be displayed at the position, and carrying out screenshot on the obtained picture to obtain the target image.
7. The method according to claim 6, wherein the operation indication information includes a plurality of display time points and a position corresponding to each display time point;
in the reference application, controlling the virtual object in the virtual scene to be displayed at the position, and capturing a picture to obtain the reference image, including:
in the reference application, each time one display time point is reached, the virtual object is controlled to be displayed at a position corresponding to the display time point, and the obtained picture is subjected to screenshot to obtain a reference image corresponding to the display time point;
in the target application, controlling the virtual object in the virtual scene to be displayed at the position, and capturing a picture to obtain the target image, including:
in the target application, each time one display time point is reached, the virtual object is controlled to be displayed at a position corresponding to the display time point, and the obtained picture is subjected to screenshot to obtain a target image corresponding to the display time point.
8. The method according to claim 6, wherein the running indication information further includes a target operation identifier, and the target operation identifier is used for indicating an operation performed by the virtual object at the position;
in the reference application, controlling the virtual object in the virtual scene to be displayed at the position, and capturing a picture to obtain the reference image, including:
in the reference application, controlling the virtual object to be displayed at the position, controlling the virtual object to execute the operation indicated by the target operation identifier at the position, and performing screenshot on the obtained picture to obtain the reference image;
in the target application, controlling the virtual object in the virtual scene to be displayed at the position, and capturing a picture to obtain the target image, including:
and in the target application, controlling the virtual object to be displayed at the position, controlling the virtual object to execute the operation indicated by the target operation identifier at the position, and capturing the obtained picture to obtain the target image.
9. The method of claim 6, wherein prior to obtaining the operation indication information, the method further comprises:
responding to an operation instruction in the reference application, and controlling the virtual object in the virtual scene to execute an operation corresponding to the operation instruction in the reference application;
and acquiring the position of the virtual object in the virtual scene, and generating the operation instruction information, wherein the operation instruction information comprises the position.
10. The method according to any one of claims 1 to 9, wherein the running a reference application and a target application respectively according to the running indication information to obtain a reference image and a target image comprises:
according to the running indication information, respectively running the reference application and the target application to obtain image pairs respectively corresponding to a plurality of time points, wherein the image pairs comprise candidate reference images and candidate target images, the candidate reference images are generated by the reference application at the time points, and the candidate target images are generated by the target application at the time points;
determining a similarity between the candidate reference image and the candidate target image in each of the image pairs;
screening out image pairs with corresponding similarity meeting target conditions from the plurality of image pairs;
and acquiring the reference image and the target image from the screened image pair.
11. The method of claim 10, wherein determining the similarity between the candidate reference image and the candidate target image in each of the image pairs comprises:
dividing the candidate reference image into a plurality of reference image blocks, dividing the candidate target image into a plurality of target image blocks, wherein the reference image blocks correspond to the target image blocks one to one;
determining the similarity between each reference image block and the corresponding target image block;
determining a minimum similarity among the plurality of similarities as a similarity between the candidate reference image and the candidate target image.
12. The method according to any one of claims 1 to 9, wherein the comparing the target image with the reference image to obtain the detection result of the picture quality of the target image comprises:
dividing the reference image into a plurality of reference image blocks, dividing the target image into a plurality of target image blocks, wherein the target image blocks correspond to the reference image blocks one to one;
comparing the plurality of target image blocks with the corresponding reference image blocks to obtain the image quality detection results of the plurality of target image blocks;
and fusing the image quality detection results of the target image blocks to obtain the image quality detection result of the target image.
13. The method according to any one of claims 1 to 9, wherein the comparing the target image with the reference image to obtain the detection result of the picture quality of the target image comprises:
and calling a picture quality detection model, and comparing the target image with the reference image to obtain the picture quality detection result.
14. The method according to claim 13, wherein the picture quality detection model comprises a feature extraction network, a feature fusion network and a feature detection network, and the step of calling the picture quality detection model to compare the target image with the reference image to obtain the picture quality detection result comprises:
calling the feature extraction network, and performing feature extraction on the reference image and the target image to obtain a first picture quality feature of the reference image and a second picture quality feature of the target image;
calling the feature fusion network to fuse the first picture quality feature, the second picture quality feature and the difference feature between the first picture quality feature and the second picture quality feature to obtain a fusion feature;
and calling the feature detection network to detect the fusion feature to obtain the picture quality detection result.
15. The method of claim 14, wherein the feature extraction network comprises a first extraction layer and a second extraction layer, and wherein the invoking the feature extraction network to perform feature extraction on the reference image and the target image to obtain a first picture quality feature of the reference image and a second picture quality feature of the target image comprises:
calling the first extraction layer to extract the features of the reference image to obtain the first picture quality features;
and calling the second extraction layer to extract the features of the target image to obtain the second picture quality features.
16. The method according to claim 13, wherein the training process of the picture quality detection model comprises:
acquiring a sample reference image, a sample target image and a sample picture quality detection result of the sample target image;
calling the picture quality detection model, and comparing the sample target image with the sample reference image to obtain a picture quality detection result of the sample target image;
and training the picture quality detection model based on the picture quality detection result of the sample target image and the sample picture quality detection result.
17. A picture quality detection apparatus, characterized in that the apparatus comprises:
the information acquisition module is used for acquiring operation indication information;
the image generation module is used for respectively operating a reference application and a target application according to the operation indication information to obtain a reference image and a target image, wherein the reference image is generated in the operation process of the reference application, the target image is generated in the operation process of the target application, and the reference application and the target application are different versions of the same application;
and the picture quality detection module is used for comparing the target image with the reference image to obtain a picture quality detection result of the target image, and the picture quality detection result is used for representing the picture quality of the target application.
18. A computer device, characterized in that the computer device comprises a processor and a memory, wherein at least one computer program is stored in the memory, and the at least one computer program is loaded and executed by the processor to implement the picture quality detection method according to any one of claims 1 to 16.
19. A computer-readable storage medium, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to implement the picture quality detection method according to any one of claims 1 to 16.
20. A computer program product comprising a computer program, wherein the computer program is loaded and executed by a processor to implement the picture quality detection method according to any of claims 1 to 16.
CN202210010451.8A 2022-01-06 2022-01-06 Picture quality detection method, device, equipment and storage medium Pending CN114418972A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210010451.8A CN114418972A (en) 2022-01-06 2022-01-06 Picture quality detection method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114418972A (en)

Family

ID=81271943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210010451.8A Pending CN114418972A (en) 2022-01-06 2022-01-06 Picture quality detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114418972A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913413A (en) * 2016-03-31 2016-08-31 宁波大学 Objective colorful image quality evaluation method based on online manifold learning
US20170186147A1 (en) * 2015-12-23 2017-06-29 Vmware, Inc. Quantitative visual perception quality measurement for virtual desktops
CN109523533A (en) * 2018-11-14 2019-03-26 北京奇艺世纪科技有限公司 A kind of image quality evaluating method and device
CN109727246A (en) * 2019-01-26 2019-05-07 福州大学 Comparative learning image quality evaluation method based on twin network
CN113238972A (en) * 2021-07-12 2021-08-10 腾讯科技(深圳)有限公司 Image detection method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CAO Yudong et al.: "A Survey of Image Quality Assessment Methods Based on Deep Learning", Computer Engineering and Applications, 31 December 2021 (2021-12-31) *

Similar Documents

Publication Publication Date Title
CN110119815B (en) Model training method, device, storage medium and equipment
CN109091869B (en) Method and device for controlling action of virtual object, computer equipment and storage medium
WO2020224479A1 (en) Method and apparatus for acquiring positions of target, and computer device and storage medium
CN113395542B (en) Video generation method and device based on artificial intelligence, computer equipment and medium
CN111325699B (en) Image restoration method and training method of image restoration model
CN112272311B (en) Method, device, terminal, server and medium for repairing splash screen
CN110856048B (en) Video repair method, device, equipment and storage medium
CN112990390B (en) Training method of image recognition model, and image recognition method and device
CN113238972B (en) Image detection method, device, equipment and storage medium
CN110837858A (en) Network model training method and device, computer equipment and storage medium
CN113822322A (en) Image processing model training method and text processing model training method
CN113918767A (en) Video clip positioning method, device, equipment and storage medium
CN111598924B (en) Target tracking method and device, computer equipment and storage medium
CN111598923B (en) Target tracking method and device, computer equipment and storage medium
CN114282035A (en) Training and searching method, device, equipment and medium of image searching model
CN113763931A (en) Waveform feature extraction method and device, computer equipment and storage medium
CN114418972A (en) Picture quality detection method, device, equipment and storage medium
CN113569822B (en) Image segmentation method and device, computer equipment and storage medium
CN114328815A (en) Text mapping model processing method and device, computer equipment and storage medium
CN111259252B (en) User identification recognition method and device, computer equipment and storage medium
CN114404977A (en) Training method of behavior model and training method of structure expansion model
CN114240843A (en) Image detection method and device and electronic equipment
CN113709584A (en) Video dividing method, device, server, terminal and storage medium
CN113633970A (en) Action effect display method, device, equipment and medium
CN113515994A (en) Video feature extraction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination