CN114821240A - Abnormal image detection method and device, electronic equipment and storage medium

Info

Publication number
CN114821240A
Authority
CN
China
Prior art keywords
image
abnormal image
abnormal
application
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210514120.8A
Other languages
Chinese (zh)
Inventor
陈柯
胡志鹏
胡裕靖
吕唐杰
范长杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210514120.8A priority Critical patent/CN114821240A/en
Publication of CN114821240A publication Critical patent/CN114821240A/en
Pending legal-status Critical Current

Classifications

    • G06V 10/774: Image or video recognition or understanding using pattern recognition or machine learning; generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 11/3688: Error detection; software testing; test management for test execution, e.g. scheduling of test suites
    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06V 10/761: Image or video pattern matching; proximity, similarity or dissimilarity measures in feature spaces
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06V 2201/07: Indexing scheme relating to image or video recognition or understanding; target detection


Abstract

The application provides a method and an apparatus for detecting an abnormal image, an electronic device, and a storage medium. The method includes: obtaining an abnormal image sample, where the abnormal image sample is an image obtained by rendering a second application package, and the second application package is obtained by adding problem code to a first application package; obtaining a normal image sample, where the normal image sample is an image obtained by rendering the first application package; and training an initial detection model based on the abnormal image sample and the normal image sample to obtain a trained target detection model, where the target detection model is used to output the probability that an input image is a normal image or an abnormal image. By increasing the data volume and data diversity of the abnormal image samples, the method and the apparatus improve the accuracy with which the target detection model detects abnormal images.

Description

Abnormal image detection method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for detecting an abnormal image, an electronic device, and a storage medium.
Background
Before a new version of an application (APP) goes live, testers usually need to check the application's graphical user interface to determine whether it exhibits display abnormalities. Taking a game application as an example, before a new version of the game goes live, testers need to check whether the game's screens are abnormal, for example, whether the picture shows a black screen or garbled artifacts, whether virtual elements are displayed abnormally, or whether skill special effects render incorrectly.
In the related art, more and more mobile application vendors have introduced artificial intelligence algorithms into their daily workflows. Most of these vendors collect a certain amount of image data, use a neural network to learn the differences between normal and abnormal image samples and the implicit rules therein, and thereby obtain a classifier that judges whether the interface display is normal. However, existing abnormal image samples are few in number, and the existing process of constructing abnormal image samples relies too heavily on experience, so the data diversity of the abnormal image samples is insufficient. The small number and insufficient diversity of abnormal image samples easily cause the classifier to misclassify, which affects the accuracy of abnormal image detection.
Disclosure of Invention
In view of the above, an object of the present application is to provide a method, an apparatus, an electronic device, and a storage medium for detecting an abnormal image, which improve the accuracy of detecting abnormal images with a target detection model by increasing the data volume and data diversity of the abnormal image samples.
In a first aspect, an embodiment of the present application provides a method for detecting an abnormal image, where the method includes:
obtaining an abnormal image sample, where the abnormal image sample is an image obtained by rendering a second application package, and the second application package is obtained by adding problem code to a first application package;
obtaining a normal image sample, where the normal image sample is an image obtained by rendering the first application package;
and training an initial detection model based on the abnormal image sample and the normal image sample to obtain a trained target detection model, where the target detection model is used to output the probability that an input image is a normal image or an abnormal image.
In an optional embodiment of the present application, the problem code is code for generating a target image problem, the target image problem being a problem related to a virtual camera, where the virtual camera is used to capture an image of a virtual scene.
In an optional embodiment of the present application, the target image problem includes at least one of:
a cache flag bit setting problem of the virtual camera, a virtual camera shutdown problem, and a screen post-processing problem.
In an optional embodiment of the present application, the acquiring an abnormal image sample includes: acquiring images of different virtual scenes of the second application package;
the acquiring a normal image sample includes: acquiring images of different virtual scenes of the first application package.
In an optional embodiment of the present application, the acquiring an abnormal image sample includes:
taking multiple screenshots of the image obtained by rendering the second application package to obtain multiple abnormal image frames;
and screening the multiple abnormal image frames to obtain the abnormal image sample.
In an optional embodiment of the present application, the screening the multiple abnormal image frames to obtain the abnormal image sample includes:
for any target abnormal image frame among the multiple abnormal image frames, comparing the target abnormal image frame with the adjacent previous abnormal image frame to obtain a similarity;
and if the similarity is greater than a preset threshold, deleting the target abnormal image frame from the multiple abnormal image frames, and forming the abnormal image sample from the remaining abnormal image frames.
In an optional embodiment of the present application, the screening the multiple abnormal image frames to obtain the abnormal image sample includes:
starting from any one of the multiple abnormal image frames, selecting multiple target abnormal image frames based on a preset interval frame number, and forming the abnormal image sample from the multiple target abnormal image frames.
In an optional embodiment of the present application, the method further includes:
when a first image in the abnormal image sample is input into the target detection model to obtain a second image, calculating the derivative of each pixel of the second image with respect to the first image;
determining abnormal pixels in the first image according to the derivative of each pixel;
and marking the abnormal pixels in the first image, and obtaining and displaying the marked first image.
In a second aspect, an embodiment of the present application provides an apparatus for detecting an abnormal image, where the apparatus includes:
an abnormal image acquisition module, configured to acquire an abnormal image sample, where the abnormal image sample is an image obtained by rendering a second application package, and the second application package is obtained by adding problem code to a first application package;
a normal image acquisition module, configured to acquire a normal image sample, where the normal image sample is an image obtained by rendering the first application package;
and a model training module, configured to train an initial detection model based on the abnormal image sample and the normal image sample to obtain a trained target detection model, where the target detection model is used to output the probability that an input image is a normal image or an abnormal image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and the processor executes the machine-readable instructions to perform the above abnormal image detection method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the above abnormal image detection method is performed.
The embodiments of the present application provide a method and an apparatus for detecting an abnormal image, an electronic device, and a storage medium. The method includes: obtaining an abnormal image sample, where the abnormal image sample is an image obtained by rendering a second application package, and the second application package is obtained by adding problem code to a first application package; obtaining a normal image sample, where the normal image sample is an image obtained by rendering the first application package; and training an initial detection model based on the abnormal image sample and the normal image sample to obtain a trained target detection model, where the target detection model is used to output the probability that an input image is a normal image or an abnormal image. In the embodiments of the present application, the problem code is added to the first application package to obtain the second application package, and the second application package is rendered to obtain the abnormal image samples, which increases the data volume and data diversity of the abnormal image samples; as a result, the target detection model trained on the abnormal image samples and the normal image samples is more accurate, which improves the accuracy with which the target detection model detects abnormal images.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the present application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a method for detecting an abnormal image according to an embodiment of the present application;
fig. 2 is a schematic interface diagram of an abnormal image according to an embodiment of the present disclosure;
FIG. 3 is a schematic interface diagram of another anomaly image provided in the embodiments of the present application;
FIG. 4 is a schematic interface diagram of another anomaly image provided in the embodiments of the present application;
fig. 5 is a schematic structural diagram of an abnormal image detection apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of another abnormal image detection apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
With the rapid development of computer technology, mobile devices on the market change day by day, and the operating systems of different device models differ; in particular, for Android-based mobile devices, device manufacturers often customize the native system, so device fragmentation is very severe, and automatically running software compatibility tests on a large number of mobile devices has become a major difficulty for software vendors. Furthermore, before a new version of an application (APP) goes live, testers usually need to check the application's graphical user interface to determine whether it exhibits display abnormalities. Taking a game application as an example, before a new version of the game goes live, testers need to check whether the game's screens are abnormal, for example, whether the picture shows a black screen or garbled artifacts, whether virtual elements are displayed abnormally, or whether skill special effects render incorrectly. That is, unlike traditional Internet applications such as e-commerce and video, a game application designs a large number of animated scenes in order to attract more users, and 3D rendering is triggered frequently while the game runs. As a result, image display abnormalities in game applications are more complicated than in traditional Internet applications and have no fixed appearance; statistical analysis shows that image display abnormalities account for about 40% of the problems found in game application compatibility testing.
However, unlike compatibility problems such as crashes and installation failures, application display abnormalities are generally not reflected in the mobile device's runtime logs. That is, even if an abnormal image appears while the application is running, the software does not report an error; a person with some testing experience has to make the judgment, so the problem cannot be identified from the runtime logs. In addition, because image abnormalities come in many types (such as garbled screens, black edges, and abnormal color blocks) and different game art styles differ greatly, pixel statistics computed directly on the image cannot accurately identify all abnormalities. More and more mobile application vendors therefore introduce artificial intelligence algorithms into their daily workflows: most of them collect a certain amount of image data, use a neural network to learn the differences between normal and abnormal image samples and the implicit rules therein, obtain a classifier that judges whether the interface display is normal, and use the classifier to identify display abnormalities that occur while the application is running. However, because machine learning algorithms need a large amount of data, the small number of historically accumulated abnormal image samples prevents the classifier from learning the difference between normal and abnormal image samples well; therefore, a large number of abnormal image samples need to be generated through some data augmentation technique for the classifier to learn from.
In the related art, most existing compatibility test schemes fall into two modes. The first is manual checking, in which a professional tester manually traverses application scenes (such as game scenes) and judges from experience whether the graphical user interface is displayed normally. The second is automated: for example, a dedicated test script is written, and while the script runs, certain rules (for example, whether the image contains a large rectangular color block) are used to detect whether the graphical user interface is displayed normally.
Currently, more and more application vendors introduce artificial intelligence algorithms into their daily workflows to reduce labor costs. Judging whether the graphical user interface is displayed normally in compatibility testing requires a great deal of testing experience, and introducing artificial intelligence algorithms can greatly reduce the labor invested in testing and improve work efficiency. Therefore, most application vendors collect a certain amount of image sample data, use a neural network to learn the differences between normal and abnormal image samples and the implicit rules therein, and then obtain a classifier that judges whether the interface display is normal.
However, performing compatibility testing purely by manual checking is time-consuming and labor-intensive, because there are many application scenes and a large number of mobile devices to test. Taking a game application as an example, a single game needs at least one week of compatibility testing after each content update. Therefore, more and more application vendors choose to perform compatibility testing automatically, that is, to write rules that compute statistics over image pixels and thereby judge whether the graphical user interface is displayed normally. However, because there are too many types of graphical user interface display abnormalities, a rule script must be written for each type; and because different game art styles differ and different abnormality types require different adaptation schemes, once a rule does not generalize well enough, the corresponding script's computation must be adjusted, so maintenance carries a certain cost. In addition, because there are so many types of display abnormalities, the rules can hardly cover all abnormal images, and even for the same abnormality, the corresponding abnormal image samples can hardly be reproduced exactly by rules.
As can be seen from the above, most vendors collect a certain amount of image data, use a neural network to learn the differences between normal and abnormal image samples and the implicit rules therein, and obtain a classifier that judges whether the interface display is normal. However, existing abnormal image samples are few in number, and the existing process of constructing abnormal image samples relies too heavily on experience, so the data diversity of the abnormal image samples is insufficient. The small number and insufficient diversity of abnormal image samples easily cause the classifier to misclassify, which affects the accuracy of abnormal image detection.
Based on this, an embodiment of the present application provides a method for detecting an abnormal image, with the following inventive concept: render the first application package to obtain normal image samples; add problem code to the first application package to obtain a second application package, and render the second application package to obtain abnormal image samples; then train an initial detection model based on the obtained normal image samples and abnormal image samples to obtain a trained target detection model, where the target detection model is used to output the probability that an input image is a normal image or an abnormal image. In this way, adding the problem code to the first application package to obtain the second application package, and rendering the second application package to obtain the abnormal image samples, increases the data volume and data diversity of the abnormal image samples, so the target detection model trained on the abnormal image samples and normal image samples is more accurate, which improves the accuracy with which the target detection model detects abnormal images.
Referring to fig. 1, fig. 1 is a flowchart of a method for detecting an abnormal image according to an embodiment of the present application. As shown in fig. 1, the method includes:
s101, obtaining an abnormal image sample, wherein the abnormal image sample is an image obtained by rendering a second application bag body, and the second application bag body is obtained by adding a problem code into a first application bag body;
s102, obtaining a normal image sample, wherein the normal image sample is an image obtained by rendering the first application packet;
s103, training the initial detection model based on the abnormal image sample and the normal image sample to obtain a trained target detection model, wherein the target detection model is used for outputting the probability that the input image is a normal image or an abnormal image.
According to the embodiment of the application, the problem code is added into the first application inclusion to obtain the second application inclusion, and the second application inclusion is rendered to obtain the abnormal image sample, so that the data quantity and the data diversity of the abnormal image sample are improved, the accuracy of the target detection model obtained through training of the abnormal image sample and the normal image sample is higher, and the accuracy of the target detection model for detecting the abnormal image is improved.
In step S101, an image is composed of many pixels, and an abnormal image is an image in which some pixels are abnormal. Abnormal images may appear as: a garbled screen, a green screen, a blue screen, water ripples, a black screen, or no image at all; red light, blue light, stray light, or light leakage in the middle of the image; or abnormal bright spots, black spots, or damaged spots in the middle of the image. In the embodiments of the present application, an abnormal image refers to an image in which an abnormality occurs in the graphical user interface of an application program, and an abnormal image sample contains a large number of abnormal images. The forms in which the abnormality appears on the graphical user interface include, but are not limited to: a black or garbled screen, abnormal display of virtual elements, and abnormal display of skill special effects.
The application package is an installation package for installing the application and contains all the files needed to install the application, including the application program code. In the embodiments of the present application, the program code in the application package is rendered by an engine to obtain the corresponding images while the application runs. For example, the resource files corresponding to a virtual scene in a game are packaged into a resource package, and the game engine renders the resource package to obtain images of the virtual scene in the game.
Because the application package contains the application code, in the embodiments of the present application the second application package can be obtained after adding the problem code to the first application package. Specifically, the problem code can be added to the first application package in two ways. The first is to add the problem code directly to the original program code of the first application package and then compile the code to generate a new application; however, every code modification requires recompiling and repackaging the project, which takes a long time, and once the new code does not produce the desired effect, the add-compile-package process has to be repeated. The second is to use hot-fix technology, that is, the first application package replaces the original program code with the problem code in the form of a code patch. Generally, injection points are buried in designated code segments of the application program code (because any code may have problems, an application usually buries injection points for all code segments as a precaution); an instruction and the corresponding patch code can be sent to the application through a server, and the newly written problem code then replaces the original program code. Hot-fix technology requires no compilation project and is plug-and-play; even if the rendering effect is not ideal, the patch can be withdrawn and re-applied. The present application preferably uses hot-fix technology to add the problem code to the first application package.
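A minimal sketch of such a code patch, assuming an xLua-style hot-fix runtime is embedded in the application and the patched class was marked for hot-fix injection when the package was built; the class name ScreenEffectManager and the method name ClearEffects are hypothetical placeholders rather than names from the original package:
-- Hot-fix sketch: deliver problem code as a runtime patch via xLua's hotfix API.
-- Assumptions: the global xlua table is provided by the embedded runtime, and
-- CS.ScreenEffectManager.ClearEffects is a hypothetical instrumented method.
xlua.hotfix(CS.ScreenEffectManager, 'ClearEffects', function(self)
    -- Intentionally do nothing: the original effect-clearing logic is skipped,
    -- so stale screen post-processing special effects remain on screen and the
    -- rendered frames become abnormal image samples.
end)
Because the patch is applied while the application is running, it can be withdrawn and a new patch issued without recompiling or repackaging the project, which is why the hot-fix route is preferred here.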
The problem code is code for generating a target image problem, where the target image problem is a problem related to a virtual camera, and the virtual camera is used to capture images of a virtual scene.
Here, multiple virtual cameras may be placed in the virtual scene of an online game, and the virtual cameras virtually shoot the virtual scene from different perspectives to obtain the game picture. Specifically, all resources required for the virtual camera tool to run are packaged into a resource package compatible with the game engine, which may be the Unity engine. All resources required for the camera tool to run are packaged into a Unity resource package using the Export Package function provided by the Unity engine; when the virtual camera tool is needed, the resource package can be imported with Import Package in the game engine, and the virtual camera tool can be dragged into the virtual scene to capture images of the virtual scene.
Taking a game application written with the Unity engine as an example, the embodiments of the present application identify three common causes, all related to the virtual camera in the Unity engine. The virtual camera is the device that captures the virtual scene and displays it to the user in a Unity game; one virtual scene can contain multiple virtual cameras, a virtual camera can be placed at any position on the screen, and different game pictures can be presented to the user by setting the positions and properties of the virtual cameras.
Since the target image problems that cause abnormal images may repeat, only the non-repeating target image problems that cause these abnormal images are counted from the multiple abnormal images.
Based on the above description, in the embodiments of the present application, the target image problem used to characterize a virtual-camera-related problem includes at least one of the following: a cache flag bit setting problem of the virtual camera, a virtual camera shutdown problem, and a screen post-processing problem. These three target image problems are described in detail below:
In the first case, the target image problem includes a cache flag bit setting problem of the virtual camera. When the graphical user interface is abnormal, the target image problem is used to characterize that the software engine contains a camera cache flag bit that does not meet the setting requirement. Here, the camera's cache flag bit is used to clear a designated area of the frame buffer and is generally used when multiple cameras render different objects. If the current setting of the cache flag bit does not meet the setting requirement, historical object trails may remain in the scene, that is, a pattern similar to superimposed frames appears, as shown in fig. 2.
In the second case: the target image problem includes a virtual camera shutdown problem. When the graphical user interface is abnormal, the target image problem is used for representing that part of the cameras in the application scene are closed. Here, a camera is a tool for displaying objects in a scene to a user, and if a camera is turned off, the object captured by the camera cannot be displayed to the user, and the content of the part on the screen is replaced by other cameras, so that some noise pixels or speckle blocks may appear. As shown in fig. 3.
Thus, turning off the camera may cause the object responsible for capturing to be not displayed, and the content of the portion of the frame is replaced by another camera, and the portion of the camera in the application scene is turned off, which may cause abnormal frame display.
In the third case: the target image problem includes a problem of screen post-processing. When the graphical user interface is abnormal, the target image problem is used for representing that the historical screen post-processing special effect is not removed. Here, the screen post-processing means rendering a complete scene to obtain a screen image, and then processing the screen image to realize a screen special effect. By using the technology, more artistic effects can be added to the game picture, such as adding a filter, blurring, mirror image turning and the like to the image, and if the after-screen processing special effect is not cleared in time after leaving a certain specific scene (such as a battle death scene) or the code for clearing the special effect is invalid for some reasons, the special effects such as graying, blurring, snowflake and the like can be caused to appear in the current scene. As shown in fig. 4.
In the embodiments of the present application, the problem code corresponding to each target image problem can be determined for the case in which the graphical user interface is abnormal. This is described in detail below:
For the first case, when the graphical user interface is abnormal, the target image problem is used to characterize that the software engine contains a camera cache flag bit that does not meet the setting requirement. The steps for determining the problem code corresponding to this target image problem may be as follows:
Step 201: according to the camera cache flag bit that meets the setting requirement in the software engine, determine the original clear flags attribute corresponding to that cache flag bit and the corresponding original cache code.
In step 201, the camera cache flag bit in the software engine that meets the setting requirement in the current scene is obtained; a cache flag bit that meets the setting requirement makes the game scene render normally.
Because the camera's cache flag bit has a clear flags attribute, the original clear flags attribute corresponding to the cache flag bit that meets the setting requirement, and the original cache code corresponding to that cache flag bit, can be determined. Here, the original clear flags attribute refers to the clear flags attribute that a normally rendered game scene has, and the original cache code refers to the code of the camera cache flag bit when the game scene renders normally.
Specifically, the clear flags attribute (ClearFlags) is the attribute used to configure the camera cache in a Unity game and is responsible for managing a camera's color buffer value and depth buffer value. When multiple cameras exist, they render in a defined order, so setting the clear flags attribute value determines whether the color buffer values (ColorBuffer) and depth buffer values (DepthBuffer) of all preceding cameras are cleared.
Specifically, the clear flags attribute (ClearFlags) has 4 values: Solid Color, Skybox, Depth Only, and Don't Clear. Solid Color clears the depth buffer values and color buffer values of all preceding cameras and replaces the color buffer values with the background color; Skybox clears the depth buffer values and color buffer values of all preceding cameras and replaces the color buffer values with the skybox's color values; Depth Only clears the depth buffer values of all preceding cameras and keeps their color buffer values; Don't Clear keeps both the depth buffer values and the color buffer values of all preceding cameras. By setting different clear flags attribute values, this embodiment can make the game picture exhibit different display abnormality problems.
Step 202, responding to the modification operation of the original cache code, and modifying the original mark removing attribute into other mark removing attributes to obtain a problem code for representing that the cache flag bit of the camera which does not meet the setting requirement exists in the software engine; wherein the other clear flag attributes are different from the original clear flag attributes.
In step 202, the cache flag bit of the camera represented by the original cache code is changed by modifying the code aiming at the original tag removal attribute in the original cache code. In one embodiment, the original purge flag attribute may be modified to other purge flag attributes, wherein the original purge flag attribute may include any of the following: pure color, sky box, depth only, no clean; further, other purge flag attributes may also include any of the following: solid color, sky box, depth only, no clean, but other clean mark attributes are different from the attributes of the original clean mark attributes.
It should be noted that, in the current game scene, the original clear flag attribute corresponding to the cache flag bit of the camera is one of a solid color, a sky box, only depth and no clear, at this time, it can be ensured that the game scene is presented normally, but if the game scene is abnormal, only the original clear flag attribute corresponding to the cache flag bit of the camera needs to be modified into other clear flag attributes, and the other clear flag attributes are any one of the four of the solid color, the sky box, only depth and no clear that is different from the original clear flag attribute, so that the problem code for representing the cache flag bit of the camera that does not meet the setting requirement in the software engine can be obtained. The problem code is used for representing that the cache flag bit of the camera corresponds to other clearing mark attributes.
In addition, when the original removing mark attribute corresponding to the cache zone bit of the camera is modified into other removing mark attributes, because the other removing mark attributes are not unique and the attributes of each other removing mark attribute are different, the types of the obtained abnormal images are also different, and the diversity of the abnormal image samples can be enriched.
For example, suppose the original clear flags attribute corresponding to a camera cache flag bit that meets the setting requirement in the software engine is Skybox; the original cache code of that cache flag bit, in which the original clear flags attribute is Skybox, is then obtained. When Skybox is modified into Depth Only, the code segment in the original cache code that represents the original clear flags attribute as Skybox is modified accordingly, that is, it is changed into a code segment that represents the clear flags attribute as Depth Only, while the code segments in the original cache code other than the one corresponding to the original clear flags attribute remain unchanged.
By way of example, one sample code is as follows:
local cameraobjs = CS.UnityEngine.Object.FindObjectsOfType(typeof(CS.UnityEngine.Camera))  -- find every camera in the scene
for i = 0, cameraobjs.Length - 1 do
    -- force each camera's clear flags to Depth Only, so the color buffers of
    -- preceding cameras are kept and frame-superposition artifacts appear
    cameraobjs[i].clearFlags = CS.UnityEngine.CameraClearFlags.DepthOnly
end
For the second case, when the graphical user interface is abnormal, the target image problem is used to characterize that some cameras in the application scene are turned off. The steps for determining the problem code corresponding to this target image problem may be as follows:
Step 301: determine the turn-on code that turns on some of the cameras in the application scene.
In step 301, the turn-on code of some cameras in the application scene in the current scene can be obtained from the installation package. For example, if there are 10 cameras in the current scene, the turn-on code of all 10 cameras can be obtained and the turn-on code of the subset of cameras to be turned off determined; alternatively, only the turn-on code of that subset of cameras can be obtained directly and those cameras then turned off.
Step 302: in response to a modification operation on the turn-on code, obtain the problem code that characterizes that some cameras in the application scene are turned off.
In step 302, in order to turn off some cameras in the application scene, their turn-on code can be modified into turn-off code, and this turn-off code is the problem code. The cameras are thereby turned off, the objects captured by the turned-off cameras cannot be displayed, that part of the picture is replaced by other cameras, and the picture displays abnormally.
By way of example, one sample code is as follows:
local cameraobjs = CS.UnityEngine.Object.FindObjectsOfType(typeof(CS.UnityEngine.Camera))  -- find every camera in the scene
for i = 0, cameraobjs.Length - 1 do
    -- disable each camera; the objects it captures are no longer drawn, so its
    -- part of the picture is filled by other cameras and displays abnormally
    cameraobjs[i].enabled = false
end
For the third case, when the graphical user interface is abnormal, the target image problem is used to characterize that a historical screen post-processing special effect has not been cleared. The steps for determining the problem code corresponding to this target image problem may be as follows:
Step 401: determine the clear code that clears the historical screen post-processing special effect.
In step 401, the clear code for clearing the historical screen post-processing special effect in the current scene can be obtained from the installation package.
Step 402: in response to a code-writing operation, delete the clear code or invalidate it, to obtain the problem code that characterizes that the historical screen post-processing special effect is not cleared.
In step 402, to delete or invalidate the clear code, the clear code can be modified directly so that it no longer takes effect, or a new code segment can be added to delete or invalidate the clear code. The newly added code or the modified clear code serves as the problem code that characterizes that the historical screen post-processing special effect is not cleared.
In this way, because the screen post-processing special effects of certain specific scenes are not cleared in time, abnormal pictures (such as a mirror-flipped control) appear in the current scene, and some abnormal images are thereby obtained.
Compared with leaving the previous scene's screen post-processing special effect uncleared, it is more convenient to directly add post-processing special effects that should not appear in the current scene; therefore, some screen post-processing special effects that should not appear in the current scene can be added to it at random to generate some abnormal images.
Specifically, the steps for determining the problem code corresponding to this target image problem may be as follows:
determining the clear code that clears the historical screen post-processing special effect;
in response to a code-writing operation, generating code that represents another screen post-processing special effect to obtain the problem code, where the code of the other screen post-processing special effect is different from the code of the historical screen post-processing special effect corresponding to the clear code.
By way of example, one sample code is as follows:
local detectCamera = CS.UnityEngine.GameObject.Find("UICamera")  -- locate the UI camera object in the scene
if detectCamera ~= nil then
    -- attach a mirror-flip screen post-processing component that should not
    -- appear in the current scene, producing an abnormal (mirrored) picture
    detectCamera.gameObject:AddComponent(typeof(CS.CameraFilterPack_3D_Mirror))
end
In an alternative embodiment, in step S101, the step of adding the problem code to the first application package to obtain the second application package includes:
Step 1011: set an injection point for each code segment among the multiple code segments of the first application package to obtain multiple injection points corresponding to the first application package.
Here, an injection point is a place where injection can be performed, typically a connection for accessing a database. Through the injection point, code can be injected into the position in the first application package that corresponds to that injection point.
Step 1012: determine, from the multiple injection points, the target injection point corresponding to the problem code based on the problem code corresponding to the target image problem.
Here, the code segment corresponding to the problem code is found in the first application package by means of the problem code corresponding to the target image problem, the injection point of that code segment is then found, and that injection point is determined as the target injection point corresponding to the problem code.
Step 1013: insert the problem code into the first application package through the target injection point, in front of the original code corresponding to the problem code, to obtain the second application package, so that the second application package runs the problem code.
When the application program runs to the problem code in the second application package, the original code is no longer loaded, so the abnormal image caused by the problem code can be successfully generated.
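A purely illustrative sketch of this mechanism, assuming the injection points are implemented as a Lua-level check against a patch table filled in by server-delivered patches; the names patches, run_with_injection, and the point identifier are hypothetical and not taken from the original code:
-- Injection-point sketch: every instrumented code segment first checks whether
-- a patch has been delivered for its injection point; if so, the problem code
-- runs and the original code at that point is never loaded.
local patches = {}  -- patch code received from the server, keyed by injection point id

local function run_with_injection(point_id, original_fn, ...)
    local patch = patches[point_id]
    if patch ~= nil then
        return patch(...)        -- run the injected problem code instead
    end
    return original_fn(...)      -- no patch delivered: run the original code
end

-- Example: a patch registered for a hypothetical injection point "camera_setup".
patches["camera_setup"] = function()
    -- problem code goes here (for instance, the clear-flags or camera
    -- shutdown snippets shown elsewhere in this description)
end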
In an optional embodiment of the present application, step S101 includes: acquiring images of different virtual scenes of the second application package;
and step S102 includes: acquiring images of different virtual scenes of the first application package.
Here, abnormal images and normal images are acquired from different virtual scenes of the second application package and the first application package respectively, which improves the data diversity of both the abnormal images and the normal images; as a result, the target detection model trained on the abnormal image samples and the normal image samples is more accurate, which improves the accuracy with which the target detection model detects abnormal images.
In the embodiments of the present application, when generating abnormal images in step S101, the problem code that produces the target image problem can be added to the first application package to generate the second application package, so that the second application package produces abnormal images corresponding to the target image problem while running the problem code, which increases the data volume of the abnormal image samples.
Further, regarding the manner of obtaining the abnormal image sample, step S101 specifically includes:
Step 1014: take multiple screenshots of the image obtained by rendering the second application package to obtain multiple abnormal image frames;
Step 1015: screen the multiple abnormal image frames to obtain the abnormal image sample.
Specifically, while the second application package runs the problem code, abnormal images can be collected by recording the screen and taking multiple screenshots, thereby obtaining the abnormal image sample.
For example, after the problem code is inserted into the first application package, the second application package is obtained; while the second application package runs the problem code, if the graphical user interface displays abnormally, the screen is recorded and multiple screenshots are taken to obtain multiple abnormal image samples.
In step 1014, because the abnormal images of every scene need to be traversed, this embodiment uses an automatic click tool, the Monkey tool, to perform pixel-level random clicks on the graphical user interface. Although the injected problem code makes the interface display abnormal, it does not affect the game's running logic, so as long as the Monkey tool clicks a game control, the interface can jump, and abnormal image frames of various scenes can be collected. If the game interface does not jump for a long time while using the Monkey tool, the game may be stuck in a scene that is very hard to leave (for example, a scene that only changes when specific characters are typed into a text box). In that case, this embodiment offers two ways to continue data collection: first, remove the patch file so that the scene interface returns to normal, manually jump out of the scene, and then repeat the initial patching step to continue collecting data; second, restart the application and restart the patching step to continue collecting data.
Since training a detection model for detecting abnormal images requires both abnormal image samples and normal image samples, a series of normal image samples also needs to be collected in the embodiments of the present application. In step S102, a normal image sample is obtained, where the normal image sample is an image obtained by rendering the first application package.
Specifically, while the first application package runs, multiple normal images can be obtained by taking screenshots of the graphical user interface, with each screenshot corresponding to one image frame.
Illustratively, after the application is opened, the Monkey tool is used to click randomly while the screen is recorded. This method is simple, and after the normal image sample data has been collected, whether the interface display in each image is normal can be confirmed manually.
When the abnormal image samples and normal image samples are collected by recording the screen and taking screenshots, extracting images directly frame by frame yields a large number of highly similar images, so further screening by image similarity can be performed to prevent the samples from being overly repetitive, which would otherwise affect the final training of the detection model for detecting abnormal images. Because video is continuous, the similarity of adjacent frames is far higher than that of non-adjacent frames; to improve screening efficiency, the embodiments of the present application only compare the similarity of a target abnormal image with the adjacent previous abnormal image.
In an alternative embodiment, in step 1015, screening the multiple abnormal image frames to obtain the abnormal image sample may include the following step: starting from any one of the multiple abnormal image frames, select multiple target abnormal image frames based on a preset interval frame number, and form the abnormal image sample from the multiple target abnormal image frames.
Here, starting from any one of the multiple abnormal image frames, multiple target abnormal image frames are selected at a preset frame interval to form the abnormal image sample.
For example, suppose a 10-second video contains 200 abnormal image frames and the preset frame interval is 20 frames; the target abnormal image frames selected from the 200 abnormal image frames are then the 1st, 21st, 41st, ..., 181st abnormal image frames, and the selected abnormal image frames are combined into the abnormal image sample.
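A minimal sketch of this interval-based screening, assuming the captured frames are held in a Lua array in time order; the function and variable names are illustrative only:
-- Interval-based screening sketch: keep one frame every `interval` frames.
-- Assumption: `frames` is an array of abnormal image frames sorted by time.
local function select_by_interval(frames, interval, start_index)
    start_index = start_index or 1           -- selection may start from any frame
    local sample = {}
    for i = start_index, #frames, interval do
        sample[#sample + 1] = frames[i]      -- keeps frames 1, 21, 41, ... when interval = 20
    end
    return sample
end
With 200 frames and an interval of 20 frames, the sample keeps the 1st, 21st, ..., 181st frames, matching the example above.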
However, with the above manner, in order to avoid the similarity between adjacent selected target abnormal image frames exceeding the preset threshold, the preset interval frame number has to be set fairly large, and a larger interval may lose some abnormal image frames caused by different target image problems. Based on this, in step 1015, screening the multiple abnormal image frames to obtain the abnormal image sample may include the following scheme:
Step 10151: for any target abnormal image frame among the multiple abnormal image frames, compare the target abnormal image frame with the adjacent previous abnormal image frame to obtain a similarity;
Step 10152: if the similarity is greater than the preset threshold, delete the target abnormal image frame from the multiple abnormal image frames, and form the abnormal image sample from the remaining abnormal image frames.
Step 10151 further includes: sequentially selecting at least some abnormal image frames from the multiple abnormal image frames, and sorting the selected abnormal image frames in chronological order to obtain an abnormal image frame sequence from which target abnormal image frames are selected, where any two adjacent abnormal image frames in the sequence are separated by a preset number of frames; the preset number of frames may be n frames, with n ≥ 1.
Here, the target abnormal image frames in the abnormal image frame sequence are screened according to the similarity between abnormal image frames separated by the preset number of frames, and target abnormal image frames whose similarity is relatively high are filtered out. The preset number of frames may be n frames, with n ≥ 1.
As an alternative embodiment, the step of determining the abnormal image frames that satisfy the screening requirement from the multiple abnormal image frames includes:
sequentially selecting at least some abnormal image frames from the multiple abnormal image frames, and sorting the selected abnormal image frames in chronological order to obtain an abnormal image frame sequence, where any two adjacent abnormal image frames in the sequence are separated by a preset number of frames;
keeping the first abnormal image frame in the sequence, sequentially selecting target abnormal image frames starting from the second abnormal image frame in the sequence, and executing a loop process until a preset condition is met, then determining all abnormal image frames kept in the sequence as the abnormal image frames that satisfy the screening requirement, where the preset condition is that the target abnormal image frame selected from the sequence is the last abnormal image frame in the sequence;
the loop process includes:
determining the similarity between the target abnormal image frame and the abnormal image frame that immediately precedes it in the abnormal image frame sequence;
if the similarity is greater than the preset threshold, deleting the target abnormal image frame from the abnormal image frame sequence;
and if the similarity is not greater than the preset threshold, keeping the target abnormal image frame.
It should be noted that the target abnormal image frame and the abnormal image frame that precedes it are separated by the preset number of frames n, where n ≥ 1.
For example, taking an example that a target abnormal image frame and an abnormal image frame one bit above the target abnormal image frame are 1 frame apart, assuming that there are 200 abnormal image frames in a 10-second video, the 200 abnormal image frames are processed as follows:
the 200 abnormal image frames are sorted in time order to obtain an abnormal image frame sequence, in which any two adjacent abnormal image frames are separated by 1 frame. Here, the time is the recording time displayed by each abnormal image frame when the video is recorded; the 200 abnormal image frames can be sorted from the earliest recording time to the latest, or from the latest recording time to the earliest;
the first abnormal image frame in the abnormal image frame sequence is retained; starting from the second abnormal image frame in the sequence, target abnormal image frames are selected in turn and the cyclic process is executed until the preset condition is met, and all the abnormal image frames retained in the sequence are determined as the abnormal image frames meeting the screening requirement. The preset condition is that the target abnormal image frame selected from the sequence is the last abnormal image frame in the sequence. The cyclic process comprises: determining the similarity between the target abnormal image frame and the abnormal image frame one position before it in the sequence; if the similarity is greater than the preset threshold, deleting the target abnormal image frame from the sequence; and if the similarity is not greater than the preset threshold, retaining the target abnormal image frame.
That is, the first abnormal image frame in the sequence is retained. Starting from the second abnormal image frame, the similarity between the second abnormal image frame and the first abnormal image frame is calculated; if the similarity is greater than the preset threshold, the second abnormal image frame is deleted from the sequence. The similarity between the third abnormal image frame and the first abnormal image frame is then calculated; if it is not greater than the preset threshold, the third abnormal image frame is retained. Next, the similarity between the fourth abnormal image frame and the third abnormal image frame is calculated; if it is greater than the preset threshold, the fourth abnormal image frame is deleted from the sequence, and if it is not, the fourth abnormal image frame is retained. This process repeats until all the abnormal image frames retained in the sequence are determined as the abnormal image frames meeting the screening requirement; that is, all the remaining abnormal image frames in the sequence can form the abnormal image sample.
Specifically, from the second frame of the video, the similarity between the current abnormal image frame x and the previous abnormal image frame y is calculated using the formula:

SSIM(x, y) = ((2·u_x·u_y + c_1)·(2·σ_xy + c_2)) / ((u_x² + u_y² + c_1)·(σ_x² + σ_y² + c_2))

wherein u_x is the mean of x, u_y is the mean of y, σ_x² is the variance of x, σ_y² is the variance of y, σ_xy is the covariance of x and y, c_1 = (k_1·L)², c_2 = (k_2·L)², k_1 is a constant generally taken as 0.01, k_2 is a constant generally taken as 0.03, and L is the dynamic range of the gray scale. The similarity ranges from -1 to 1, and when two abnormal image frames are identical the SSIM value equals 1. If the similarity between the current abnormal image frame and the previous abnormal image frame is greater than the preset threshold (set to 0.8 in this scheme), the two frames are considered too similar and the current frame does not need to be retained. In addition, since the frame rate during video recording may be high (a mobile phone with better performance may reach 60 frames per second), in order to increase the computation speed, an optional embodiment of this scheme is that x and y may be separated by n frames, where n is a fixed parameter whose value can be adjusted as required.
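Illustratively, the SSIM calculation and the screening loop described above can be sketched as follows in Python. This is a minimal illustration that assumes the abnormal image frames are already available as grayscale numpy arrays sorted by recording time; the function names compute_ssim and filter_abnormal_frames, and the way the interval n is applied by slicing, are assumptions of the sketch rather than part of the scheme itself.

```python
import numpy as np

def compute_ssim(x: np.ndarray, y: np.ndarray,
                 k1: float = 0.01, k2: float = 0.03, L: float = 255.0) -> float:
    """Global SSIM between two grayscale frames, following the formula above."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    ux, uy = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()            # sigma_x^2 and sigma_y^2
    cov_xy = ((x - ux) * (y - uy)).mean()      # sigma_xy
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * ux * uy + c1) * (2 * cov_xy + c2)) / (
        (ux ** 2 + uy ** 2 + c1) * (var_x + var_y + c2))

def filter_abnormal_frames(frames, n: int = 1, threshold: float = 0.8):
    """Sample every (n+1)-th frame so adjacent candidates are n frames apart,
    keep the first candidate, and drop any later candidate whose similarity
    with the most recently retained frame exceeds the threshold."""
    candidates = frames[::n + 1]
    kept = [candidates[0]]
    for frame in candidates[1:]:
        if compute_ssim(frame, kept[-1]) <= threshold:
            kept.append(frame)
    return kept  # the retained frames form the abnormal image sample
```

Comparing each candidate with the most recently retained frame, rather than with a fixed predecessor, matches the worked example above in which the third frame is compared with the first frame after the second frame has been deleted.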
In step S103, an initial detection model is trained based on the abnormal image sample and the normal image sample, and a trained target detection model is obtained, where the target detection model is used to output a probability that the input image is a normal image or an abnormal image.
Illustratively, after all normal image samples and abnormal image samples available for training are obtained, an image sample set containing a plurality of abnormal image samples and a plurality of normal image samples is constructed. The collected image sample set is transmitted to a pre-constructed neural network model (the initial detection model) for training. The structure of the neural network model is not specifically limited and can be designed freely; the neural network model used in this embodiment consists of 4 convolutional layers followed by 3 fully connected layers, and the dimension of the final output detection result is 2.
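Illustratively, one possible realization of such a network is sketched below in PyTorch. The channel counts, kernel sizes, pooling, and hidden layer widths are not specified by this embodiment and are chosen here only for illustration.

```python
import torch
import torch.nn as nn

class DetectionModel(nn.Module):
    """4 convolutional layers followed by 3 fully connected layers,
    ending in a 2-dimensional output (normal / abnormal)."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # convenience so screenshots of any resolution fit the FC layers
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 2),              # logits for [normal, abnormal]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))
```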
Specifically, softmax operation is performed on the output vector of the neural network model to obtain the probability that the image is a normal/abnormal image:
p(i) = exp(y_i) / Σ_j exp(y_j), where y is the output vector of the neural network model and i ∈ {normal, abnormal}.
Then, the cross-entropy loss of the classification result is calculated from the label L of the sample (0 for normal, 1 for abnormal):
loss = -L × log(p(abnormal)) - (1 - L) × log(p(normal));
Finally, the neural network model calculates the gradient of each neuron from the cross-entropy loss and updates its parameters according to the chain rule.
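Illustratively, a single training step implementing the softmax, cross-entropy, and gradient update described above might look as follows; the optimizer and learning rate are assumptions, not part of this scheme.

```python
import torch
import torch.nn.functional as F

def train_step(model: torch.nn.Module, optimizer: torch.optim.Optimizer,
               images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: a batch of screenshots; labels: long tensor, 0 for normal, 1 for abnormal."""
    logits = model(images)                      # 2-dimensional raw outputs
    probs = F.softmax(logits, dim=1)            # [p(normal), p(abnormal)] per sample
    # cross-entropy: loss = -L*log p(abnormal) - (1-L)*log p(normal)
    loss = F.nll_loss(torch.log(probs + 1e-12), labels)
    optimizer.zero_grad()
    loss.backward()                             # gradients propagated by the chain rule
    optimizer.step()
    return loss.item()

# e.g. optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # illustrative choice
```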
In this way, after obtaining the trained target detection model, the target detection model may be used to perform anomaly detection on the image to be detected, and specific embodiments include:
and step 1031, acquiring an image to be detected obtained by screenshot of the graphical user interface.
Here, a plurality of images to be detected can be obtained by capturing a screen of the graphical user interface. Each screenshot corresponds to one frame of image.
And step 1032, inputting the image to be detected into the pre-trained target detection model, and outputting the probability that the image to be detected is a normal image or an abnormal image.
Through steps 1031 and 1032, the plurality of images to be detected are sequentially input into the target detection model which is trained in advance, so that the detection result of the target detection model is obtained. The detection result is used for representing the probability that the image to be detected is a normal image or an abnormal image.
Here, if the detection result for the image to be detected indicates that the probability of the image being an abnormal image exceeds a preset probability threshold, the video to be detected corresponding to the image to be detected is determined to be abnormal. When the video to be detected is an abnormal video, a reminder message is sent so that technicians can troubleshoot and locate the abnormal problem in time. The preset probability threshold represents the critical probability value at which the image to be detected is regarded as an abnormal image.
Illustratively, after a game is started, random click operations can be performed. After each click operation, a screenshot of the current graphical user interface is taken, yielding a picture to be detected. The picture to be detected is transmitted to the target detection model; if the target detection model judges that the picture is abnormal, the abnormal picture is stored and its abnormal area is analyzed. Finally, the abnormal picture and the corresponding abnormal area are packaged and sent to the game developer so that the specific cause of the problem can be found and corrected, thereby assisting the developer in rechecking.
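Illustratively, the detection stage can be sketched as follows; the preprocessing, the probability threshold value, and the function name are assumptions made for the sketch (the scheme only states that a preset probability threshold is used).

```python
import torch
import torch.nn.functional as F

def detect_screenshot(model: torch.nn.Module, image: torch.Tensor,
                      prob_threshold: float = 0.5):
    """image: a preprocessed screenshot tensor of shape (C, H, W).
    Returns (is_abnormal, p_abnormal); prob_threshold is illustrative."""
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))          # add a batch dimension
        p_abnormal = F.softmax(logits, dim=1)[0, 1].item()
    return p_abnormal > prob_threshold, p_abnormal
```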
Based on the above description, the method provided in the embodiment of the present application further includes:
step 501, when a first image in the abnormal image sample is input into the target detection model to obtain a second image, calculating the derivative of the second image with respect to each pixel of the first image;
step 502, determining abnormal pixels in the first image according to the derivative of each pixel;
here, if the derivative of a certain pixel is greater than the preset derivative threshold, the pixel is determined to be an abnormal pixel in the first image.
Step 503, marking the abnormal pixels in the first image, and obtaining and displaying the marked first image.
Here, the abnormal pixel in the first image is marked, and the first image with the abnormal pixel point mark can be obtained.
Furthermore, after the training of the target detection model is completed, the target detection model obtained in this embodiment can be incorporated into daily testing to detect whether the graphical user interface is displayed normally.
Illustratively, a Monkey tool is first used to click randomly in the application, and the mobile phone interface is captured at intervals. Each captured picture is then transmitted to the target detection model, and the label of the picture is obtained through forward calculation of the model. If the output of the target detection model indicates that the picture is an abnormal picture, further anomaly analysis is carried out.
Specifically, the anomaly analysis scheme may use a method of directly calculating the derivative of the result y of the objective function corresponding to the target detection model with respect to each pixel of the original image I:

∂y / ∂I_ij, the derivative of y with respect to each pixel I_ij of the original image I.
The larger the derivative, the more likely the pixel is an abnormal pixel. The positions of all abnormal pixels are marked to obtain a result analysis graph corresponding to the abnormal image, and the problem image and the result analysis graph are recorded in the test report. In this way, when the game developer rechecks, the abnormal image can be observed more clearly and intuitively, assisting the developer in rechecking and modifying the abnormal image.
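Illustratively, the derivative-based marking can be sketched as follows. The abnormal score of the model is used here as the result y, and the derivative threshold and the reduction over colour channels are assumptions of the sketch; the scheme itself does not fix these choices.

```python
import torch

def mark_abnormal_pixels(model: torch.nn.Module, image: torch.Tensor,
                         derivative_threshold: float = 0.1) -> torch.Tensor:
    """image: a float tensor of shape (C, H, W). Returns an (H, W) boolean mask
    of pixels whose gradient magnitude exceeds the (illustrative) threshold."""
    model.eval()
    x = image.detach().clone().unsqueeze(0).requires_grad_(True)
    y = model(x)[0, 1]                 # the model's abnormal score for this image
    y.backward()                       # dy/dI for every input pixel
    grad = x.grad[0].abs().max(dim=0).values   # reduce over channels -> (H, W)
    return grad > derivative_threshold
```

The resulting mask can be overlaid on the screenshot to produce the result analysis graph recorded in the test report.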
According to the detection method of the abnormal image provided by the embodiment of the present application, the problem code is added to the first application package to obtain the second application package, and the second application package is rendered to obtain the abnormal image sample. This improves the quantity and diversity of the abnormal image sample data, so that the target detection model trained on the abnormal image samples and the normal image samples has higher accuracy, and the accuracy of the target detection model in detecting abnormal images is improved. Moreover, when the target detection model is used to detect an abnormal image, an image marking the abnormal pixels can be generated directly, which facilitates rechecking by developers and the analysis, tracking, and correction of the target image problem occurring at the abnormal pixels.
Based on the same inventive concept, the embodiment of the present application further provides a device for detecting an abnormal image corresponding to the method for detecting an abnormal image, and since the principle of solving the problem of the device in the embodiment of the present application is similar to that of the method in the embodiment of the present application, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 5 and 6, fig. 5 is a schematic structural diagram of an abnormal image detection apparatus according to an embodiment of the present application. Fig. 6 is a schematic structural diagram of another abnormal image detection apparatus according to an embodiment of the present application. An embodiment of the present application provides an apparatus for detecting an abnormal image, as shown in fig. 5, the apparatus 500 includes:
an abnormal image obtaining module 501, configured to obtain an abnormal image sample, where the abnormal image sample is an image obtained by rendering a second application package, and the second application package is obtained by adding a problem code to a first application package;
a normal image obtaining module 502, configured to obtain a normal image sample, where the normal image sample is an image obtained by rendering the first application package;
the model training module 503 is configured to train an initial detection model based on the abnormal image sample and the normal image sample, and obtain a trained target detection model, where the initial detection model is used to output a probability that the input image is a normal image or an abnormal image.
In an alternative embodiment of the present application, the problem code is code for generating a target image problem, the target image problem being a problem related to a virtual camera, wherein the virtual camera is used for capturing an image of a virtual scene.
In an optional embodiment of the present application, the target image problem comprises at least one of:
the method comprises the following steps of setting a cache flag bit of a virtual camera, closing the virtual camera and post-processing a screen.
In an optional embodiment of the present application, the abnormal image acquiring module 501 is specifically configured to: acquire images of different virtual scenes of the second application package;
the normal image acquisition module 502 is specifically configured to: acquire images of different virtual scenes of the first application package.
In an optional embodiment of the present application, the abnormal image acquiring module 501 is specifically configured to:
performing multiple screenshots on the image obtained by rendering the second application package to obtain multiple abnormal image frames;
and screening the plurality of abnormal image frames to obtain an abnormal image sample.
In an optional embodiment of the present application, the abnormal image acquiring module 501 is further specifically configured to:
for any target abnormal image frame in the multiple abnormal image frames, carrying out similarity comparison on the target abnormal image frame and the adjacent previous abnormal image frame to obtain similarity;
and if the similarity is greater than a preset threshold value, deleting the target abnormal image frame from the plurality of abnormal image frames, and forming the abnormal image sample from the remaining abnormal image frames.
In an optional embodiment of the present application, the abnormal image acquiring module 501 is further specifically configured to:
and screening a plurality of target abnormal image frames from any one of the plurality of abnormal image frames based on the preset interval frame number, and forming an abnormal image sample by the plurality of target abnormal image frames.
Further, as shown in fig. 6, the apparatus 500 further includes an abnormal pixel labeling module 504, where the abnormal pixel labeling module 504 is configured to:
when a first image in the abnormal image sample is input into the target detection model to obtain a second image, calculating the derivative of the second image with respect to each pixel of the first image;
determining an anomalous pixel in the first image from the derivative of each pixel;
and marking abnormal pixels in the first image, and obtaining and displaying the marked first image.
According to the detection device for the abnormal image provided by the embodiment of the present application, the problem code is added to the first application package to obtain the second application package, and the second application package is rendered to obtain the abnormal image sample. This improves the quantity and diversity of the abnormal image sample data, so that the target detection model trained on the abnormal image samples and the normal image samples has higher accuracy, and the accuracy of the target detection model in detecting abnormal images is improved.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 7, the electronic device 700 includes a processor 701, a memory 702, and a bus 703.
The memory 702 stores machine-readable instructions executable by the processor 701, the processor 701 and the memory 702 communicate via the bus 703 when the electronic device 700 is operating, and the processor 701 executes the machine-readable instructions to perform the steps of:
obtaining an abnormal image sample, wherein the abnormal image sample is an image obtained by rendering a second application package, and the second application package is obtained by adding a problem code to the first application package;
acquiring a normal image sample, wherein the normal image sample is an image obtained by rendering the first application package;
and training the initial detection model based on the abnormal image sample and the normal image sample to obtain a trained target detection model, wherein the target detection model is used for outputting the probability that the input image is a normal image or an abnormal image.
In an alternative embodiment of the present application, the problem code is code for generating a target image problem, the target image problem being a problem related to a virtual camera, wherein the virtual camera is used for capturing an image of a virtual scene.
In an optional embodiment of the present application, the target image problem comprises at least one of:
the method comprises the following steps of setting a cache flag bit of a virtual camera, closing the virtual camera and post-processing a screen.
In an optional embodiment of the present application, when the processor 701 executes the step of obtaining the abnormal image sample, the following steps are specifically executed: acquiring images of different virtual scenes of the second application package;
when the processor executes the step of acquiring the normal image sample, the following steps are specifically executed: acquiring images of different virtual scenes of the first application package.
In an optional embodiment of the present application, when the processor 701 performs acquiring an abnormal image sample, the following steps are specifically performed:
performing multiple screenshots on the image obtained by rendering the second application package to obtain multiple abnormal image frames;
and screening the plurality of abnormal image frames to obtain an abnormal image sample.
In an optional embodiment of the present application, when the processor 701 performs the step of screening a plurality of abnormal image frames to obtain an abnormal image sample, the following steps are specifically performed:
for any target abnormal image frame in the multiple abnormal image frames, carrying out similarity comparison on the target abnormal image frame and the adjacent previous abnormal image frame to obtain similarity;
and if the similarity is greater than a preset threshold value, deleting the target abnormal image frame from the plurality of abnormal image frames, and forming the abnormal image sample from the remaining abnormal image frames.
In an optional embodiment of the present application, when the processor 701 performs the step of screening a plurality of abnormal image frames to obtain an abnormal image sample, the following steps are specifically performed:
and screening a plurality of target abnormal image frames from any one of the plurality of abnormal image frames based on the preset interval frame number, and forming an abnormal image sample by the plurality of target abnormal image frames.
In an alternative embodiment of the present application, the processor 701 further performs the following steps:
when a first image in the abnormal image sample is input into the target detection model to obtain a second image, calculating the derivative of the second image with respect to each pixel of the first image;
determining an anomalous pixel in the first image from the derivative of each pixel;
and marking abnormal pixels in the first image, and obtaining and displaying the marked first image.
According to the embodiment of the present application, the problem code is added to the first application package to obtain the second application package, and the second application package is rendered to obtain the abnormal image sample. This improves the quantity and diversity of the abnormal image sample data, so that the target detection model trained on the abnormal image samples and the normal image samples has higher accuracy, and the accuracy of the target detection model in detecting abnormal images is improved.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the following steps:
obtaining an abnormal image sample, wherein the abnormal image sample is an image obtained by rendering a second application package, and the second application package is obtained by adding a problem code to the first application package;
acquiring a normal image sample, wherein the normal image sample is an image obtained by rendering the first application package;
and training the initial detection model based on the abnormal image sample and the normal image sample to obtain a trained target detection model, wherein the target detection model is used for outputting the probability that the input image is a normal image or an abnormal image.
In an alternative embodiment of the present application, the problem code is code for generating a target image problem, the target image problem being a problem related to a virtual camera, wherein the virtual camera is used for capturing an image of a virtual scene.
In an optional embodiment of the present application, the target image problem comprises at least one of:
the method comprises the following steps of setting a cache flag bit of a virtual camera, closing the virtual camera and post-processing a screen.
In an optional embodiment of the present application, when the computer-readable storage medium runs to obtain the abnormal image sample, the following steps are specifically performed: acquiring images of different virtual scenes of the second application package;
when the computer-readable storage medium runs to obtain a normal image sample, the following steps are specifically executed: acquiring images of different virtual scenes of the first application package.
In an optional embodiment of the present application, when the computer-readable storage medium runs to obtain the abnormal image sample, the following steps are specifically performed:
performing multiple screenshots on the image obtained by rendering the second application package to obtain multiple abnormal image frames;
and screening the plurality of abnormal image frames to obtain an abnormal image sample.
In an optional embodiment of the present application, when the computer-readable storage medium is operated to screen a plurality of abnormal image frames to obtain an abnormal image sample, the following steps are specifically performed:
for any target abnormal image frame in the multiple abnormal image frames, carrying out similarity comparison on the target abnormal image frame and the adjacent previous abnormal image frame to obtain similarity;
and if the similarity is greater than a preset threshold value, deleting the target abnormal image frame from the plurality of abnormal image frames, and forming the abnormal image sample from the remaining abnormal image frames.
In an optional embodiment of the present application, when the computer-readable storage medium is operated to screen a plurality of abnormal image frames to obtain an abnormal image sample, the following steps are specifically performed:
and screening a plurality of target abnormal image frames from any one of the plurality of abnormal image frames based on the preset interval frame number, and forming an abnormal image sample by the plurality of target abnormal image frames.
In an alternative embodiment of the present application, the computer readable storage medium further performs the steps of:
when a first image in the abnormal image sample is input into the target detection model to obtain a second image, calculating the derivative of the second image with respect to each pixel of the first image;
determining an anomalous pixel in the first image from the derivative of each pixel;
and marking abnormal pixels in the first image, and obtaining and displaying the marked first image.
According to the embodiment of the present application, the problem code is added to the first application package to obtain the second application package, and the second application package is rendered to obtain the abnormal image sample. This improves the quantity and diversity of the abnormal image sample data, so that the target detection model trained on the abnormal image samples and the normal image samples has higher accuracy, and the accuracy of the target detection model in detecting abnormal images is improved.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present application, and are used for illustrating the technical solutions of the present application, but not limiting the same, and the scope of the present application is not limited thereto, and although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application, and are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A method for detecting an abnormal image, the method comprising:
obtaining an abnormal image sample, wherein the abnormal image sample is an image obtained by rendering a second application package, and the second application package is obtained by adding a problem code to a first application package;
obtaining a normal image sample, wherein the normal image sample is an image obtained by rendering the first application package;
training an initial detection model based on the abnormal image sample and the normal image sample to obtain a trained target detection model, wherein the target detection model is used for outputting the probability that an input image is a normal image or an abnormal image.
2. The method of claim 1, wherein the problem code is code for generating a target image problem, the target image problem being a problem associated with a virtual camera, wherein the virtual camera is used to capture an image of a virtual scene.
3. The method of claim 2, wherein the target image problem comprises at least one of:
the method comprises the following steps of setting a cache flag bit of a virtual camera, closing the virtual camera and post-processing a screen.
4. The method of claim 1, wherein said obtaining an abnormal image sample comprises: acquiring images of different virtual scenes of the second application package;
the acquiring of the normal image sample comprises: acquiring images of different virtual scenes of the first application package.
5. The method of claim 1, wherein said obtaining an abnormal image sample comprises:
performing multiple screenshots on the image obtained by rendering the second application package to obtain multiple abnormal image frames;
and screening the plurality of abnormal image frames to obtain an abnormal image sample.
6. The method according to claim 5, wherein the screening the plurality of abnormal image frames to obtain an abnormal image sample comprises:
for any target abnormal image frame in the multiple abnormal image frames, carrying out similarity comparison on the target abnormal image frame and an adjacent previous abnormal image frame to obtain similarity;
and if the similarity is greater than a preset threshold value, deleting the target abnormal image frame from the plurality of abnormal image frames, and forming the abnormal image sample from the remaining abnormal image frames.
7. The method according to claim 5, wherein the screening the plurality of abnormal image frames to obtain an abnormal image sample comprises:
and screening a plurality of target abnormal image frames from any one of the plurality of abnormal image frames based on a preset interval frame number, and forming an abnormal image sample by the plurality of target abnormal image frames.
8. The method of claim 1, further comprising:
when a first image in the abnormal image sample is input into the target detection model to obtain a second image, calculating the derivative of the second image with respect to each pixel of the first image;
determining an outlier pixel in the first image from the derivative of each pixel;
and marking the abnormal pixels in the first image, and obtaining and displaying the marked first image.
9. An apparatus for detecting an abnormal image, the apparatus comprising:
the abnormal image acquisition module is used for acquiring an abnormal image sample, wherein the abnormal image sample is an image obtained by rendering a second application package, and the second application package is obtained by adding a problem code to a first application package;
a normal image obtaining module, configured to obtain a normal image sample, where the normal image sample is an image obtained by rendering the first application package;
and the model training module is used for training an initial detection model based on the abnormal image sample and the normal image sample to obtain a trained target detection model, wherein the initial detection model is used for outputting the probability that the input image is a normal image or an abnormal image.
10. An electronic device, comprising: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate through the bus when the electronic device runs, and the processor executes the machine-readable instructions to execute the method for detecting an abnormal image according to any one of claims 1 to 8.
11. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a processor, executes the method for detecting an abnormal image according to any one of claims 1 to 8.
CN202210514120.8A 2022-05-11 2022-05-11 Abnormal image detection method and device, electronic equipment and storage medium Pending CN114821240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210514120.8A CN114821240A (en) 2022-05-11 2022-05-11 Abnormal image detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210514120.8A CN114821240A (en) 2022-05-11 2022-05-11 Abnormal image detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114821240A true CN114821240A (en) 2022-07-29

Family

ID=82514140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210514120.8A Pending CN114821240A (en) 2022-05-11 2022-05-11 Abnormal image detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114821240A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205952A (en) * 2022-09-16 2022-10-18 深圳市企鹅网络科技有限公司 Online learning image acquisition method and system based on deep learning
CN115391084A (en) * 2022-10-27 2022-11-25 北京蔚领时代科技有限公司 Intelligent solution method and system for cloud game abnormity
CN115944921A (en) * 2023-03-13 2023-04-11 腾讯科技(深圳)有限公司 Game data processing method, device, equipment and medium
CN117112446A (en) * 2023-10-16 2023-11-24 腾讯科技(深圳)有限公司 Editor debugging method and device, electronic equipment and medium
CN117112446B (en) * 2023-10-16 2024-02-02 腾讯科技(深圳)有限公司 Editor debugging method and device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN114821240A (en) Abnormal image detection method and device, electronic equipment and storage medium
US11276162B2 (en) Surface defect identification method and apparatus
US20140189576A1 (en) System and method for visual matching of application screenshots
CN1875378B (en) Object detection in images
CN108683907A (en) Optics module picture element flaw detection method, device and equipment
US10642726B2 (en) Method, apparatus, and system for blaming a test case/class for a survived mutation
KR20200108609A (en) Learning-data enhancement device for machine learning model and method for learning-data enhancement
CN111160261A (en) Sample image labeling method and device for automatic sales counter and storage medium
CN112785572B (en) Image quality evaluation method, apparatus and computer readable storage medium
CN110009621A (en) One kind distorting video detecting method, device, equipment and readable storage medium storing program for executing
CN115525563A (en) Test method, test device, computer equipment and storage medium
CN111932596A (en) Method, device and equipment for detecting camera occlusion area and storage medium
CN114494168A (en) Model determination, image recognition and industrial quality inspection method, equipment and storage medium
CN114064974A (en) Information processing method, information processing apparatus, electronic device, storage medium, and program product
Jacob et al. A non-intrusive approach for 2d platform game design analysis based on provenance data extracted from game streaming
CN111612681A (en) Data acquisition method, watermark identification method, watermark removal method and device
CN117237371A (en) Colon histological image gland segmentation method based on example perception diffusion model
CN105354833B (en) A kind of method and apparatus of shadow Detection
CN116167910B (en) Text editing method, text editing device, computer equipment and computer readable storage medium
CN116824488A (en) Target detection method based on transfer learning
CN113596354A (en) Image processing method, image processing device, computer equipment and storage medium
CN111078541A (en) Automatic katon detection method and system based on Unity engine
CN111652201B (en) Video data abnormity identification method and device based on depth video event completion
CN112131418A (en) Target labeling method, target labeling device and computer-readable storage medium
Nadarajan et al. A knowledge-based planner for processing unconstrained underwater videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination