CN110750193B - Scene topology determination method and device based on artificial intelligence - Google Patents

Info

Publication number: CN110750193B
Application number: CN201910989250.5A
Authority: CN (China)
Prior art keywords: control, scene, display image, tested, determining
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN110750193A (en)
Inventors: 杨丽; 单少波; 岑恩杰
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910989250.5A
Publication of CN110750193A
Application granted
Publication of CN110750193B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F 16/00: Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866: Retrieval using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of this application disclose an artificial-intelligence-based scene topology determination method and apparatus, which involve several artificial intelligence technologies and determine the scene topology by analyzing display images of the functional service to be tested in real time. For a functional service to be tested whose scene topology needs to be determined, first scene information corresponding to a first display image of the service is determined according to that image, and the controls in the first display image are identified. For an identified first control, a corresponding control instruction is generated to indicate that the first control should be triggered through the functional service to be tested, so as to obtain a second display image generated by the service based on that trigger. After the scene information of the second display image is determined, the scene topology of the functional service to be tested is determined according to the association relationship among the first scene information, the second scene information, and the first control. The method does not affect the normal operation of the functional service to be tested and has strong adaptability and compatibility.

Description

Scene topology determination method and device based on artificial intelligence
Technical Field
The present application relates to the field of data processing, and in particular, to a scene topology determination method and apparatus based on artificial intelligence.
Background
An intelligent device can provide various functional services for users, such as games, online shopping, and electronic payment. One functional service may include multiple scenes, and different scenes may provide different detailed functions for the user. Taking a game as an example, a login scene provides the user with a character-selection function, a battle scene provides the user with functions for fighting different opponents, and so on.
While a functional service runs, scenes can be switched through the controls provided by the service; a control is a controllable module, such as a virtual button or an in-game item. The scene relationships established through controls can be called the scene topology. If the scene topology corresponding to a functional service can be determined, various automated operations, such as automated testing, can be performed on that service.
To determine the scene topology, a related technique uses a Software Development Kit (SDK): the SDK calls an interface of the functional service to be tested to obtain control attributes, which are then analyzed to determine the scene topology. However, calling interfaces through an SDK may affect the normal operation of the functional service to be tested, and because the interface parameters of different functional services often differ greatly, the SDK parameters must be reconfigured for each functional service, so applicability and compatibility are poor.
Disclosure of Invention
To solve these technical problems, this application provides an artificial-intelligence-based scene topology determination method and apparatus that, unlike the related technique, do not affect the normal operation of the functional service to be tested, do not require reconfiguring parameters for different functional services to be tested, and have strong adaptability and compatibility.
The embodiment of the application discloses the following technical scheme:
in a first aspect, an embodiment of the present application provides a scene topology determining method based on artificial intelligence, where the method includes:
determining, according to a first display image of a functional service to be tested, first scene information corresponding to the first display image;
identifying a control in the first display image, wherein the control is a controllable module; the identified controls include a first control;
acquiring a second display image through a control instruction generated according to the first control; the control instruction generated according to the first control is used for indicating that the first control is triggered through the functional service to be tested;
determining second scene information corresponding to the second display image;
and determining the scene topology of the functional service to be tested according to an association relationship among the first scene information, the second scene information, and the first control.
In a second aspect, an embodiment of the present application provides an artificial intelligence-based scene topology determining apparatus, where the apparatus includes a determining unit, a recognizing unit, and an obtaining unit:
the determining unit is configured to determine, according to a first display image of the functional service to be tested, first scene information corresponding to the first display image;
the identification unit is configured to identify a control in the first display image, the control being a controllable module; the identified controls include a first control;
the acquisition unit is configured to acquire a second display image through a control instruction generated according to the first control; the control instruction generated according to the first control is used for indicating that the first control is triggered through the functional service to be tested;
the determining unit is further configured to determine second scene information corresponding to the second display image;
and the determining unit is further configured to determine the scene topology of the functional service to be tested according to an association relationship among the first scene information, the second scene information, and the first control.
In a third aspect, an embodiment of the present application provides an apparatus for artificial intelligence based scene topology determination, where the apparatus includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the artificial intelligence based scene topology determination method of any of the first aspect according to instructions in the program code.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium for storing program code for executing the artificial intelligence based scene topology determination method according to any one of the first aspect.
According to the above technical solutions, the scene topology is determined by analyzing display images of the functional service to be tested in real time. For a functional service to be tested whose scene topology needs to be determined, first scene information corresponding to a first display image of the service is determined according to that image, and the controls in the first display image are identified. For an identified first control, a corresponding control instruction is generated to indicate that the first control should be triggered through the functional service to be tested, so as to obtain a second display image generated by the service based on that trigger. After the scene information of the second display image is determined, the scene topology of the functional service to be tested is determined according to the association relationship among the first scene information, the second scene information, and the first control. Because the display images of the functional service to be tested are obtained in real time, the normal operation of the service is not affected as it is in the related technique, no parameters need to be reconfigured for different functional services to be tested, and the adaptability and compatibility are strong.
Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of an application scenario of a scene topology determination method according to an embodiment of the present application;
fig. 2 is a flowchart of a scene topology determination method according to an embodiment of the present application;
fig. 3 is a schematic view of a scene tree structure of a scene topology according to an embodiment of the present application;
fig. 4 is a schematic view of a scene tree structure obtained by traversing controls from a first display image according to an embodiment of the present application;
fig. 5 is an overall flowchart of control traversal through an automated traversal algorithm according to an embodiment of the present application;
fig. 6 is a specific flowchart of control traversal through an automated traversal algorithm according to an embodiment of the present application;
fig. 7 is a schematic view of a scene for determining a first control in a first display image according to an embodiment of the present application;
fig. 8 is a diagram illustrating the effect of obtaining an invariant region from two history display images according to an embodiment of the present application;
fig. 9a is a schematic diagram of an image output after recognition by a character recognition sub-model according to an embodiment of the present application;
fig. 9b is a schematic diagram of a training sample image obtained based on control image region combination according to an embodiment of the present application;
fig. 9c is a schematic diagram of an output result corresponding to a control image region combination training sample according to an embodiment of the present application;
fig. 10 is a flowchart of a method for creating a scene node in a scene tree according to an embodiment of the present application;
fig. 11 is a schematic flowchart of determining an index parameter according to an embodiment of the present application;
fig. 12a is a schematic overall system architecture diagram for executing the scene topology determination method according to an embodiment of the present application;
fig. 12b is a schematic flowchart of a scene topology determination method according to an embodiment of the present application;
fig. 13a is a structural diagram of an artificial intelligence based scene topology determination apparatus according to an embodiment of the present application;
fig. 13b is a structural diagram of an artificial intelligence based scene topology determination apparatus according to an embodiment of the present application;
fig. 13c is a structural diagram of an artificial intelligence based scene topology determination apparatus according to an embodiment of the present application;
fig. 14 is a block diagram of a scene topology determination apparatus based on artificial intelligence according to an embodiment of the present application;
fig. 15 is a block diagram of a server according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
At present, for a functional service to be tested whose scene topology needs to be determined, the mainstream approach is to call the functional service's interface through an SDK to obtain control attributes. Because calling interfaces through an SDK can affect the normal operation of the functional service to be tested, and because the interface parameters of different functional services often differ greatly, the SDK parameters must be reconfigured for each functional service, so applicability and compatibility are poor.
Therefore, an embodiment of this application provides a scene topology determination method that determines the scene topology of the functional service to be tested by analyzing its display images in real time, avoiding calls to the service's interfaces, so that the normal operation of the functional service to be tested is not affected.
The scene topology determination method, and the training methods for the corresponding control recognition model and scene recognition model, provided in the embodiments of this application can be implemented based on Artificial Intelligence (AI). AI is a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence infrastructure generally includes technologies such as sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
In the embodiments of the present application, the artificial intelligence software technologies mainly involved include computer vision, natural language processing, and deep learning.
For example, the embodiments may involve Image Processing, Image Semantic Understanding (ISU), Video Processing, Video Semantic Understanding (VSU), three-dimensional object reconstruction (3D object reconstruction), face recognition, and the like in Computer Vision.
For example, Deep Learning in Machine Learning (ML) may be involved, including various types of artificial neural networks.
First, an application scenario of the embodiments of this application is described. The scene topology determination method can be applied to data processing devices such as terminal devices and servers. When applied to a terminal device on which the functional service to be tested is deployed, the terminal device may be, for example, a smart terminal, a computer, a Personal Digital Assistant (PDA), or a tablet computer.
The scene topology determination method can also be applied to a server, i.e., a device that provides the scene topology determination service for a terminal device running the functional service to be tested. The terminal device can upload display images of the functional service to be tested to the server, and the server determines the scene topology for that service using the method provided in the embodiments of this application. The server may be a standalone server or a server in a cluster.
The data processing device can have the capability to implement computer vision technology. Computer vision is a science that studies how to make machines "see": it uses cameras and computers instead of human eyes to identify, track, and measure targets, and further performs graphics processing so that the processed images are more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can obtain information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
In the embodiments of this application, the data processing device can perform character recognition on an image through computer vision technology, thereby locating controls that contain text in the image and determining information such as the control type based on the text within each control.
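As a minimal illustration of mapping recognized text to a control type, the sketch below matches OCR output against a keyword table. The keyword table, control-type names, and function name are illustrative assumptions, not part of the application:

```python
# Map OCR'd control text to a control type via keyword matching.
# The keyword table is an illustrative assumption, not from the application.
CONTROL_KEYWORDS = {
    "login": "login_control",
    "select role": "role_select_control",
    "exit game": "exit_control",
}

def classify_control(ocr_text: str) -> str:
    """Return a control type for recognized control text, or 'unknown'."""
    text = ocr_text.strip().lower()
    for keyword, control_type in CONTROL_KEYWORDS.items():
        if keyword in text:
            return control_type
    return "unknown"
```

A control image whose recognized text matches no keyword would fall through to "unknown" and could then be handled by the image-based recognition described later.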
The data processing device may also have Machine Learning (ML) capabilities. ML is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It specializes in studying how computers simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve performance. Machine learning is the core of artificial intelligence and the fundamental way to give computers intelligence; it is applied in all fields of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks.
In the embodiments of the present application, the methods for recognizing the scene type of an image and the control types in an image mainly involve the application of various artificial neural networks; for example, a Convolutional Neural Network (CNN) model may be trained to recognize the scene type of an image.
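The trained CNN model itself is not specified here; as a minimal pure-Python sketch of the convolution operation at the heart of such a network (valid mode, no padding, no learned weights; the function name and example values are illustrative):

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core building block of a CNN.

    `image` and `kernel` are lists of lists of numbers; the output shrinks
    by (kernel size - 1) in each dimension, as with an unpadded conv layer.
    """
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out
```

A real scene-recognition model would stack many such layers with learned kernels, nonlinearities, and a classifier head; this sketch only shows the sliding-window computation itself.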
In order to facilitate understanding of the technical solution of the present application, the scene topology determining method provided in the embodiment of the present application is introduced below with reference to an actual application scene.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario of the scene topology determination method provided in an embodiment of this application. As shown in fig. 1, the scenario includes a terminal 101 and a server 102. In this embodiment, the functional service to be tested, whose scene topology needs to be determined, may be deployed in the terminal 101, and the server 102 may execute the scene topology determination method provided in this embodiment of the application.
The scene topology of a functional service may describe the association relationships between the scenes that the service can switch among, and which control realizes each switch. For example, the scene topology of a game may include: the "login game" scene is associated with the "select role" scene via the "select role" control. That is, in the game image corresponding to the "login game" scene, triggering the "select role" control switches the display to the image corresponding to the "select role" scene.
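Such a topology can be represented as a directed graph whose edges are keyed by (source scene, control) pairs. The sketch below is an illustrative data structure, not the application's implementation; the scene and control names follow the example above:

```python
# Scene topology as a directed graph: (source scene, control) -> target scene.
class SceneTopology:
    def __init__(self):
        self.edges = {}

    def add_relation(self, scene_from, control, scene_to):
        """Record that triggering `control` in `scene_from` leads to `scene_to`."""
        self.edges[(scene_from, control)] = scene_to

    def next_scene(self, scene_from, control):
        """Return the target scene for a (scene, control) pair, or None if unknown."""
        return self.edges.get((scene_from, control))

# The relation from the example: "login game" --select role--> "select role scene".
topo = SceneTopology()
topo.add_relation("login game", "select role", "select role scene")
```

Each (first scene, first control, second scene) triple discovered later in the method would add one more edge to such a graph.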
Based on this, the method by which the server 102 determines the scene topology for the functional service to be tested in the terminal 101 may include: while the terminal 101 runs the functional service to be tested, the server 102 may obtain the current display image of the service and record it as a first display image. Note that each of the first display image, second display image, third display image, and so on in the embodiments of this application may be a single display image.
Next, the server 102 may determine corresponding scene information for the first display image, which may be denoted as first scene information. The scene information corresponding to the display image may be used to identify the scene corresponding to the display image.
The server 102 may also identify at least one control from the first display image and treat one of the controls as a first control. Then, a corresponding control instruction is generated according to the first control, and the control instruction is sent to the terminal 101.
In the embodiments of this application, a control may be any controllable module, such as a virtual button or an in-game item. Triggering a control in its corresponding control mode realizes the function of that control, changing the display image and causing a scene switch. A control may be operated by clicking, dragging, and so on.
After receiving the control instruction, the terminal 101 simulates the operation mode of the first control through the functional service to be tested according to the control instruction, thereby triggering the first control and realizing its corresponding function. In a specific implementation, the control instruction may indicate the position coordinates and the corresponding operation mode of the first control.
Therefore, after the first control is triggered, the display image of the functional service to be tested is changed into a second display image and uploaded to the server 102.
The server 102 may determine scene information corresponding to the second display image, denoted as second scene information. The server 102 may then determine, according to the association relationship among the first scene information, the second scene information, and the first control, that the scene corresponding to the first display image of the functional service to be tested was switched to the scene corresponding to the second display image through the first control. Further, the server 102 may determine from this association relationship the topological relationship among these elements (the scene corresponding to the first display image, the scene corresponding to the second display image, and the first control) in the scene topology of the functional service to be tested.
The following describes the scene topology determining method by taking the function service to be tested as a game as an example.
Assuming that, while the terminal 101 runs the game, the display image showing a character battle is taken as the first display image, the server 102 may determine corresponding first scene information according to the first display image, and the first scene information may identify that the first display image corresponds to a game battle scene.
The server 102 identifies the controls in the first display image and takes the identified "exit game" control as the first control. The server 102 generates a corresponding control instruction for the "exit game" control and sends it to the terminal 101.
After receiving the control instruction, the terminal 101 may, according to the control instruction, cause the game to trigger the "exit game" control by simulating a click, so that the display image changes to the display image of the game login interface (i.e., a second display image). The server 102 determines corresponding second scene information according to the second display image; the second scene information may be used to identify that the second display image corresponds to a game login scene.
The server 102 may then determine the corresponding scene topology according to the association relationship among the game battle scene identified by the first scene information, the game login scene identified by the second scene information, and the "exit game" control.
By repeating this process, taking successive display images of the functional service to be tested as first display images for analysis, the complete scene topology of the functional service to be tested can be determined.
Because the display images of the functional service to be tested are obtained in real time, the normal operation of the service is not affected as it is in the related technique, no parameters need to be reconfigured for different functional services to be tested, and the adaptability and compatibility are strong.
Next, taking a terminal as the aforementioned data processing device as an example, the scene topology determination method provided in the embodiments of this application is described with reference to the drawings. The functional service to be tested is deployed on the terminal.
Referring to fig. 2, which shows a flowchart of a scene topology determination method provided in an embodiment of this application. The method is applied to a terminal, can be executed by the terminal's processor, and includes:
s201: according to a first display image of the function service to be tested, first scene information corresponding to the first display image is determined.
Next, a method provided in the embodiment of the present application will be described by taking a function service of a game as an example.
The first display image may be a display image captured while the game is running. After the terminal acquires the first display image of the game, it may determine the corresponding first scene information based on that image. For example, assume the first display image shows a login-game scene; the determined first scene information may then include information on the "login game scene".
S202: identifying a control in the first display image, the control being a manipulable module.
Following the example in S201, the controls in the first display image may be, for example, a "character selection" control, an "exit game" control, and the like in the game.
After the terminal acquires the first display image, it can identify the controls in the first display image and determine one of them as the first control. For example, assuming the "character selection" control and the "exit game" control are included in the first display image, the terminal may, after recognizing them, determine one of the controls, such as the "character selection" control, as the first control.
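Choosing a first control from the identified controls can be sketched as a simple worklist step; the function below is an illustrative assumption (the application's full traversal algorithm, shown in figs. 5 and 6, is not detailed in this chunk):

```python
def pick_first_control(identified_controls, tried):
    """Choose the first identified control that has not been triggered yet.

    `identified_controls` is the list of controls recognized in the display
    image; `tried` is the set of controls already traversed in this scene.
    Returns None when every control in the image has been exhausted.
    """
    for control in identified_controls:
        if control not in tried:
            return control
    return None
```

After triggering the chosen control and recording the resulting scene transition, the control would be added to `tried` and the step repeated.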
S203: and acquiring a second display image through a control instruction generated according to the first control.
For a control instruction, such as the control instruction generated for the first control in the embodiment of the present application, the instruction may be used to instruct the functional service to be tested to simulate the manipulation mode of the corresponding control, so as to trigger that control in the display screen of the functional service to be tested. The control instruction can indicate the position coordinate and the manipulation mode of the corresponding control in the display screen.
For example: for the first control of "role selection", the control mode may be a click operation mode, and the position coordinate of the first control in the first display screen is M. The control instruction corresponding to the first control generated based on the information can instruct that a click operation is simulated at the position with the coordinate M in the first display screen through the game so as to trigger the 'character selection' control at the position. After triggering the "character selection" control of the game, the display image of the game may be changed from the display image (corresponding to the first display image) of the registered game to the display image of the selected character, and the display image may be used as the second display image.
S204: and determining second scene information corresponding to the second display image.
It can be understood that the change of the display image of the game is a process of game scene switching: the game scene is switched from the login game scene to the character-selection scene. Based on this, the terminal can determine the scene information corresponding to the second display image, that is, the display image of the character-selection scene, as the second scene information. For example: information about the "select character scene" may be included in the second scene information.
S205: determining the scene topology of the functional service to be tested according to the association relation among the first scene information, the second scene information, and the first control.
Thus, the terminal can determine that the login game scene (corresponding to the first scene information) and the select-character scene (corresponding to the second scene information) are switched by the "character selection" control. Based on this, the terminal can record the association relation among the login game scene, the select-character scene, and the "character selection" control in the scene topology.
In a specific implementation, when determining the scene topology of the functional service to be tested, the terminal may generate a corresponding scene tree structure to facilitate storage of the scene topology. The association relation among the scenes in the scene topology can be represented by the association relation among the nodes in the scene tree structure; as such, the nodes in the scene tree may be referred to as scene nodes. In the embodiment of the present application, a display image of the functional service to be tested and its corresponding scene information can be stored in a scene node of the scene tree, so that the scene corresponding to that scene node is embodied by the display image and the scene information.
For example, referring to fig. 3, which shows a scene tree structure diagram of a scene topology provided in the embodiment of the present application, after passing through S201-S205, as shown in fig. 3, it is assumed that a first display image and first scene information (corresponding to a login game scene) may be stored at a position of a scene node 3 in the scene tree, and a second display image and second scene information (corresponding to a selection role scene) may be stored at a position of a scene node x in the scene tree, in which the scene node 3 is extended to the scene node x by a first control (a "role selection" control), the scene node 3 may be a parent node of the scene node x, and the scene node x may be a child node of the scene node 3. In this way, the scene corresponding to the scene node 3 (corresponding to the login game scene) can be switched to the scene corresponding to the scene node x (corresponding to the selected role scene) by triggering the first control (corresponding to the "role selection" control) through the scene tree structure.
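The parent/child bookkeeping described above can be sketched with a minimal tree node in which each edge to a child is labelled by the control that triggers the scene switch. The class below is a hypothetical illustration, not the patent's data structure:

```python
class SceneNode:
    """A node of the scene tree: stores the display image and scene
    information, plus child edges labelled by the triggering control."""
    def __init__(self, scene_info, display_image=None):
        self.scene_info = scene_info        # e.g. "login game scene"
        self.display_image = display_image  # the captured screenshot
        self.parent = None
        self.children = {}                  # control name -> child SceneNode

    def expand(self, control_name, child):
        # Triggering `control_name` switches this scene to `child`'s scene
        child.parent = self
        self.children[control_name] = child
        return child

# Scene node 3 (login game scene) extends to scene node x
# (select character scene) through the "character selection" control
node3 = SceneNode("login game scene")
node_x = node3.expand("character selection", SceneNode("select character scene"))
```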
According to the technical scheme, the scene topology is determined by analyzing the display images of the functional service to be tested in real time. For a functional service whose scene topology needs to be determined, first scene information corresponding to a first display image of that service is determined, and the controls in the first display image are identified. For an identified first control, a corresponding control instruction is generated to instruct the functional service to be tested to trigger that control, thereby obtaining a second display image generated by the functional service based on the trigger. After the scene information of the second display image is determined, the scene topology of the functional service to be tested is determined according to the association relation among the first scene information, the second scene information, and the first control. Because the display images of the functional service to be tested are obtained in real time, the normal operation of the functional service is not affected as in the related art, parameters do not need to be reconfigured for different functional services to be tested, and the adaptability and compatibility are strong.
It can be understood that one display image of the functional service to be tested may include a plurality of controls, and the control mode corresponding to each control is adopted to trigger the control, so that the control can implement a corresponding function in the display image, thereby causing the display image to change and scene switching to occur. As such, in S202, a situation may arise where multiple controls are identified from the first display image. In this case, in order to ensure that a more complete and comprehensive scene topology is determined for the functional service to be tested, in a possible implementation manner, for the first display image, the terminal should traverse all the controls therein to determine the complete scene topology.
Based on this, after S202, i.e. identifying the control in the first display image, the method further comprises:
s301: and judging whether the first display image comprises an un-triggered control, and if the second control is determined to be the un-triggered control, executing S302.
After identifying the controls in the first display image, the terminal may further judge whether the first display image includes an un-triggered control. If a control has not been triggered, that control is marked as the second control, and S302 is executed.
For example: based on the foregoing examples of S201-S205, if it is determined that the "exit game" control in the first display image (corresponding to the login game scene) has not been triggered, the terminal may determine that the "exit game" control is the second control.
S302: and acquiring a third display image according to the control instruction generated by the second control.
Correspondingly, a control instruction can also be generated for the second control. The control instruction generated for the second control can be used to instruct the functional service to be tested to simulate the manipulation mode corresponding to the second control, so as to trigger that control in the first display screen of the functional service to be tested. The control instruction can indicate the position coordinate and the manipulation mode corresponding to the second control.
In a specific implementation, the current display image of the functional service to be tested may no longer be the first display image, for example: the first control is triggered in a simulation mode on a first display image of the functional service to be tested, so that the display image of the current functional service to be tested is a second display image. In this way, the terminal should also trace back the display image of the functional service to be tested to the first display image, so that the second control can be simulated and triggered in the first display image of the functional service to be tested.
Based on this, the control instruction for the second control may also instruct the functional service to be tested to return its current display image to the first display image.
After the second control is triggered, the second control realizes the corresponding function, so that the first display image of the functional service to be tested is changed, and scene switching occurs. The display image after the functional service to be tested is changed can be acquired as a third display image.
For example: based on the foregoing example, after determining the "exit game" control as the second control, assume that the manipulation mode of this second control is a click operation, and that its position coordinate in the first display screen is N. If the current display image of the game is the second display image (corresponding to the select-character scene), the control instruction determined for the second control may instruct the game to trace back from the second display image (corresponding to the select-character scene) to the first display image (corresponding to the login game scene), and to simulate a click operation at the position with coordinate N in the first display image, so as to trigger the second control (the "exit game" control) located at that position.
Therefore, after the second control is triggered, the display interface of the game changes from the first display interface (corresponding to the login game scene) to the display image shown after exiting the game, i.e., the main interface of the host on which the game is deployed. This display image may be taken as the third display image, and the third display image may correspond to the exit-game scene.
S303: and determining third scene information corresponding to the third display image.
Based on the foregoing example, the terminal may determine third scene information corresponding to the third display image, and the determined third scene information may include information about "exit game scene".
S304: determining the scene topology of the functional service to be tested according to the association relation among the first scene information, the third scene information, and the second control.
Therefore, the terminal can determine the association relation among the first scene information (corresponding to the login game scene), the third scene information (corresponding to the exit-game scene), and the second control (the "exit game" control), and accordingly add this association relation between the scenes to the scene topology.
In a specific implementation, based on the foregoing example corresponding to fig. 3, refer to fig. 4, which illustrates a scene tree structure diagram obtained by traversing the controls of the first display image according to an embodiment of the present application. As shown in fig. 4, the terminal may save the third display image and the third scene information (corresponding to the exit-game scene) at the position of scene node y in the scene tree, where scene node 3 is extended to scene node y through the second control (the "exit game" control). Scene node 3 may be a parent node of scene node y, and scene node y may be referred to as a child node of scene node 3. Thus, the scene tree structure can show that the scene corresponding to scene node 3 (the login game scene) is switched to the scene corresponding to scene node y (the exit-game scene) by triggering the second control (the "exit game" control).
In addition, it can be understood that when the first display image simulates and triggers the first control, the functional service to be tested may be switched from a scene corresponding to the first display image to a scene corresponding to the second display image, and when the second control simulates and triggers the first display image, the functional service to be tested may be switched from the scene corresponding to the first display image to a scene corresponding to the third display image. In the scene tree structure, the relationship between the scenes may also be corresponded, that is, the scene node 3 (the scene corresponding to the first display image) is extended to the scene node x (the scene corresponding to the second display image) through the first control, and the scene node 3 (the scene corresponding to the first display image) is extended to the scene node y (the scene corresponding to the third display image) through the second control.
Therefore, when the scene topology of the functional service to be tested is determined, in addition to the association relation among the first scene information, the second scene information, and the first control, the association relation among the first scene information, the third scene information, and the second control is also determined, ensuring that a more complete and comprehensive scene topology is determined for the functional service to be tested.
In brief, the method of S301 to S304 is a process of traversing the controls in the scene corresponding to a display image; in a specific implementation, the controls in the first display image may be traversed by an automated traversal algorithm. The automated traversal algorithm may be executed based on a reinforcement learning algorithm, for example. Referring to fig. 5, which shows an overall flowchart for traversing controls through an automated traversal algorithm according to an embodiment of the present application, as shown in fig. 5, after the terminal acquires the first display image, a corresponding scene node may be created in the scene tree for the first display image, a corresponding control instruction is generated based on an un-triggered control in the scene tree, and the generated control instruction is executed to switch the scenes of the functional service to be tested, thereby expanding the scene tree structure (determining the scene topology).
Referring to fig. 6, which shows a specific flowchart of traversing controls through an automated traversal algorithm according to an embodiment of the present application, applied to a terminal. As shown in fig. 6, in the process of determining the scene topology, the terminal may acquire, based on a scene node in the scene tree, the controls in the display image corresponding to that scene node, and judge whether an un-triggered control is included among them. If so, the second control may be determined, for example according to the priorities of the un-triggered controls (described in detail later), and the control instruction corresponding to the second control is generated. If not, the terminal determines whether a scene node including an un-triggered control still exists in the scene tree structure; if so, a control instruction is generated instructing the functional service to be tested to return to the display image corresponding to that scene node, the second control is determined from the un-triggered controls of that scene node based on control priority, and the corresponding control instruction is generated. If not, the current control traversal process can be exited.
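The traversal flow described above can be condensed into a depth-first loop over un-triggered controls with backtracking. The sketch below is a simplification under stated assumptions: priority selection and instruction generation are elided, and the functional service under test is replaced by a toy scene graph:

```python
def traverse(start, get_controls, trigger):
    # topology maps (scene, control) -> the scene reached by triggering it
    topology = {}
    stack = [start]        # scenes that may still hold un-triggered controls
    visited = {start}
    while stack:
        scene = stack[-1]
        pending = [c for c in get_controls(scene) if (scene, c) not in topology]
        if pending:
            nxt = trigger(scene, pending[0])   # simulate triggering the control
            topology[(scene, pending[0])] = nxt
            if nxt not in visited:
                visited.add(nxt)
                stack.append(nxt)
        else:
            stack.pop()   # backtrack: no un-triggered controls remain here
    return topology

# Toy stand-in for the functional service under test
graph = {
    "login": {"character selection": "select", "exit game": "quit"},
    "select": {"back": "login"},
    "quit": {},
}
topo = traverse("login", lambda s: list(graph[s]), lambda s, c: graph[s][c])
```

Popping a node only when all of its controls have been exercised mirrors the "return to a scene node that still includes an un-triggered control" branch of fig. 6.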
Through an automatic traversal algorithm, the determination efficiency of the scene topology can be improved.
In addition, based on the situation that a plurality of controls are identified from the first display image in S202, in order to shorten the determination time of the scene topology and improve the determination efficiency of the scene topology, in a possible implementation manner, corresponding priorities may be set for the controls in the function service to be tested according to the control types. Wherein the priority of the controls can be used to identify the order in which the controls are traversed. Thus, in S202, the first control may be determined according to the priority of the controls in the first display image.
In the embodiment of the present application, the control types and descriptions set for the controls in the function service to be tested are shown in table 1.
TABLE 1 Control types and descriptions

Control type | Description
Text control (btn_text) | Control composed of text and a background picture
Text icon control (btn_text_icon) | Control composed of an icon above and a text description below
Return icon control (btn_icon_back) | Control in the form of an arrow icon
Close icon control (btn_icon_close) | Control in the form of an "x" icon
Other icon controls (btn_icon) | Controls in the form of icons other than arrow and "x" icons
Item control (btn_item) | Control composed of a rectangular frame and an icon
Therefore, the corresponding priority can be set for the control type, so that the control can have the corresponding priority according to the control type. The priority set for the control type in the embodiment of the present application is shown in table 2.
TABLE 2 priority corresponding to control type
For illustration, referring to fig. 7, which shows a schematic diagram of a scene for determining the first control in the first display image according to an embodiment of the present application: as shown in fig. 7, the first display image may be a display image of a game functional service. The controls in the first display image may have logical levels, where triggering a control at a higher logical level may change the controls at a lower logical level in the display image, while triggering a control at a lower logical level will not change the controls at a higher logical level. In the first display image, the controls in the first-level control menu are higher than the controls in the second-level control menu, and the controls in the second-level control menu are higher than the controls in the third-level control menu. A control at a high logical level is generally of the text icon control type, and can be assigned a priority that causes it to be traversed earlier; a control at a low logical level is generally of the text control type, and can be assigned a priority that causes it to be traversed later.
By the method, the terminal can traverse the control in the first display image of the functional service to be tested based on the priority corresponding to the control type, so that the traversing efficiency is improved, and the determination time of the scene topology is shortened.
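Priority-based selection of the first control can be sketched as a lookup keyed by the control types of Table 1. Since Table 2 is only published as an image, the numeric priorities below are assumptions for illustration (lower number = traversed earlier), not the patent's actual ordering:

```python
# Assumed priorities keyed by the control types of Table 1
PRIORITY = {
    "btn_text_icon": 0,   # high logical level, traversed first
    "btn_item": 1,
    "btn_text": 2,
    "btn_icon": 3,
    "btn_icon_back": 4,   # back/close traversed last, so the traversal
    "btn_icon_close": 5,  # does not leave the scene prematurely
}

def pick_first_control(controls):
    # `controls` are (name, control_type) pairs from control recognition;
    # unknown types fall to the lowest priority
    return min(controls, key=lambda c: PRIORITY.get(c[1], len(PRIORITY)))

first = pick_first_control([("exit game", "btn_icon_close"),
                            ("character selection", "btn_text_icon")])
```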
In this embodiment of the application, in order to improve efficiency and accuracy of identifying the control in the first display image in S202, in one possible implementation manner, the method for identifying the control in the first display image in S202 may include:
s401: a control in the first display image is identified by a control identification model.
In the embodiment of the present application, a control recognition model can be trained in advance, and the control recognition model is deployed in the terminal. The control recognition model may include at least a neural network sub-model, which can implement the following function: identifying the controls included in an input image. In a specific implementation, the neural network sub-model may be a Fast Region-based Convolutional Neural Network (Fast R-CNN) model.
In an actual scene, the content of the display images of the functional service to be tested is usually complex; for example, the display images of a game functional service include various special effects and elements, which increases the difficulty of control recognition. In order to improve the control recognition rate and accuracy of the neural network sub-model, the embodiments of the present application provide three ways to obtain more sufficient and diversified training samples (data enhancement), which are respectively: generating new training samples by combining different controls with background images; cropping regions of interest according to control positions in historical display images to generate new training samples; and generating training samples according to invariant regions extracted from at least two historical display images.
In a specific implementation, extracting the invariant region from two or more historical display images may be implemented with a DEep Local Features (DELF) model. By detecting the invariant features in two or more historical display images, the DELF model can compute the invariant region of the display interface based on a mask, thereby obtaining the control regions. Referring to fig. 8, which shows an effect diagram of obtaining invariant regions from two historical display images according to an embodiment of the present application, as shown in fig. 8, an invariant region including a control (outlined by a gray frame) can be extracted from the two historical images.
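As a deliberately simplified stand-in for the DELF-based computation, the invariant region of two display images can be approximated by a per-pixel mask of unchanged values; real feature matching is far more robust, and this toy version assumes images given as nested lists of pixel values:

```python
def invariant_mask(img_a, img_b, tol=0):
    # True where the pixel value is (nearly) unchanged across the two
    # historical display images; static UI controls fall in such regions
    return [[abs(a - b) <= tol for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

frame1 = [[1, 2], [3, 4]]
frame2 = [[1, 9], [3, 4]]   # one pixel changed (dynamic game content)
mask = invariant_mask(frame1, frame2)
```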
Based on this, when training the neural network submodel, any one or more of the training samples generated in the above three ways may be used for training. Through the three data enhancement modes, a more sufficient training sample can be provided for the neural network submodel, so that the control identification of the neural network submodel can achieve a relatively ideal effect. Meanwhile, the robustness of the neural network submodel can be improved by increasing the diversity of the training samples.
Considering that the controls in the display images of the functional service to be tested usually include text information, in order to improve the control recognition rate of the control recognition model in S401, in a possible implementation the control recognition model may further include a text recognition sub-model. The text recognition sub-model can implement the following function: identifying controls that include text information in an input image and determining their control types. As such, the method of identifying a control in the first display image through the control recognition model in S401 may include:
s501: and determining the number of controls and the position area of the controls in the first display image through the neural network submodel.
S502: and identifying the name of the control through a character identification sub-model according to the position area of the control.
In a particular implementation, the text recognition sub-model may be an Optical Character Recognition (OCR) model. Historical display images of the functional service can be used as training samples of the text recognition sub-model. In addition, the control image regions recognized by the text recognition sub-model from the input historical display images can be recombined to generate new images as further training samples. In this way, the recognition efficiency and accuracy of the text recognition sub-model are improved by increasing the number and diversity of the training samples.
Referring to fig. 9a, which shows a schematic diagram of an image output after recognition by the text recognition sub-model according to an embodiment of the present application: as shown in fig. 9a, the text recognition sub-model recognizes in the image controls such as "one-key extraction" and "delete read" together with the corresponding control types. The control region images recognized from such images can then be combined to obtain a new image, which is used as a training sample of the text recognition sub-model. Referring to fig. 9b, which shows a schematic diagram of a training sample image obtained by combining control image regions according to an embodiment of the present application: when the image shown in fig. 9b is input to the text recognition sub-model, the output result shown in fig. 9c can be obtained. Referring to fig. 9c, which shows a schematic diagram of the output result corresponding to the combined training sample: the output result displays the text content recognized from the image of fig. 9b together with the corresponding accuracy, and so on.
Therefore, adding the text recognition sub-model to the control recognition model improves the control recognition rate and accuracy, and the control type and other related information corresponding to a control can be determined based on the text content of the control in the recognition result.
In the embodiment of the application, the terminal can classify scenes in the functional service to be tested to obtain corresponding scene categories. For example, for a functional service of a game, 8 scene categories may be set for the functional service, which are: an in-copy battle scene category, a load scene category, a login and role selection scene category, a full screen User Interface (UI) menu scene category, a pop-up game screen scene category, a pop-up menu scene category, an in-town battle scene category, an in-town normal play scene category.
In this way, when determining the scene topology for the functional service to be tested, the terminal may add the scene type corresponding to the display image (such as the first display image, the second display image, and the like) to the scene information corresponding to the display image, so that the scene belonging to the same scene type may be determined based on the scene information in the scene topology in the following. Furthermore, based on the topological relation of the same scene type in the scene topology, the automatic control of the functional service for a certain scene type is conveniently carried out, so that the directional automatic coverage of the scene topology is realized.
The terminal may determine the scene category corresponding to a display image of the functional service to be tested by training a scene recognition model in advance, so that the function of determining the scene category corresponding to an input display image can be realized, and by deploying the scene recognition model in the terminal. Accordingly, the scene category corresponding to a display image is determined by the scene recognition model. In a particular implementation, the scene recognition model may be a CNN model.
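The CNN itself is out of scope for a short sketch, but the interface it serves, mapping a display image to one of the scene categories, can be illustrated with a toy nearest-centroid classifier over image feature vectors. This is an assumption for illustration only, not the patent's model:

```python
def classify_scene(image_vec, category_centroids):
    # Assign the scene category whose reference feature vector is closest
    # (a toy stand-in for the trained CNN scene recognition model)
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(category_centroids,
               key=lambda cat: sq_dist(image_vec, category_centroids[cat]))

centroids = {"login and role selection": [0.0, 0.0],
             "in-copy battle": [10.0, 10.0]}
category = classify_scene([1.0, 0.5], centroids)
```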
Wherein the training sample of the scene recognition model may be a display image sample including a scene category label. In an actual scene, the number of display image samples in different scene classes in a training sample may be unbalanced, for example: in the functional service of the game, the number of the display image samples of the login and role selection scene category in the training sample is less than that of the display image samples of other scene categories (such as the intra-copy battle scene category).
Based on this, when determining the training samples of the scene recognition model, it should be ensured as far as possible that the difference between the numbers of display image samples carrying different scene category labels is smaller than a preset value, where display image samples carrying the same scene category label can be understood as display image samples belonging to the same scene category, and the preset value is used to ensure that the numbers of display image samples of the different scene categories in the training samples are close.
In a specific implementation, the manner of ensuring that the number of display image samples of different scene categories in the training samples is close may be that, for a scene category with a smaller number of display image samples, the number of display image samples of the scene category may be increased in a manner of synthesizing images.
By the mode of balancing the number of the display image samples of different scene categories, the trained scene recognition model can be ensured to have higher scene recognition accuracy for the images of each scene category.
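The balancing step can be sketched as padding each under-represented category up to the size of the largest one; simple duplication stands in here for the image-synthesis step described above (names are illustrative):

```python
import random

def balance_samples(samples_by_label, seed=0):
    # Pad every scene category to the size of the largest category
    random.seed(seed)
    target = max(len(v) for v in samples_by_label.values())
    balanced = {}
    for label, samples in samples_by_label.items():
        padded = list(samples)
        while len(padded) < target:
            padded.append(random.choice(samples))  # stand-in for synthesis
        balanced[label] = padded
    return balanced

data = {"login and role selection": ["img1", "img2"],
        "in-copy battle": ["img3", "img4", "img5", "img6"]}
balanced = balance_samples(data)
```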
In addition, a sequence model for identifying scene types of a plurality of display images may be trained, and thus, a process of scene switching of a functional service may be identified based on a time-series relationship of the plurality of display images. The sequence model for identifying the multiple display image scene categories may be, for example, a Recurrent Neural Network (RNN) model, a Long Short-Term Memory (LSTM) model (a deformation model of RNN), or the like.
It can be understood that the scene topology determined in the embodiment of the present application is based on the control to implement the scene switching, and thus, it should be ensured that the acquired first display image has the control.
Based on this, in one possible implementation, the method may further include:
s601: and judging whether the first display image is the specified type image without the control, if not, executing S602.
S602: and executing a step of determining first scene information corresponding to a first display image according to the first display image of the function service to be tested.
The specified-type image may be a pre-specified type of image that does not include any control. For example, in the functional service of a game, the specified-type image may be a built-in web page, a transfer screen, or a screen containing a pop-up window that appears when some item or button is clicked in the game. A built-in web page may be a marketing activity page, a prompt page, a questionnaire page, or the like; a transfer screen may be the screen displayed while the loading bar is being read.
When the terminal determines through this judgment that the first display image is a specified-type image, the terminal can generate a corresponding control instruction instructing the functional service to be tested to return from the specified-type image to a display interface that includes controls, so that the functional service to be tested returns to a display screen including controls.
For the functional service of a game, the terminal may determine whether the first display image is a built-in web page by obtaining and querying the processes of the terminal on which the functional service to be tested runs; if it is found through the query that a process related to the built-in web page has been added, it is determined that the first display image belongs to a built-in web page. A corresponding control instruction is then sent to the terminal to exit the built-in web page and return to the normal game screen.
The terminal may determine whether the first display image is a transfer screen by collecting all transfer screens of the game into a template library and performing image template matching; if the first display image matches a template, the terminal waits for the loading bar to finish and for the game to return to a normal game screen.
The terminal may determine whether the first display image is a picture that includes a pop-up window scene as follows: acquire the display image preceding the first display image, generate a control instruction instructing the game to click a blank area in the display image, and acquire the display image following the first display image after the blank-area click is simulated. It can be understood that, if the first display image is a picture that includes a pop-up window scene, the preceding display image should not include the pop-up window scene; the first display image should include the pop-up window scene; and the following display image should not include the pop-up window scene. Therefore, whether the first display image is a picture including a pop-up window scene can be determined by performing pairwise similarity calculations on these three display images. Meanwhile, in the currently displayed image (the display image following the first display image), the game has already returned to the normal game screen.
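The three-frame similarity check described above can be sketched as follows, using mean absolute pixel difference as a naive stand-in for whatever similarity measure the implementation actually uses; the 0.95 threshold is an assumption.

```python
import numpy as np

def frame_similarity(a, b):
    """Similarity in [0, 1] from mean absolute pixel difference over uint8
    frames (a naive stand-in for a proper perceptual metric)."""
    return 1.0 - float(np.mean(np.abs(a.astype(float) - b.astype(float)))) / 255.0

def looks_like_popup(prev_frame, frame, next_frame, threshold=0.95):
    """A pop-up frame should differ from both neighbours, while the frame
    before the pop-up and the frame after dismissing it should match."""
    return (frame_similarity(prev_frame, next_frame) >= threshold
            and frame_similarity(prev_frame, frame) < threshold
            and frame_similarity(frame, next_frame) < threshold)
```

If `looks_like_popup` returns True, the blank-area click has already dismissed the pop-up, so the traversal can continue from the following frame.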
When it is determined that the first display image is not an image of the specified type, the terminal may perform the step of S602, that is, the step in S201 of determining the first scene information corresponding to the first display image according to the first display image of the function service to be tested.
By this method, display images without controls in the function service to be tested are excluded, the function service to be tested is returned to a display picture that includes controls, and the continuity of the scene topology determination process is ensured.
In this embodiment of the present application, when determining a scene topology for a to-be-tested function service, in order to avoid the scene topology including repeated scenes, in a possible implementation manner, the method further includes:
S701: judging, according to the first scene information, whether the scene corresponding to the first display image is in the scene topology, and if not, executing S702.
S702: the step of identifying a control in the first display image is performed.
It can be understood that the scene topology includes the topological relationships between scenes obtained by analyzing the display images of the function service to be tested. After the first scene information is determined for the acquired first display image, since the first scene information identifies the scene embodied in the content of the first display image, whether the scene corresponding to the first display image is already in the scene topology can be determined according to the first scene information. If the scene corresponding to the first display image is not included in the scene topology, the terminal can execute step S202, that is, identifying the control in the first display image.
The following illustrates the method of S701 to S702 based on the foregoing example of fig. 3. Referring to fig. 10, which shows a flowchart of a method for creating a scene node in a scene tree according to an embodiment of the present application, before creating a corresponding scene node for the first display image, the terminal first determines whether the scene corresponding to the first display image is already a scene in the scene tree, that is, whether the scene has been visited. If not, a scene node is created; if so, no scene node is created for the first display image. When a scene node is created for the first display image, a priority can be set for each control in the first display image based on its control type, and the position coordinates of each control can be obtained. The position coordinates identify the position of the control in the display image.
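A minimal sketch of the visited check (S701/S702) and scene-node creation might look like this; the control types and their priority values are hypothetical, since the patent does not specify them.

```python
# Sketch of the S701/S702 visited check and scene-node creation.
# The control-type-to-priority table is an illustrative assumption.
CONTROL_PRIORITY = {"button": 3, "icon": 2, "text": 1}

class SceneTree:
    def __init__(self):
        self.nodes = {}  # scene_id -> node dict

    def visited(self, scene_id):
        return scene_id in self.nodes

    def create_node(self, scene_id, controls):
        """controls: list of (name, control_type, (x, y)) tuples."""
        if self.visited(scene_id):        # S701: scene already in topology,
            return self.nodes[scene_id]   # so no duplicate node is created
        node = {
            "scene_id": scene_id,
            "controls": sorted(            # per-control priority by type
                ({"name": n, "type": t, "pos": p,
                  "priority": CONTROL_PRIORITY.get(t, 0)}
                 for n, t, p in controls),
                key=lambda c: -c["priority"]),
        }
        self.nodes[scene_id] = node
        return node
```

Sorting the controls by priority at node-creation time means the traversal can later just take controls in stored order.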
It should be noted that the scenes involved in the embodiments of the present application are different from the aforementioned scene categories: a scene category is a generalization of one type of scene, and one scene category may include multiple scenes. For example, for the intra-town battle scene category, the scenes under that category may be a battle of a character at intra-town store A or a battle of a character at intra-town store B; both belong to the intra-town battle scene category.
Therefore, by this method, the scene topology determined for the function service to be tested does not include repeated scenes, ensuring that the scene topology has a streamlined structure.
In the embodiment of the application, after determining the scene topology for the functional service to be tested, in a possible implementation manner, the terminal may further evaluate the determined scene topology. Based on this, the method for determining the scene topology may further include:
S801: after determining the scene topology of the function service to be tested, acquiring a scene identification record and generating a control trigger record according to the control instructions.
The scene identification record may include the scene data identified in the process of determining the scene topology for the function service to be tested, and the control trigger record may include the control data triggered based on control instructions during that process.
In particular implementations, both the scene recognition record and the control trigger record can be saved in log data.
S802: and determining index parameters according to the scene identification record and the control trigger record.
In an actual scenario, the index parameters may include: scene coverage (the number of scenes covered in the current scene topology determination process); control coverage (the number of controls covered in the current scene topology determination process); automation efficiency (the number of scenes and controls covered per unit of time during the scene topology determination process); the number of valid triggers; the influence of the recognition accuracy of the models (the scene recognition model and the control recognition model) on scene coverage, on control coverage, and on automation efficiency; and the influence of the automated traversal algorithm on scene coverage, on control coverage, and on automation efficiency.
The terminal can calculate the index parameters according to the scene identification record and the control trigger record.
In a specific implementation, referring to fig. 11, which shows a schematic flowchart of determining the index parameters according to the scene identification record and the control trigger record, the terminal may read one piece of scene data and the corresponding control trigger data from the log data at a time, determine the corresponding scene node by obtaining data such as the display image, scene ID, and scene category of the scene, and record the scene-related index parameters. In addition, the control-related index parameters are recorded by acquiring the data of the triggered controls (such as the position coordinates of the control and its corresponding control mode). The log data is read continuously until the index parameters are determined for the scene topology.
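A sketch of deriving a few of the index parameters from the two records is given below; the record field names (`scene_id`, `control`, `valid`) are assumed formats for illustration, not the actual log schema.

```python
# Sketch: computing coverage-style index parameters from the scene
# identification record and the control trigger record.

def compute_index_parameters(scene_records, trigger_records,
                             total_scenes, total_controls, elapsed_seconds):
    covered_scenes = {r["scene_id"] for r in scene_records}
    covered_controls = {(r["scene_id"], r["control"]) for r in trigger_records}
    valid_triggers = sum(1 for r in trigger_records if r.get("valid", True))
    return {
        "scene_coverage": len(covered_scenes) / total_scenes,
        "control_coverage": len(covered_controls) / total_controls,
        "scenes_per_minute": 60.0 * len(covered_scenes) / elapsed_seconds,
        "valid_triggers": valid_triggers,
    }
```

Metrics of this kind make different runs of the traversal directly comparable, which is what S802/S803 need for the evaluation index.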
S803: and determining an evaluation index of the scene topology according to the index parameter.
The evaluation index can be used to identify an evaluation result for the scene topology, and can therefore be determined based on the determined index parameters.
By this method, a quantifiable evaluation can be performed on the determined scene topology, and the evaluation indexes are repeatable and stable.
Next, a scene topology determining method provided in the embodiment of the present application will be described with reference to an actual application scene.
Referring to fig. 12a, the figure shows an overall system architecture diagram for a method of scene topology determination according to an embodiment of the present application. As shown in fig. 12a, the terminal is deployed with the function service to be tested, and the server is deployed with an Artificial Intelligence (AI) module including a scene recognition model and a control recognition model. While the terminal runs the function service to be tested, the server determines the scene topology of the function service to be tested.
Referring to fig. 12b, which shows a schematic flowchart of a method for determining a scene topology according to an embodiment of the present application, when determining a scene topology for a function service to be tested, the server may obtain the first display image of the function service to be tested from the terminal, automatically identify the scene information corresponding to the first display image and the control information included in it through an artificial intelligence module comprising a scene recognition model and a control recognition model, and traverse the controls in the first display image, so as to obtain the scene topology of the function service to be tested. The scene topology may be stored in the form of a scene tree structure. When generating the scene tree, the scene nodes included in the scene tree may be created through recognition of the first display image.
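The overall loop of fig. 12b can be sketched as follows, with the AI module's recognition replaced by a lookup table over a hypothetical three-scene app; a depth-first traversal collects (scene, control, next scene) edges as the scene topology.

```python
# Sketch of the traversal loop: "triggering a control" follows an edge in a
# hypothetical app graph standing in for the real scene/control recognition.

APP = {  # scene -> {control_name: next_scene}; illustrative data only
    "main": {"start": "battle", "shop": "store"},
    "battle": {"back": "main"},
    "store": {"back": "main"},
}

def build_scene_topology(start_scene):
    topology = []          # association edges (scene, control, next_scene)
    visited = set()
    stack = [start_scene]
    while stack:
        scene = stack.pop()
        if scene in visited:   # skip scenes already in the topology
            continue
        visited.add(scene)
        for control, next_scene in sorted(APP[scene].items()):
            topology.append((scene, control, next_scene))
            stack.append(next_scene)
    return topology

edges = build_scene_topology("main")
```

The edge list here corresponds to the association relationships among the first scene information, the second scene information, and the triggered control described earlier; storing it as a tree or graph is an implementation choice.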
Therefore, the technical solution provided by the embodiments of the present application can classify scenes in a function service, identify controls in scenes, and simulate control operations without influencing the function service to be tested. Since current automated operation processes include the simulated operation of controls, this solution serves as a basic capability for various automation tasks.
Based on the artificial intelligence based scene topology determining method provided by the foregoing embodiment, an embodiment of the present application provides an artificial intelligence based scene topology determining device, see fig. 13a, which shows a structure diagram of an artificial intelligence based scene topology determining device provided by an embodiment of the present application, where the device includes a determining unit 1301, an identifying unit 1302, and an obtaining unit 1303:
the determining unit 1301 is configured to determine, according to a first display image of a function service to be tested, first scene information corresponding to the first display image;
the identifying unit 1302 is configured to identify a control in the first display image, where the control is a controllable module; the identified controls include a first control;
the obtaining unit 1303 is configured to obtain a second display image according to the control instruction generated by the first control; the control instruction generated by the first control is used for indicating that the first control is triggered through the functional service to be tested;
the determining unit 1301 is configured to determine second scene information corresponding to the second display image;
the determining unit 1301 is configured to determine a scene topology of the functional service to be tested according to an association relationship among the first scene information, the second scene information, and the first control.
Optionally, referring to fig. 13b, this figure shows a structure diagram of an artificial intelligence based scene topology determination apparatus provided in an embodiment of the present application, where the apparatus further includes a determining unit 1304:
the determining unit 1304 is configured to determine, after the control in the first display image is identified, whether the first display image includes an un-triggered control;
the obtaining unit 1303 is configured to obtain a third display image according to a control instruction generated by the second control if it is determined that the second control is an un-triggered control; the control instruction generated by the second control is used for indicating that the second control is triggered by the functional service to be tested;
the determining unit 1301 is configured to determine third scene information corresponding to the third display image;
the determining unit 1301 is configured to determine a scene topology of the functional service to be tested according to an association relationship among the first scene information, the third scene information, and the second control.
Optionally, the identifying unit 1302 is specifically configured to:
identifying a control in the first display image through a control identification model;
the control identification model at least comprises a neural network submodel, and training samples for training the neural network submodel comprise one or more of the following training samples:
combining training samples generated by different controls and images;
cutting out a training sample generated by the region of interest according to the position of the control in the historical display image;
and training samples are generated according to the invariant regions extracted from at least two history display images.
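The first two kinds of training-sample generation can be sketched as follows: compositing a control image onto a background to synthesize a labeled sample, and cropping a region of interest around a known control position in a historical display image. Array shapes and the bounding-box convention are assumptions for illustration.

```python
import numpy as np

def paste_control(background, control, top_left):
    """Compose one training sample by pasting a control image onto a
    background, returning the image and its bounding-box label
    (row0, col0, row1, col1)."""
    img = background.copy()
    r, c = top_left
    h, w = control.shape[:2]
    img[r:r + h, c:c + w] = control
    return img, (r, c, r + h, c + w)

def crop_roi(display_image, bbox):
    """Cut the region of interest around a known control position out of a
    historical display image, to use as a training sample."""
    r0, c0, r1, c1 = bbox
    return display_image[r0:r1, c0:c1]
```

The third kind (invariant regions across historical display images) would compare frames and keep the unchanged areas, which is omitted here.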
Optionally, the identifying unit 1302 is further specifically configured to:
the control identification model also comprises a character identification submodel; determining the number of controls and a position area of the controls in the first display image through the neural network submodel;
and identifying the name of the control through the character recognition sub-model according to the position area of the control.
Optionally, the identifying unit 1302 is further specifically configured to:
the scene information includes a scene category, and for a display image that is the first display image or the second display image, the corresponding scene category is determined by:
determining the scene category corresponding to the display image through a scene recognition model; the scene recognition model is obtained by training on display image samples that include scene category labels, wherein the difference in quantity between the display image samples bearing each scene category label is smaller than a preset value.
Optionally, the control has a priority set according to the control type, and the first control is determined according to the priority.
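Priority-based selection of the first control might be sketched as below; the type-to-priority table is a hypothetical assignment, since the embodiment does not fix concrete values.

```python
# Sketch: choosing the first control to trigger by a priority assigned per
# control type. The PRIORITY table is an illustrative assumption.

PRIORITY = {"button": 3, "tab": 2, "link": 1}

def pick_first_control(controls, triggered=()):
    """Return the highest-priority control whose name is not yet in
    `triggered`, or None when every control has been exhausted."""
    candidates = [c for c in controls if c["name"] not in triggered]
    if not candidates:
        return None
    return max(candidates, key=lambda c: PRIORITY.get(c["type"], 0))
```

Once every control of a scene has been triggered, returning None signals the traversal to back out to a previous scene.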
Optionally, the determining unit 1304 is further specifically configured to:
judging whether the first display image is a specified type image without a control;
and if not, executing the step of determining first scene information corresponding to the first display image according to the first display image of the functional service to be tested.
Optionally, the determining unit 1304 is further specifically configured to:
judging whether a scene corresponding to the first display image is in the scene topology or not according to the first scene information;
and if not, executing the step of identifying the control in the first display image.
Optionally, referring to fig. 13c, this figure shows a structure diagram of an artificial intelligence based scene topology determination apparatus provided in an embodiment of the present application, where the apparatus further includes an evaluation unit 1305, where the evaluation unit 1305 is configured to:
after determining the scene topology of the functional service to be tested, acquiring a scene identification record and generating a control trigger record according to a control instruction;
determining index parameters according to the scene identification record and the control trigger record;
and determining an evaluation index of the scene topology according to the index parameter.
According to the technical scheme, the scene topology is determined by adopting a mode of analyzing the display image of the functional service to be tested in real time. According to the function service to be tested, which needs to determine scene topology, according to a first display image of the function service to be tested, first scene information corresponding to the first display image is determined, and a control in the first display image is identified. And generating a corresponding control instruction aiming at the identified first control to indicate that the first control is triggered through the functional service to be tested, so as to obtain a second display image generated in the functional service to be tested based on the trigger, and after determining the scene information of the second display image, determining the scene topology of the functional service to be tested according to the incidence relation among the first scene information, the second scene information and the first control. The display image of the functional service to be tested is obtained in real time, so that the normal work of the functional service to be tested is not influenced like the related technology, the parameters do not need to be reconfigured for different functional services to be tested, and the adaptability and the compatibility are strong.
The embodiment of the present application further provides a device for determining scene topology based on artificial intelligence, which is described below with reference to the accompanying drawings. Referring to fig. 14, an embodiment of the present application provides a scene topology determining device 1400 based on artificial intelligence, where the device 1400 may also be a terminal device, and the terminal device may be any intelligent terminal including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA for short), a Point of Sales (POS for short), a vehicle-mounted computer, and the terminal device is taken as a mobile phone as an example:
fig. 14 is a block diagram illustrating a partial structure of a mobile phone related to a terminal device provided in an embodiment of the present application. Referring to fig. 14, the handset includes: radio Frequency (RF) circuit 1410, memory 1420, input unit 1430, display unit 1440, sensor 1450, audio circuit 1460, wireless fidelity (WiFi) module 1470, processor 1480, and power supply 1490. Those skilled in the art will appreciate that the handset configuration shown in fig. 14 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 14:
The RF circuit 1410 may be used for receiving and transmitting signals during a message transmission or a call; in particular, it passes received downlink information from the base station to the processor 1480 for processing, and transmits uplink data to the base station. In general, the RF circuit 1410 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1410 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 1420 may be used to store software programs and modules, and the processor 1480 executes various functional applications and data processing of the cellular phone by operating the software programs and modules stored in the memory 1420. The memory 1420 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, memory 1420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The input unit 1430 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. In particular, the input unit 1430 may include a touch panel 1431 and other input devices 1432. The touch panel 1431, also referred to as a touch screen, may collect touch operations performed by a user on or near the touch panel 1431 (for example, operations performed by the user on or near the touch panel 1431 by using any suitable object or accessory such as a finger or a stylus pen), and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 1431 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device and converts it to touch point coordinates, which are provided to the processor 1480 and can receive and execute commands from the processor 1480. In addition, the touch panel 1431 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1431, the input unit 1430 may also include other input devices 1432. In particular, other input devices 1432 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1440 may be used to display information input by or provided to the user and various menus of the mobile phone. The Display unit 1440 may include a Display panel 1441, and optionally, the Display panel 1441 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, touch panel 1431 can overlay display panel 1441, and when touch panel 1431 detects a touch operation on or near touch panel 1431, it can transmit to processor 1480 to determine the type of touch event, and then processor 1480 can provide a corresponding visual output on display panel 1441 according to the type of touch event. Although in fig. 14, the touch panel 1431 and the display panel 1441 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1431 and the display panel 1441 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 1450, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 1441 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 1441 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 1460, speaker 1461, and microphone 1462 may provide an audio interface between the user and the mobile phone. The audio circuit 1460 can transmit the electrical signal converted from received audio data to the speaker 1461, where it is converted into a sound signal and output; on the other hand, the microphone 1462 converts collected sound signals into electrical signals, which are received by the audio circuit 1460 and converted into audio data. The audio data is then output to the processor 1480 for processing, after which it may be transmitted via the RF circuit 1410 to, for example, another mobile phone, or output to the memory 1420 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through a WiFi module 1470, and provides wireless broadband internet access for the user. Although fig. 14 shows the WiFi module 1470, it is understood that it does not belong to the essential constitution of the handset and can be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 1480, which is the control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1420 and calling data stored in the memory 1420, thereby integrally monitoring the mobile phone. Alternatively, the processor 1480 may include one or more processing units; preferably, the processor 1480 may integrate an application processor, which handles primarily operating systems, user interfaces, and applications, among others, with a modem processor, which handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1480.
The handset also includes a power supply 1490 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 1480 via a power management system to provide management of charging, discharging, and power consumption via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In this embodiment, the processor 1480 included in the terminal device also has the following functions:
determining first scene information corresponding to a first display image according to the first display image of a function service to be tested;
identifying a control in the first display image, wherein the control is a controllable module; the identified controls include a first control;
acquiring a second display image through a control instruction generated according to the first control; the control instruction generated by the first control is used for indicating that the first control is triggered through the functional service to be tested;
determining second scene information corresponding to the second display image;
and determining the scene topology of the functional service to be tested according to the incidence relation among the first scene information, the second scene information and the first control.
The apparatus for determining a scene topology based on artificial intelligence according to the embodiment of the present application may be a server. Please refer to fig. 15, which is a structural diagram of the server 1500 according to the embodiment of the present application. The server 1500 may vary considerably in configuration or performance, and may include one or more Central Processing Units (CPUs) 1522 (e.g., one or more processors), a memory 1532, and one or more storage media 1530 (e.g., one or more mass storage devices) for storing an application program 1542 or data 1544. The memory 1532 and the storage media 1530 may be transient or persistent storage. The program stored on a storage medium 1530 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processing unit 1522 may be configured to communicate with the storage medium 1530 and execute, on the server 1500, the series of instruction operations in the storage medium 1530.
The server 1500 may also include one or more power supplies 1526, one or more wired or wireless network interfaces 1550, one or more input/output interfaces 1558, and/or one or more operating systems 1541, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 15.
The CPU 1522 is configured to execute the following steps:
determining first scene information corresponding to a first display image according to the first display image of a function service to be tested;
identifying a control in the first display image, wherein the control is a controllable module; the identified controls include a first control;
acquiring a second display image through a control instruction generated according to the first control; the control instruction generated by the first control is used for indicating that the first control is triggered through the functional service to be tested;
determining second scene information corresponding to the second display image;
and determining the scene topology of the functional service to be tested according to the incidence relation among the first scene information, the second scene information and the first control.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical division, and other divisions are possible in practice; for instance, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments may be completed by hardware instructed by a program. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium may be at least one of the following media capable of storing program code: a read-only memory (ROM), a RAM, a magnetic disk, or an optical disk.
It should be noted that the embodiments in this specification are described in a progressive manner; identical and similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus and system embodiments are described relatively simply because they are substantially similar to the method embodiments; for related points, reference may be made to the descriptions of the method embodiments. The apparatus and system embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.
The above description is only one specific embodiment of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that can readily be conceived by those skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A scene topology determination method based on artificial intelligence is characterized by comprising the following steps:
determining first scene information corresponding to a first display image according to the first display image of a functional service to be tested;
identifying a control in the first display image through a control identification model, wherein the control is a controllable module; the identified controls include a first control;
acquiring a second display image according to a control instruction generated by the first control; the control instruction generated by the first control is used for indicating that the first control is triggered through the functional service to be tested;
determining second scene information corresponding to the second display image;
determining the scene topology of the functional service to be tested according to the association relationship among the first scene information, the second scene information and the first control;
the control identification model at least comprises a neural network submodel, and the training samples for training the neural network submodel comprise at least one of the following training samples:
a training sample generated by cutting out a region of interest according to the position of a control in a historical display image;
and a training sample generated by detecting invariant features in at least two historical display images through a deep local feature (DELF) extraction model, calculating an invariant region of the display interface based on a mask, and generating the training sample according to the invariant region.
2. The method of claim 1, wherein after the identifying the control in the first display image, the method further comprises:
judging whether the first display image comprises an un-triggered control or not;
if a second control is determined to be an un-triggered control, acquiring a third display image according to a control instruction generated by the second control; the control instruction generated by the second control is used for indicating that the second control is triggered through the functional service to be tested;
determining third scene information corresponding to the third display image;
and determining the scene topology of the functional service to be tested according to the association relationship among the first scene information, the third scene information and the second control.
3. The method of claim 1, wherein the control recognition model further comprises a text recognition submodel; the identifying, by a control identification model, a control in the first display image includes:
determining the number of controls and a position area of the controls in the first display image through the neural network submodel;
and identifying the name of the control through the character recognition sub-model according to the position area of the control.
4. The method according to claim 1, wherein the scene information comprises a scene category, and for a display image that is the first display image or the second display image, the corresponding scene category is determined by:
determining, through a scene recognition model, the scene category corresponding to the display image; the scene recognition model is obtained by training with display image samples comprising scene category labels, wherein the difference in the number of display image samples having the same scene category label is smaller than a preset value.
5. The method of claim 1, wherein the controls have a priority set according to a control type, and the first control is determined according to the priority.
6. The method of claim 1, further comprising:
judging whether the first display image is a specified type image without a control;
and if not, executing the step of determining first scene information corresponding to the first display image according to the first display image of the functional service to be tested.
7. The method of claim 1, further comprising:
judging whether a scene corresponding to the first display image is in the scene topology or not according to the first scene information;
and if not, executing the step of identifying the control in the first display image.
8. The method according to any one of claims 1-7, further comprising:
after determining the scene topology of the functional service to be tested, acquiring a scene identification record and generating a control trigger record according to a control instruction;
determining index parameters according to the scene identification record and the control trigger record;
and determining an evaluation index of the scene topology according to the index parameter.
9. The method of claim 1, wherein the training samples for training the neural network submodel further comprise:
training samples generated by combining different controls with images.
10. An artificial intelligence based scene topology determination device, characterized in that the device comprises a determination unit, a recognition unit and an acquisition unit:
the determining unit is used for determining first scene information corresponding to a first display image according to the first display image of the functional service to be detected;
the identification unit is used for identifying a control in the first display image, and the control is a controllable module; the identified controls include a first control;
the acquisition unit is used for acquiring a second display image according to the control instruction generated by the first control; the control instruction generated by the first control is used for indicating that the first control is triggered through the functional service to be tested;
the determining unit is used for determining second scene information corresponding to the second display image;
the determining unit is configured to determine the scene topology of the functional service to be tested according to the association relationship among the first scene information, the second scene information, and the first control;
the identification unit is specifically configured to:
identifying a control in the first display image through a control identification model;
the control identification model at least comprises a neural network submodel, and the training samples for training the neural network submodel comprise at least one of the following training samples:
a training sample generated by cutting out a region of interest according to the position of a control in a historical display image;
and a training sample generated by detecting invariant features in at least two historical display images through a deep local feature (DELF) extraction model, calculating an invariant region of the display interface based on a mask, and generating the training sample according to the invariant region.
11. The apparatus according to claim 10, further comprising a judging unit:
the judging unit is used for judging whether the first display image comprises an un-triggered control or not after the control in the first display image is identified;
the obtaining unit is configured to, if a second control is determined to be an un-triggered control, obtain a third display image according to a control instruction generated by the second control; the control instruction generated by the second control is used for indicating that the second control is triggered through the functional service to be tested;
the determining unit is used for determining third scene information corresponding to the third display image;
the determining unit is configured to determine a scene topology of the functional service to be tested according to an association relationship among the first scene information, the third scene information, and the second control.
12. The apparatus according to claim 10, wherein the identification unit is further specifically configured to:
the control identification model further comprises a character recognition submodel; the identification unit determines the number of controls and a position area of the controls in the first display image through the neural network submodel;
and identifying the name of the control through the character recognition sub-model according to the position area of the control.
13. The apparatus of claim 10, wherein the training samples for training the neural network submodel further comprise:
training samples generated by combining different controls and images.
14. An apparatus for artificial intelligence based scene topology determination, the apparatus comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the artificial intelligence based scene topology determination method of any of claims 1-9 according to instructions in the program code.
15. A computer-readable storage medium for storing program code for performing the artificial intelligence based scene topology determination method of any of claims 1-9.
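Claim 1's second source of training samples (invariant regions shared by historical display images) could be sketched as follows. This is a hypothetical illustration using NumPy: the patent's DELF-based local-feature matching is replaced here by a simple per-pixel difference mask, and all function names are assumptions, not from the patent.

```python
# Hypothetical sketch: compute an invariant region across historical
# display images and crop it as a training sample. The patent matches
# DELF local features; this sketch approximates the idea with a
# per-pixel difference against the first image.
import numpy as np

def invariant_region_mask(images, tol=0):
    """Return a boolean mask marking pixels that stay unchanged (within
    `tol`) across all history display images (each an H x W array)."""
    stack = np.stack(images)                     # shape (N, H, W)
    diff = np.abs(stack - stack[0]).max(axis=0)  # max deviation per pixel
    return diff <= tol                           # True where the UI never changed

def crop_invariant_sample(image, mask):
    """Cut the bounding box of the invariant region out of one display
    image to use as a training sample; returns None if nothing is invariant."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

The resulting crops capture stable interface chrome (persistent buttons, menus), which is what makes them useful as control-recognition training samples.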
CN201910989250.5A 2019-10-17 2019-10-17 Scene topology determination method and device based on artificial intelligence Active CN110750193B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910989250.5A CN110750193B (en) 2019-10-17 2019-10-17 Scene topology determination method and device based on artificial intelligence


Publications (2)

Publication Number Publication Date
CN110750193A CN110750193A (en) 2020-02-04
CN110750193B true CN110750193B (en) 2022-01-14

Family

ID=69278727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910989250.5A Active CN110750193B (en) 2019-10-17 2019-10-17 Scene topology determination method and device based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN110750193B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112162672A (en) * 2020-10-19 2021-01-01 腾讯科技(深圳)有限公司 Information flow display processing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107248190A (en) * 2017-05-11 2017-10-13 珠海金山网络游戏科技有限公司 The scene graph design method and system of a kind of three-dimensional game engine
WO2018179532A1 (en) * 2017-03-27 2018-10-04 Mitsubishi Electric Corporation System and method for representing point cloud of scene
CN109685746A (en) * 2019-01-04 2019-04-26 Oppo广东移动通信有限公司 Brightness of image method of adjustment, device, storage medium and terminal
CN109857668A (en) * 2019-02-03 2019-06-07 苏州市龙测智能科技有限公司 UI automated function test method, test device, test equipment and storage medium
CN110287111A (en) * 2019-06-21 2019-09-27 深圳前海微众银行股份有限公司 A kind of method for generating test case and device of user interface




Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40020893

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant