CN114168429A - Error reporting analysis method and device, computer equipment and storage medium - Google Patents

Error reporting analysis method and device, computer equipment and storage medium

Info

Publication number: CN114168429A
Authority: CN (China)
Prior art keywords: test, error, error reporting, instruction, terminal
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202111555261.6A
Other languages: Chinese (zh)
Inventor: 王宇 (Wang Yu)
Current assignee: Pingan Payment Technology Service Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Pingan Payment Technology Service Co Ltd
Application filed by Pingan Payment Technology Service Co Ltd
Priority to CN202111555261.6A
Publication of CN114168429A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/3006 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G06F 11/3051 Monitoring arrangements for monitoring the configuration of the computing system or of the computing system component, e.g. monitoring the presence of processing resources, peripherals, I/O links, software programs
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G06F 11/3692 Test management for test results analysis

Abstract

The application discloses an error reporting analysis method and apparatus, a computer device, and a storage medium. The method comprises the following steps: monitoring, by a preset monitoring terminal, error reporting information sent by any access terminal; extracting user behavior data of the target terminal corresponding to the error reporting information, where the target terminal is the access terminal that sent the error reporting information and the user behavior data is the plurality of user instructions generated by user operations while the error reporting information was produced; collecting the terminal operating environment of the target terminal to generate an environment image file, and constructing a plurality of test containers from the image file; generating a plurality of test cases from the user behavior data and inputting them into the test containers respectively, where the number of test cases equals the number of test containers; and generating an error reporting analysis result for the error reporting information based on the test results of the test cases run by the test containers.

Description

Error reporting analysis method and device, computer equipment and storage medium
Technical Field
The embodiment of the invention relates to the field of data processing, in particular to an error reporting analysis method and device, computer equipment and a storage medium.
Background
A distributed system is a software system built on top of a network. It is precisely because of the software that a distributed system is highly cohesive and transparent; the distinction between a network and a distributed system therefore lies more in the high-level software, in particular the operating system, than in the hardware. In a distributed system, execution errors occur when access terminals execute tasks, and such errors are generally caused by user misoperation at the access terminal.
The inventor found in research that, in the prior art, when an execution error occurs at an access terminal, the terminal cannot analyze the erroneous behavior behind the error report in time; it merely records the operation error in an execution log and sends that log to the administrator's terminal, and the administrator then troubleshoots manually based on experience, so troubleshooting efficiency is extremely low.
Disclosure of Invention
The embodiments of the invention provide an error reporting analysis method and apparatus, a computer device, and a storage medium for efficiently troubleshooting execution errors at an access terminal.
To solve the above technical problem, an embodiment of the present invention adopts the following technical solution: an error reporting analysis method is provided, including:
monitoring error reporting information sent by any access terminal according to a preset monitoring terminal;
extracting user behavior data of a target terminal corresponding to the error reporting information, wherein the target terminal is an access terminal for sending the error reporting information, and the user behavior data is a plurality of user instructions generated by user operation during generation of the error reporting information;
acquiring a terminal operating environment of the target terminal to generate an environment image file, and constructing a plurality of test containers according to the image file, wherein the plurality of test containers are all used for simulating the operating environment of the target terminal;
generating a plurality of test cases according to the user behavior data, and inputting the test cases into the test containers respectively, wherein the number of the test cases is the same as that of the test containers, and differences exist among the test cases;
and generating an error reporting analysis result aiming at the error reporting information based on the test result of each test case run by the plurality of test containers.
Optionally, before monitoring error reporting information sent by any access terminal according to a preset monitoring terminal, the method includes:
the access terminal executes a randomly generated delay task, and when the delay task of any access terminal is achieved, the access terminal sends a referral request to other access terminals;
after the access terminal sends the referral request, receiving referral replies returned by the other access terminals in response to the referral request;
and counting the number of the referral replies, and when the number of the referral replies is greater than a preset threshold value, electing any access terminal as a monitoring terminal.
Optionally, the monitoring error information sent by any access terminal according to a preset monitoring terminal includes:
embedding points in a task thread of each access terminal to generate a monitoring tool;
when the monitoring tool monitors that any task thread executes an error report, reading a plurality of user instructions in a generating time period of the task thread;
and writing the plurality of user instructions into a DEL file in order of generation time to generate the error reporting information, and sending the error reporting information to the monitoring terminal.
Optionally, the acquiring the terminal operating environment of the target terminal to generate an environment image file, and constructing a plurality of test containers according to the image file includes:
acquiring an operating environment of the target terminal, wherein the operating environment comprises a system type and system configuration parameters of the target terminal;
calling a corresponding system file according to the system type, and configuring the system file according to the system configuration parameters to generate the image file;
and activating a plurality of containers according to the image file, sending the image file to the plurality of containers for test environment configuration, and constructing the plurality of test containers.
Optionally, the generating a plurality of test cases according to the user behavior data, and inputting the plurality of test cases into the plurality of test containers respectively includes:
arranging user instructions in the user behavior data according to a time sequence to generate an instruction time sequence;
generating an instruction topological graph according to the instruction timing sequence, wherein the instruction topological graph comprises a plurality of instruction nodes, and each instruction node corresponds to one user instruction;
replacing any one instruction node in the instruction topological graph according to a preset replacement strategy to generate a plurality of correction topological graphs;
and generating a plurality of test cases according to the instruction logics of the plurality of corrected topological graphs, and respectively inputting the plurality of test cases into the plurality of test containers.
Optionally, the generating an error report analysis result for the error report information based on the test result of each test case run by the plurality of test containers includes:
collecting test results output by the plurality of test containers;
selecting a test result which runs correctly in the test results as a target test result;
comparing and analyzing the user instruction represented by the correction topological graph corresponding to the target test result with the user behavior data, and determining an error instruction in the user behavior data;
and generating an error reporting analysis result aiming at the error reporting information according to the error instruction.
Optionally, after generating an error report analysis result for the error report information based on the test result of each test case run by the plurality of test containers, the method includes:
sending the error reporting analysis result to the target terminal;
and the target terminal prompts an error instruction of a user according to the error reporting analysis result and stores the error reporting information and the error reporting analysis result in a correlation manner.
To solve the above technical problem, an embodiment of the present invention further provides an error reporting analysis apparatus, including:
the monitoring module is used for monitoring error reporting information sent by any access terminal according to a preset monitoring terminal;
the extraction module is used for extracting user behavior data of a target terminal corresponding to the error reporting information, wherein the target terminal is an access terminal for sending the error reporting information, and the user behavior data is a plurality of user instructions generated by user operation during generation of the error reporting information;
the acquisition module is used for acquiring a terminal operating environment of the target terminal to generate an environment image file and constructing a plurality of test containers according to the image file, wherein the plurality of test containers are all used for simulating the operating environment of the target terminal;
the processing module is used for generating a plurality of test cases according to the user behavior data and inputting the test cases into the test containers respectively, wherein the number of the test cases is the same as that of the test containers, and differences exist among the test cases;
and the execution module is used for generating an error reporting analysis result aiming at the error reporting information based on the test result of each test case run by the plurality of test containers.
Optionally, the error analysis apparatus further includes:
the first generation submodule is used for the access terminals to execute the randomly generated delay tasks, and when the delay tasks of any access terminal are achieved, the access terminal sends a referral request to other access terminals;
the first processing submodule is used for receiving a referral reply replied by other access terminals based on the referral request after the referral request is sent by any access terminal;
and the first execution submodule is used for counting the number of the referral replies, and when the number of the referral replies is greater than a preset threshold value, any access terminal is elected as a monitoring terminal.
Optionally, the error analysis apparatus further includes:
the first monitoring submodule is used for embedding points in the task thread of each access terminal to generate a monitoring tool;
the second processing submodule is used for reading a plurality of user instructions in any task thread generation time period when the monitoring tool monitors that any task thread executes an error report;
and the second execution submodule is used for writing the plurality of user instructions into a DEL file in order of generation time to generate the error reporting information, and sending the error reporting information to the monitoring terminal.
Optionally, the error analysis apparatus further includes:
the first acquisition submodule is used for acquiring the operating environment of the target terminal, wherein the operating environment comprises the system type and the system configuration parameters of the target terminal;
the third processing submodule is used for calling a corresponding system file according to the system type, configuring the system file according to the system configuration parameters, and generating the image file;
and the third execution submodule is used for activating a plurality of containers according to the image file, sending the image file to the plurality of containers for test environment configuration, and then constructing the plurality of test containers.
Optionally, the error analysis apparatus further includes:
the first arrangement submodule is used for arranging the user instructions in the user behavior data according to a time sequence to generate an instruction time sequence;
the second generation submodule is used for generating an instruction topological graph according to the instruction time sequence, wherein the instruction topological graph comprises a plurality of instruction nodes, and each instruction node corresponds to one user instruction;
the fourth processing submodule is used for replacing any one instruction node in the instruction topological graph according to a preset replacement strategy to generate a plurality of correction topological graphs;
and the fourth execution submodule is used for generating a plurality of test cases according to the instruction logic of the plurality of corrected topological graphs and inputting the plurality of test cases into the plurality of test containers respectively.
Optionally, the error analysis apparatus further includes:
the second acquisition submodule is used for acquiring the test results output by the plurality of test containers;
the first selection submodule is used for selecting a test result which runs correctly in the test results as a target test result;
the fifth processing submodule is used for comparing and analyzing the user instruction represented by the correction topological graph corresponding to the target test result with the user behavior data to determine an error instruction in the user behavior data;
and the fifth execution submodule is used for generating an error reporting analysis result aiming at the error reporting information according to the error instruction.
Optionally, the error analysis apparatus further includes:
the sixth processing submodule is used for sending the error analysis result to the target terminal;
and the sixth execution submodule is used for prompting an error instruction of a user by the target terminal according to the error reporting analysis result and carrying out associated storage on the error reporting information and the error reporting analysis result.
In order to solve the above technical problem, an embodiment of the present invention further provides a computer device, which includes a memory and a processor, where the memory stores computer-readable instructions which, when executed by the processor, cause the processor to execute the steps of the error reporting analysis method.
To solve the above technical problem, an embodiment of the present invention further provides a computer storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the error reporting analysis method.
The embodiments of the invention have the following beneficial effects: error reporting information from the access terminals in the distributed system is collected by the monitoring terminal; a plurality of test containers with the same operating environment as the target terminal that sent the error reporting information are then constructed by way of virtual containers; the user behavior at the time the error was triggered is collected, corrected in a targeted manner, and converted into a plurality of test cases; and finally the test cases are run in the test containers to simulate the operating process of the target terminal. Because the test cases are corrected and therefore differ from the user instructions in the original user behavior data, these differences, combined with the run results, determine which erroneous user instruction caused the error reporting information, which shortens the time needed to handle the error, replaces manual troubleshooting, and improves the efficiency of locating the cause of the error.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of a basic flow chart of an error reporting analysis method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of determining a monitoring terminal according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating the generation of error messages according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of the construction of a test vessel according to one embodiment of the present application;
FIG. 5 is a flowchart illustrating a process of generating a test case and performing a test according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating the generation of error-reporting analysis results according to an embodiment of the present application;
fig. 7 is a flowchart illustrating sending an error analysis result to a target terminal according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a basic structure of an error reporting analysis apparatus according to an embodiment of the present application;
fig. 9 is a block diagram of a basic structure of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, a "terminal" includes both devices that are wireless signal receivers, devices that have only wireless signal receivers without transmit capability, and devices that have receive and transmit hardware, devices that have receive and transmit hardware capable of performing two-way communication over a two-way communication link, as will be understood by those skilled in the art. Such a device may include: a cellular or other communication device having a single line display or a multi-line display or a cellular or other communication device without a multi-line display; PCS (Personal Communications Service), which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "terminal" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "terminal" used herein may also be a communication terminal, a web-enabled terminal, a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a Mobile phone with music/video playing function, and may also be a smart tv, a set-top box, etc.
Referring to fig. 1, fig. 1 is a schematic view illustrating a basic flow of an error reporting analysis method according to the present embodiment.
As shown in fig. 1, an error analysis method includes:
s110, monitoring error reporting information sent by any access terminal according to a preset monitoring terminal;
the error reporting analysis method in this embodiment is applicable to a distributed computer system. Each participating node in the distributed system is defined as an access terminal.
Each access terminal is monitored by the CAT system. CAT (Central Application Tracking) is an open-source real-time distributed application monitoring platform developed in Java that provides comprehensive monitoring services and business decision support. CAT defines a basic monitoring model and supports both real-time monitoring and monitoring according to user-defined settings; for example, it can perform distributed full-link tracing, in which monitoring information is collected through embedded points.
In this embodiment a monitoring terminal is provided, and the monitoring terminal is elected by the access terminals. The election works as follows. After the previous data period ends, the previously elected monitoring terminal finishes its monitoring task, and every access terminal in the distributed system randomly generates a delay task. When an access terminal's delay task completes, i.e. the delay is cleared, that terminal sends a referral request to the other access terminals in the system, asking them to agree to its election as the monitoring terminal. An access terminal that receives a referral request agrees to the first request it receives, provided it has not yet sent its own referral request, and rejects every request received after replying; if it has already sent its own referral request, it does not reply to any other terminal's request. The reply to a referral request is a referral reply, and an access terminal that receives a referral reply has effectively received an election vote. When the number of votes an access terminal receives exceeds half of the total number of terminals, it declares itself the monitoring terminal and sends the election result to the other terminals; on receiving the result, the other access terminals stop sending referral requests and revert to ordinary access terminals. This election scheme prevents the whole system from stalling when the current monitoring terminal goes down, and the random rotation of the monitoring role improves its concealment and makes targeted attacks on the monitoring terminal harder.
The monitoring terminal receives error reporting information from the whole system. Error reporting information arises when an execution error occurs while any access terminal in the system is executing a user instruction. After the execution error occurs, the corresponding access terminal collects and packages the user's operation instructions in the time period corresponding to the execution error to generate a data packet; in this embodiment, that data packet is sent to the monitoring terminal as the error reporting information.
After receiving the error information sent by any access terminal, the monitoring terminal needs to analyze the reason for the occurrence of the error information.
S120, extracting user behavior data of a target terminal corresponding to the error reporting information, wherein the target terminal is an access terminal for sending the error reporting information, and the user behavior data are a plurality of user instructions generated by user operation during generation of the error reporting information;
and after receiving the error reporting information, the monitoring terminal performs data analysis on the error reporting information to obtain user behavior data of the target terminal. The target terminal is an access terminal that sends error information. In order to determine that error information caused by the instruction of the user appears, the user instruction of the user within a period of time needs to be collected, and therefore, a plurality of user instructions of the user are obtained through analysis. The number of user instructions referred to in this embodiment is 2, 3, 4, or more.
S130, acquiring a terminal operating environment of the target terminal to generate an environment image file, and constructing a plurality of test containers according to the image file, wherein the plurality of test containers are all used for simulating the operating environment of the target terminal;
and after the monitoring terminal obtains the error report information, reading the running environment of the target terminal, wherein the running environment comprises the system type of the target terminal, system configuration parameters, an API (application programming interface) interface and an SDK (software development kit) interface of the task thread.
An environment image file of the target terminal is generated from the target terminal's system type (application program type) and system configuration parameters. The environment image file packages a specific series of files into a single file in a container-executable format, so that a container simulating the environment can be installed from it.
After the environment image file is generated, a running instance for executing the environment file needs to be started, and the running instance is a test container. The test container will share the operating system/kernel of the host on which it is located. Namely, the test container may share the operating system/kernel of the monitoring terminal, and the test container may simulate the operating environment of the target terminal when executing the user instruction only by sending the system type and the system configuration parameters of the execution task in the target terminal to the test container.
To detect which user instruction caused the system error, the user instructions issued by the user need to be replaced one at a time; when the test case generated by replacing a particular instruction executes successfully, the replaced instruction is the user instruction that caused the error, and the cause of the system error is thereby determined. Based on this analysis process, a plurality of test containers must be constructed so that the plurality of adjusted test cases can be tested.
Each test container simulates the same operating environment, i.e. the operating environment of the target terminal.
S140, generating a plurality of test cases according to the user behavior data, and inputting the test cases into the test containers respectively, wherein the number of the test cases is the same as that of the test containers, and differences exist among the test cases;
after the test container is enabled, a test case required by the test is further required to be generated, wherein the test case refers to the description of a test task performed on a specific software product, and embodies a test scheme, a method, a technology and a strategy. The contents of the test object, the test environment, the input data, the test steps, the expected results, the test scripts and the like are included, and finally, a document is formed.
The test case in the present embodiment is generated from the user behavior data of the target terminal. Specifically, the method comprises the following steps: and carrying out time sequence arrangement on the user instructions in the user behavior data according to the issuing time to generate an instruction time sequence, namely arranging the user instructions in sequence according to the issuing time. After the arrangement is finished, the instruction time sequence is imaged, each user instruction is converted into a graph node, each image node represents one user instruction and is called an instruction node, and the instruction nodes are connected with one another to generate an instruction topological graph used for representing a user instruction set. Each instruction node in the instruction topological graph records a user instruction, and the user instruction in the instruction node is replaced by clicking the instruction node.
The specific alternative mode is as follows: each type of user command has a set of user commands that is the same or similar to the user command, or a set of alternative user commands that are statistically summarized from historical data. And only replacing the instruction of one instruction node in each group of test cases, selecting a replacing instruction from the instruction set corresponding to the instruction node during replacement, and replacing the instruction into the instruction node to complete the instruction replacement of one group of test cases.
In some embodiments, when there are multiple replacement instructions in the replacement instruction set corresponding to one instruction node, the replacement should be performed sequentially. For example, when there are 10 replacement instructions in the instruction set corresponding to the first instruction node, 10 test cases are generated at the first instruction node. And after the test case at the position of the first instruction node is generated, replacing the second instruction node and generating the test case, and so on until all the instruction nodes are replaced.
Each instruction-node replacement produces a corrected topological graph, which represents a series of test instructions; these test instructions are packaged to generate the test case corresponding to that corrected topological graph.
After a plurality of test cases are generated, each test case is correspondingly sent to one test container, and after each test container receives the test cases, user instructions in the test cases are executed through the constructed running environment of the target terminal to generate corresponding test results.
S150, generating an error reporting analysis result aiming at the error reporting information based on the test result of each test case run by the plurality of test containers.
And after the test case in each test container is executed, generating corresponding test results, wherein some of the test results may be results of test case execution failure, and some of the test results may be results of test case execution success.
The test cases that executed successfully are extracted, the corrected topological graph corresponding to each such test case and the user instruction set it represents are read, and that instruction set is compared with the user instruction set represented by the user behavior data to determine the erroneous user instruction that caused the error reporting information in the target terminal. An error reporting analysis result is generated based on the erroneous user instruction, and the erroneous user instruction is recorded in it.
In some embodiments, the run result may be that all of the test cases fail; this indicates that the cause of the error reporting information is not a user instruction but an error in the operating environment of the target terminal.
In some embodiments, the running result of the multiple test cases may be that all the test cases run successfully, which indicates that the cause of the error report information is false trigger, and the target terminal runs without error.
In the above embodiment, the monitoring terminal collects the error reporting information of the access terminals in the distributed system, then constructs, by way of virtual containers, a plurality of test containers with the same operating environment as the target terminal that sent the error reporting information, and collects the user behavior that triggered the error information. After correcting the user behavior in a targeted manner, it converts the behavior into a plurality of test cases and finally runs them in the test containers to simulate the operating process of the target terminal. Because the test cases are corrected and therefore differ from the user instructions of the original user behavior data, these differences can be combined with the run results to determine which erroneous user instruction caused the error information. This reduces the time needed to process the error information, replaces manual troubleshooting, and improves the efficiency of determining the cause of the error.
In some embodiments, after a data cycle is finished, the distributed system needs to determine the monitoring terminals in the system again. Referring to fig. 2, fig. 2 is a schematic flow chart illustrating the determination of the monitoring terminal according to the embodiment.
As shown in fig. 2, before S110, the method includes:
s101, the access terminals execute a randomly generated delay task, and when the delay task of any access terminal is achieved, the access terminal sends a referral request to other access terminals;
and after the last data period is finished, the last pushed monitoring terminal completes the monitoring task. All access terminals in the distributed system randomly generate a delay task. When the delay task of the access terminal is achieved, namely the delay task is cleared, the access terminal sends a referral request to other access terminals in the system, and the referral request aims to enable the other access terminals to agree to referral as monitoring terminals.
S102, after the access terminal sends the referral request, receiving referral replies returned by the other access terminals in response to the referral request;
An access terminal that receives a referral request agrees to the first request it receives, provided it has not yet sent its own referral request, and rejects every request received after replying; if it has already sent its own referral request, it does not reply to any other terminal's request. The reply to a referral request is a referral reply, and an access terminal that receives a referral reply has effectively received an election vote.
S103, counting the number of the referral replies, and when the number of the referral replies is larger than a preset threshold value, electing any access terminal as a monitoring terminal.
When the number of votes an access terminal receives exceeds half of the total number of terminals, it declares itself the monitoring terminal and sends the election result to the other terminals; on receiving the result, the other access terminals stop sending referral requests and revert to ordinary access terminals. This election scheme prevents the whole system from stalling when the current monitoring terminal goes down, and the random rotation of the monitoring role improves its concealment and makes targeted attacks on the monitoring terminal harder.
In this embodiment, the preset threshold is half of the number of access terminals in the distributed system. However, the value of the preset threshold is not limited to this, and in some embodiments, the value of the preset threshold can be set in a user-defined manner according to different application scenarios.
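The election procedure can be summarized in code. The sketch below is an illustrative simulation of the randomized-delay referral election described above, not the patent's implementation: message passing is modeled with in-memory calls, the delays are placeholders, and the threshold of half the number of terminals follows this embodiment.

```python
from __future__ import annotations
import random


class AccessTerminal:
    def __init__(self, tid: int):
        self.tid = tid
        self.delay = random.uniform(0.15, 0.30)  # randomly generated delay task
        self.voted = False        # has this terminal already granted a referral reply?
        self.requested = False    # has this terminal sent its own referral request?

    def handle_referral_request(self, candidate_id: int) -> bool:
        """Agree to the first request received, unless this terminal campaigned itself."""
        if self.requested or self.voted:
            return False
        self.voted = True
        return True


def elect_monitoring_terminal(terminals: list[AccessTerminal]) -> AccessTerminal | None:
    threshold = len(terminals) / 2          # more than half of all terminals
    # The terminal whose delay task completes first starts campaigning first.
    for candidate in sorted(terminals, key=lambda t: t.delay):
        if candidate.voted:                 # it already replied to an earlier candidate
            continue
        candidate.requested = True
        replies = sum(
            peer.handle_referral_request(candidate.tid)
            for peer in terminals if peer is not candidate
        )
        if replies > threshold:
            return candidate                # becomes the monitoring terminal
    return None


# Usage: elect one monitoring terminal out of five access terminals.
winner = elect_monitoring_terminal([AccessTerminal(i) for i in range(5)])
print("monitoring terminal:", winner.tid if winner else "no winner this round")
```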
In some embodiments, the access terminal monitors the error reporting task by using a buried point monitoring method, and then generates corresponding error reporting information according to the error reporting task. Referring to fig. 3, fig. 3 is a schematic flow chart illustrating the generation of error information according to the present embodiment.
As shown in fig. 3, S110 includes:
s111, embedding points in task threads of each access terminal to generate a monitoring tool;
and embedding points in an access terminal accessed into the distributed system to generate a monitoring tool. The listening tool is actually a hook mounted on the access terminal. When the access terminal runs the task thread of the application program, a subprogram (hook) is set on the task thread, and the subprogram is used for monitoring the execution condition of the task thread of the application program. Then, setting the triggering condition of the subprogram as starting when the task thread of the application program is executed in error, and finishing the embedded point monitoring of each access terminal.
S112, when the monitoring tool monitors that any task thread executes the error report, reading a plurality of user instructions in a generating time period of the task thread;
when the monitoring tool arranged in the access terminal monitors that any task thread has an error in the process of executing the task, the monitoring tool is triggered immediately. And further reading the running environment, the environment parameters and the calling interface of the task thread for executing the task. The parameters are used for simulating the operation environment of the target terminal by the later distributed system.
When the monitoring tool is triggered, a user instruction issued by a user when the error reporting task thread is generated needs to be extracted. That is, the task thread is generated based on the user instruction, and when the task thread has an error, the user instruction for issuing the task thread needs to be called. Further, the execution error of the task thread is related to the user instruction for issuing the task thread, and also related to the task instruction other than the user instruction, for example, after the user clicks and starts a certain application program, the user immediately clicks a page function of the application program, and the application program is down. Therefore, it is necessary to collect a plurality of user instructions within the task thread generation period. Specifically, the time of issuing the task thread is taken as a midpoint, and user instructions in 2s before and after the time are collected. However, the acquisition duration of the acquisition time period can be set by self according to the actual needs of the application scene.
S113, writing the plurality of user instructions into a DEL file in order of generation time to generate the error reporting information, and sending the error reporting information to the monitoring terminal.
The collected user instructions are written into a DEL file in order of creation time; a DEL file is a data list file, so each of the user's instructions can be written into it sequentially according to when it was generated. In some embodiments, in addition to the user instructions, information such as the running environment, environment parameters and call interfaces under which the task thread executed its task is also written into the DEL file.
When the DEL file has been written, the error reporting information is obtained, and the access terminal corresponding to the error reporting information sends it to the monitoring terminal.
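A minimal sketch of the embedded-point (hook) side is given below. It assumes a simple in-process instruction recorder and writes the captured instructions to a delimiter-separated file standing in for the DEL file; the 2-second window either side of the failing instruction follows the example in this embodiment, and all names are illustrative.

```python
import csv
import time


class InstructionRecorder:
    """Keeps the recent user instructions as (timestamp, action) pairs."""

    def __init__(self):
        self.history: list[tuple[float, str]] = []

    def record(self, action: str) -> None:
        self.history.append((time.time(), action))

    def window(self, center: float, radius: float = 2.0) -> list[tuple[float, str]]:
        return [(ts, a) for ts, a in self.history if abs(ts - center) <= radius]


def on_task_thread_error(recorder: InstructionRecorder, issue_time: float,
                         out_path: str = "error_report.del") -> str:
    """Hook body: called when a monitored task thread raises an execution error.

    Collects the user instructions within +/- 2 s of the instruction that issued
    the failing task thread and writes them, in generation order, to a
    delimiter-separated file playing the role of the DEL file.
    """
    instructions = sorted(recorder.window(issue_time), key=lambda item: item[0])
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["timestamp", "action"])
        writer.writerows(instructions)
    return out_path   # this file is then sent to the monitoring terminal
```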
In some embodiments, after receiving the error reporting information sent by the target terminal, the monitoring terminal needs to start and configure the test container. Referring to fig. 4, fig. 4 is a schematic flow chart of the construction of the test container according to the present embodiment.
S130 as shown in fig. 4 includes:
s131, collecting the operation environment of the target terminal, wherein the operation environment comprises the system type and the system configuration parameters of the target terminal;
and after the monitoring terminal obtains the error report information, reading the running environment of the target terminal, wherein the running environment comprises the system type of the target terminal, system configuration parameters, an API (application programming interface) interface and an SDK (software development kit) interface of the task thread.
The monitoring terminal collects the operating environment of the target terminal by extracting it directly from the error reporting information sent by the target terminal. The error reporting information is a DEL file that records information such as the running environment, environment parameters and call interfaces, where the call interfaces include the API interface and the SDK interface of the task thread.
In some embodiments, the monitoring terminal can also collect the running environment of the target terminal by calling a task log of the target terminal. In the execution running state, the target terminal writes various terminal parameters into the task log, wherein the terminal parameters include the running environment of the target terminal, namely the system type of the target terminal, the system configuration parameters, the API interface and the SDK interface of the task thread. After the monitoring terminal calls the task log of the target terminal, the running environment of the target terminal can be acquired through the corresponding field.
S132, calling a corresponding system file according to the system type, configuring the system file according to the system configuration parameters, and generating the image file;
and generating an environment image file of the target terminal according to the system type (application program type) and the system configuration parameters of the target terminal. The environment image file makes a specific series of files into a single file according to a container executable format so as to facilitate the installation of a container simulating the environment.
The system type corresponds to the type of the application program corresponding to the wrong task thread executed by the target terminal. Therefore, when the system type is obtained, the system file (installation package) of the application program can be obtained. After the installation package of the program is obtained, various parameters in the installation package need to be configured, so that the operation environment of the target terminal can be adapted to when the installation package is operated. And after the parameter configuration of the installation package is finished, performing format conversion and compression on the installation package to generate an image file.
S133, activating a plurality of containers according to the image file, sending the image file to the plurality of containers for test environment configuration, and constructing the plurality of test containers.
After the environment image file is generated, a running instance for executing the environment file needs to be started, and the running instance is a test container. The test container will share the operating system/kernel of the host on which it is located. Namely, the test container may share the operating system/kernel of the monitoring terminal, and the test container may simulate the operating environment of the target terminal when executing the user instruction only by sending the system type and the system configuration parameters of the execution task in the target terminal to the test container. It should be noted that a container in a blank state is pre-constructed in the monitoring terminal, after the image file is generated, the monitoring terminal activates and starts the container, after the blank container starts to run, the image file is sent to the blank container for program installation, and the container for installing the image file becomes a test container.
Compared with an independent terminal, a test container is more lightweight; using containers as the test units reduces the cost of testing and is the basis on which this method is put into practice.
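The patent does not name a container runtime; the sketch below uses the Docker SDK for Python as one plausible realization. The target terminal's system type selects a base image and its system configuration parameters are injected as environment variables, and one container is started per test case. The image names and the mapping are assumptions; a real implementation would load the environment image file generated from the target terminal instead.

```python
import docker  # Docker SDK for Python, used here only as one possible runtime


def build_test_containers(system_type: str, config_params: dict, count: int):
    """Start `count` containers that all simulate the target terminal's environment."""
    base_images = {            # illustrative mapping, not from the patent
        "linux-x86_64": "ubuntu:20.04",
        "android": "openjdk:11-slim",
    }
    image = base_images.get(system_type, "ubuntu:20.04")
    client = docker.from_env()
    containers = []
    for i in range(count):
        containers.append(client.containers.run(
            image,
            command="sleep infinity",          # keep the container alive for the test run
            environment={k: str(v) for k, v in config_params.items()},
            name=f"test-container-{i}",
            detach=True,
        ))
    return containers
```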
In some embodiments, after the test containers are generated, the test cases used for testing need to be generated correspondingly. Each test case is converted from a user-instruction sequence in which one instruction has been replaced, and the erroneous user instruction that caused the error reporting information is found by a process of elimination. Referring to fig. 5, fig. 5 is a schematic flow chart illustrating the generation and testing of the test cases according to the present embodiment.
As shown in fig. 5, S140 includes:
S141, arranging user instructions in the user behavior data according to a time sequence to generate an instruction time sequence;
and carrying out time sequence arrangement on the user instructions in the user behavior data according to the issuing time to generate an instruction time sequence, namely arranging the user instructions in sequence according to the issuing time.
S142, generating an instruction topological graph according to the instruction timing sequence, wherein the instruction topological graph comprises a plurality of instruction nodes, and each instruction node corresponds to one user instruction;
Once the arrangement is finished, the instruction time sequence is turned into a graph: each user instruction is converted into a graph node, called an instruction node, and the instruction nodes are connected to one another to generate an instruction topological graph representing the user instruction set.
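The corresponding data structure is easy to sketch: the time-ordered instruction sequence becomes a chain of instruction nodes, each holding one user instruction. This is an illustrative representation only, not the patent's data format.

```python
from dataclasses import dataclass, field


@dataclass
class InstructionNode:
    index: int                 # position in the time-ordered sequence
    instruction: str           # the user instruction recorded at this node
    next_nodes: list = field(default_factory=list)   # edges to the following node(s)


def build_instruction_topology(instruction_sequence: list[str]) -> list[InstructionNode]:
    """Turn a time-ordered list of user instructions into a chain-shaped topology."""
    nodes = [InstructionNode(i, ins) for i, ins in enumerate(instruction_sequence)]
    for prev, nxt in zip(nodes, nodes[1:]):
        prev.next_nodes.append(nxt)       # connect the nodes in issuing order
    return nodes
```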
S143, replacing any one instruction node in the instruction topological graph according to a preset replacement strategy to generate a plurality of correction topological graphs;
Each instruction node in the instruction topological graph records one user instruction, and the user instruction in a node is replaced by selecting that instruction node.
The replacement works as follows. Each type of user instruction has a set of identical or similar user instructions, or a set of alternative user instructions statistically summarized from historical data. For each test case, the instruction of only one instruction node is replaced: a replacement instruction is selected from the instruction set corresponding to that node and substituted into the node, which completes the instruction replacement for that test case.
In some embodiments, when there are multiple replacement instructions in the replacement instruction set corresponding to one instruction node, the replacement should be performed sequentially. For example, when there are 10 replacement instructions in the instruction set corresponding to the first instruction node, 10 test cases are generated at the first instruction node. And after the test case at the position of the first instruction node is generated, replacing the second instruction node and generating the test case, and so on until all the instruction nodes are replaced.
It should be noted that when the number of corrected topological graphs generated by replacement exceeds the number of test containers that have been started, additional test containers need to be created so that the number of test containers matches the number of corrected topological graphs. The process of generating test cases is therefore dynamic, and steps S131 to S133 can be performed before or after this step.
And S144, generating a plurality of test cases according to the instruction logics of the plurality of corrected topological graphs, and inputting the plurality of test cases into the plurality of test containers respectively.
Each instruction-node replacement produces a corrected topological graph, which represents a series of test instructions; these test instructions are packaged to generate the test case corresponding to that corrected topological graph.
After a plurality of test cases are generated, each test case is correspondingly sent to one test container, and after each test container receives the test cases, user instructions in the test cases are executed through the constructed running environment of the target terminal to generate corresponding test results.
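Continuing the sketch above, the replacement strategy can be expressed directly on the flattened instruction sequence: for each node, substitute each candidate from that node's replacement set in turn while keeping every other instruction unchanged, and package each variant as one test case. The replacement sets and instruction names are placeholders.

```python
from itertools import count


def generate_test_cases(instruction_sequence: list[str],
                        replacement_sets: dict[str, list[str]]) -> list[dict]:
    """One test case per (node, replacement) pair; only one node changes per case."""
    case_id = count(1)
    test_cases = []
    for idx, original in enumerate(instruction_sequence):
        for substitute in replacement_sets.get(original, []):
            corrected = list(instruction_sequence)
            corrected[idx] = substitute
            test_cases.append({
                "case_id": next(case_id),
                "replaced_index": idx,       # which instruction node was replaced
                "replaced_with": substitute,
                "instructions": corrected,   # the corrected topological graph, flattened
            })
    return test_cases


# Usage: two replacement candidates at one node yield two test cases at that node.
cases = generate_test_cases(
    ["open_app", "click_page_function", "submit_form"],
    {"click_page_function": ["wait_then_click_page_function", "click_home"]},
)
```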
In some embodiments, after the test containers complete the running of the test cases, an error analysis report needs to be generated according to the test results of the test containers. Referring to fig. 6, fig. 6 is a schematic flow chart illustrating the generation of the error analysis result according to the present embodiment.
As shown in fig. 6, S150 includes:
s151, collecting test results output by the plurality of test containers;
and after the test case in each test container is executed, generating corresponding test results, wherein some of the test results may be results of test case execution failure, and some of the test results may be results of test case execution success.
S152, selecting a test result which runs correctly in the test results as a target test result;
and extracting the test cases in which the test cases are successfully executed, and reading the corresponding correction topological graph. And defining the test result of the test case which is successfully executed as the target test result.
S153, comparing and analyzing the user instruction represented by the correction topological graph corresponding to the target test result with the user behavior data, and determining an error instruction in the user behavior data;
and extracting the test cases in which the test cases are successfully executed, reading the correction topological graph corresponding to the test cases and the user instruction set corresponding to the correction topological graph, and comparing the user instruction set with the user instruction set represented by the user behavior data to determine an error user instruction which causes error reporting information in the target terminal. And generating an error reporting analysis result based on the error user instruction, wherein the error instruction is recorded in the error reporting analysis result.
And S154, generating an error reporting analysis result aiming at the error reporting information according to the error command.
After the error instruction is obtained through analysis, it is stored in a blank analysis document to generate the error reporting analysis result. In some embodiments, the error reporting analysis result is presented visually according to the application scenario: the instruction topological graph is called up, the instruction node of the error instruction is located in it, and that node is displayed differently so that its color or shape differs from the other instruction nodes. For example, when normal instruction nodes are drawn in black, the node corresponding to the error instruction is rendered in red; or, when normal instruction nodes are quadrilaterals, the node corresponding to the error instruction is drawn with a different shape. After the node corresponding to the error instruction is highlighted, connecting lines extend horizontally from it, an instruction node is generated at the end of each line, and the correct substitute instruction corresponding to the error instruction is displayed at that extended node. The graphical display makes the error reporting analysis result more intuitive and also prompts the user to correct the erroneous instruction.
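One way to realize such a differentiated display is to emit a Graphviz DOT description in which the error node gets a different color and its substitute instruction hangs off a dashed side edge. The styling choices below are illustrative, not prescribed by the patent.

```python
def topology_to_dot(instructions: list[str], error_index: int, substitute: str) -> str:
    """Render the instruction topology as Graphviz DOT, highlighting the error node."""
    lines = ["digraph instruction_topology {", "  rankdir=LR;"]
    for i, ins in enumerate(instructions):
        style = 'color=red, style=filled, fillcolor="#ffcccc"' if i == error_index else "color=black"
        lines.append(f'  n{i} [label="{ins}", shape=box, {style}];')
    for i in range(len(instructions) - 1):
        lines.append(f"  n{i} -> n{i + 1};")
    # Side node showing the correct substitute instruction for the error node
    lines.append(f'  fix [label="{substitute}", shape=box, color=green];')
    lines.append(f"  n{error_index} -> fix [style=dashed];")
    lines.append("}")
    return "\n".join(lines)
```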
The test results are not limited to the case above. In some embodiments, the run result may be that all of the test cases fail, which indicates that the error reporting information was caused not by a user instruction but by an error in the operating environment of the target terminal.
In some embodiments, the running result of the multiple test cases may be that all the test cases run successfully, which indicates that the cause of the error report information is false trigger, and the target terminal runs without error.
In either of these two cases, the corresponding cause of the error is written into a blank analysis document to generate the error reporting analysis result.
In some embodiments, after the error analysis result is generated, the error analysis result needs to be sent to the target terminal. Referring to fig. 7, fig. 7 is a flowchart illustrating a process of sending an error analysis result to a target terminal according to the present embodiment.
As shown in fig. 7, after S150, the method includes:
S161, sending the error reporting analysis result to the target terminal;
After generating the error reporting analysis result, the monitoring terminal sends it to the target terminal. Specifically, the monitoring terminal performs addressing according to the IP address of the target terminal and, once a link to the target terminal is established, sends the error reporting analysis result to the target terminal over that link.
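The application does not specify a transport protocol for this link; the following sketch assumes plain HTTP via the requests library, with a hypothetical /error-analysis endpoint on the target terminal:

```python
import requests

def send_analysis_result(target_ip, analysis_result, port=8080):
    """Deliver the error reporting analysis result to the target terminal.

    The transport is an assumption: the application only states that the
    monitoring terminal addresses the target by IP and sends the result over
    the established link, so plain HTTP and the /error-analysis path are
    illustrative choices.
    """
    url = f"http://{target_ip}:{port}/error-analysis"
    response = requests.post(url, json=analysis_result, timeout=5)
    response.raise_for_status()
    return response.status_code
```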
And S162, the target terminal prompts the user about the error instruction according to the error reporting analysis result, and stores the error reporting information and the error reporting analysis result in an associated manner.
After receiving the error reporting analysis result, the target terminal prompts the user according to the error instruction recorded in the result, for example through a popup window that informs the user of the error instruction. After prompting the user, the target terminal stores the error reporting information and the error reporting analysis result in an associated manner, that is, as a key-value pair, thereby building a local error reporting database. When the same error reporting information appears again, the target terminal first searches the local error reporting database: if no match is found, the error reporting information is uploaded to the monitoring terminal; if the corresponding error reporting analysis result is found locally, the user is prompted directly without sending the error reporting information to the monitoring terminal.
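A minimal sketch of this lookup-before-upload behaviour, assuming the local error reporting database is an SQLite key-value table; the schema and function names are illustrative rather than taken from this application:

```python
import sqlite3

class LocalErrorDatabase:
    """Key-value store mapping error reporting information to its analysis result."""

    def __init__(self, path="local_errors.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS error_reports "
            "(error_info TEXT PRIMARY KEY, analysis_result TEXT)"
        )

    def store(self, error_info, analysis_result):
        self.conn.execute(
            "INSERT OR REPLACE INTO error_reports VALUES (?, ?)",
            (error_info, analysis_result),
        )
        self.conn.commit()

    def lookup(self, error_info):
        row = self.conn.execute(
            "SELECT analysis_result FROM error_reports WHERE error_info = ?",
            (error_info,),
        ).fetchone()
        return row[0] if row else None


def handle_error(db, error_info, upload_to_monitor, prompt_user):
    """Prompt from the local database when possible, otherwise upload."""
    cached = db.lookup(error_info)
    if cached is not None:
        prompt_user(cached)            # hit: no round trip to the monitoring terminal
    else:
        upload_to_monitor(error_info)  # miss: let the monitoring terminal analyse it
```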
By building a local error reporting database at the target terminal from the error reporting analysis results of the monitoring terminal, the computing power of the target terminal is saved while the computing-power advantage of the monitoring terminal is fully exploited, so the two terminals complement each other and the efficiency of error processing is further improved.
In order to solve the above technical problem, an embodiment of the present invention further provides an error reporting analysis apparatus. Referring to fig. 8, fig. 8 is a schematic diagram of a basic structure of the error reporting analysis apparatus according to the present embodiment.
As shown in fig. 8, an error report analysis apparatus includes: a listening module 110, an extraction module 120, an acquisition module 130, a processing module 140, and an execution module 150. The monitoring module 110 is configured to monitor error reporting information sent by any access terminal according to a preset monitoring terminal; the extracting module 120 is configured to extract user behavior data of a target terminal corresponding to the error reporting information, where the target terminal is an access terminal that sends the error reporting information, and the user behavior data is a plurality of user instructions generated by user operations when the error reporting information is generated; the acquisition module 130 is configured to acquire a terminal operating environment of the target terminal to generate an environment image file, and construct a plurality of test containers according to the image file, where the plurality of test containers are all used to simulate the operating environment of the target terminal; the processing module 140 is configured to generate a plurality of test cases according to the user behavior data, and input the plurality of test cases into the plurality of test containers respectively, where the number of the test cases is the same as the number of the test containers, and there is a difference between the test cases; the execution module 150 is configured to generate an error reporting analysis result for the error reporting information based on the test result of each test case run by the plurality of test containers.
The error reporting analysis apparatus collects, through the monitoring terminal, the error reporting information of the access terminals in the distributed system; it then constructs, by means of virtual containers, a plurality of test containers whose running environment is the same as that of the target terminal that sent the error reporting information, and collects the user behavior that triggered the error reporting information. After the user behavior is corrected in a targeted way, it is converted into a plurality of test cases, and the test cases are finally run in the test containers to simulate the running process of the target terminal. Because the test cases are corrected and therefore differ from the user instructions in the original user behavior data, these differences can be combined with the running results to determine which user instruction caused the error reporting information. This shortens the time needed to process the error reporting information, replaces manual troubleshooting, and improves the efficiency of locating the cause of the error reporting information.
In some embodiments, the error reporting analysis apparatus further includes:
the first generation submodule is used for the access terminals to execute randomly generated delay tasks, and when the delay task of any access terminal is reached, that access terminal sends a referral request to the other access terminals;
the first processing submodule is used for receiving a referral reply replied by other access terminals based on the referral request after the referral request is sent by any access terminal;
and the first execution submodule is used for counting the number of the referral replies, and when the number of the referral replies is greater than a preset threshold value, any access terminal is elected as a monitoring terminal.
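The three submodules above describe a randomized-delay election; a minimal sketch under the assumption that every access terminal can call its peers directly and that the preset threshold is a simple majority (class and method names are illustrative):

```python
import random

class AccessTerminal:
    """Sketch of the referral-based election described by the submodules above."""

    def __init__(self, terminal_id, peers):
        self.terminal_id = terminal_id
        self.peers = peers                     # the other access terminals
        self.delay = random.uniform(1.0, 5.0)  # randomly generated delay task (seconds)

    def on_delay_reached(self):
        # First generation submodule: the delay task is reached, send referral requests.
        replies = [peer.handle_referral_request(self.terminal_id) for peer in self.peers]
        # First processing submodule: receive the referral replies.
        reply_count = sum(1 for accepted in replies if accepted)
        # First execution submodule: compare against the preset threshold
        # (assumed here to be a simple majority of peers).
        threshold = len(self.peers) // 2
        return "monitoring_terminal" if reply_count > threshold else "access_terminal"

    def handle_referral_request(self, candidate_id):
        # Simplified: a terminal accepts the first referral request it sees.
        return True
```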
In some embodiments, the error analysis apparatus further comprises:
the first monitoring submodule is used for embedding points in the task thread of each access terminal to generate a monitoring tool;
the second processing submodule is used for reading a plurality of user instructions within the generation time period of any task thread when the monitoring tool monitors that the task thread reports an error during execution;
and the second execution submodule is used for writing the task instructions into a DEL file according to a generation time sequence to generate the error reporting information and sending the error reporting information to the monitoring terminal.
In some embodiments, the error analysis apparatus further comprises:
the first acquisition submodule is used for acquiring the operating environment of the target terminal, wherein the operating environment comprises the system type and the system configuration parameters of the target terminal;
the third processing submodule is used for calling a corresponding system file according to the system type and generating the mirror image file after configuring the system file according to the system configuration parameters;
and the third execution submodule is used for activating a plurality of containers according to the image file, sending the image file to the plurality of containers for test environment configuration, and then constructing the plurality of test containers.
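The application does not name a container technology; the following sketch assumes Docker and its Python SDK, with the mapping from system type and configuration parameters to an image tag and environment variables made up for illustration:

```python
import docker

def build_test_containers(system_type, config_params, count):
    """Create `count` test containers that simulate the target terminal's
    running environment.

    system_type and config_params correspond to the operating environment
    collected from the target terminal; turning them into a Docker base image
    and environment variables is an assumption made for this sketch.
    """
    client = docker.from_env()
    base_image = f"{system_type}:latest"  # e.g. "ubuntu:latest" for a Linux target
    containers = []
    for index in range(count):
        container = client.containers.run(
            base_image,
            name=f"test-container-{index}",
            environment=config_params,    # system configuration parameters as env vars
            detach=True,
            tty=True,
        )
        containers.append(container)
    return containers
```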
In some embodiments, the error analysis apparatus further comprises:
the first arrangement submodule is used for arranging the user instructions in the user behavior data according to a time sequence to generate an instruction time sequence;
the second generation submodule is used for generating an instruction topological graph according to the instruction time sequence, wherein the instruction topological graph comprises a plurality of instruction nodes, and each instruction node corresponds to one user instruction;
the fourth processing submodule is used for replacing any one instruction node in the instruction topological graph according to a preset replacement strategy to generate a plurality of correction topological graphs;
and the fourth execution submodule is used for generating a plurality of test cases according to the instruction logic of the plurality of corrected topological graphs and inputting the plurality of test cases into the plurality of test containers respectively.
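A minimal sketch of the single-node replacement that turns one instruction timing sequence into several corrected test cases; the replacement table stands in for the preset replacement strategy, which the application does not spell out, and the instruction names are hypothetical:

```python
def generate_corrected_sequences(instruction_sequence, replacement_strategy):
    """Produce one corrected instruction sequence per replaceable node.

    instruction_sequence: user instructions ordered by generation time
    (the instruction timing sequence).
    replacement_strategy: dict mapping an instruction to its candidate
    substitute -- an assumed stand-in for the preset replacement strategy.
    """
    corrected_sequences = []
    for position, instruction in enumerate(instruction_sequence):
        substitute = replacement_strategy.get(instruction)
        if substitute is None:
            continue  # no replacement defined for this node
        corrected = list(instruction_sequence)
        corrected[position] = substitute
        corrected_sequences.append(corrected)
    return corrected_sequences


# Hypothetical usage: each corrected sequence becomes one test case, fed to one container.
test_cases = generate_corrected_sequences(
    ["login", "open_order", "pay_v1", "confirm"],
    {"pay_v1": "pay_v2", "confirm": "confirm_retry"},
)
```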
In some embodiments, the error reporting analysis apparatus further includes:
the second acquisition submodule is used for acquiring the test results output by the plurality of test containers;
the first selection submodule is used for selecting a test result which runs correctly in the test results as a target test result;
the fifth processing submodule is used for comparing and analyzing the user instruction represented by the correction topological graph corresponding to the target test result with the user behavior data to determine an error instruction in the user behavior data;
and the fifth execution submodule is used for generating an error reporting analysis result aiming at the error reporting information according to the error instruction.
In some embodiments, the error reporting analysis apparatus further includes:
the sixth processing submodule is used for sending the error analysis result to the target terminal;
and the sixth execution submodule is used for prompting an error instruction of a user by the target terminal according to the error reporting analysis result and carrying out associated storage on the error reporting information and the error reporting analysis result.
In order to solve the above technical problem, an embodiment of the present invention further provides a computer device. Referring to fig. 9, fig. 9 is a block diagram of a basic structure of a computer device according to the present embodiment.
As shown in fig. 9, the figure schematically illustrates the internal structure of the computer device. The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium of the computer device stores an operating system, a database and computer readable instructions; the database can store control information sequences, and the computer readable instructions, when executed by the processor, can cause the processor to implement an error reporting analysis method. The processor of the computer device provides the computing and control capabilities that support the operation of the whole computer device. The memory of the computer device may store computer readable instructions that, when executed by the processor, cause the processor to perform the error reporting analysis method. The network interface of the computer device is used for connecting and communicating with a terminal. Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In this embodiment, the processor is configured to execute specific functions of the monitoring module 110, the extracting module 120, the collecting module 130, the processing module 140, and the executing module 150 in fig. 8, and the memory stores program codes and various data required for executing the modules. The network interface is used for data transmission to and from a user terminal or a server. The memory in this embodiment stores program codes and data necessary for executing all the sub-modules in the error reporting analysis device, and the server can call the program codes and data of the server to execute the functions of all the sub-modules.
The computer device collects, through the monitoring terminal, the error reporting information of the access terminals in the distributed system; it then constructs, by means of virtual containers, a plurality of test containers whose running environment is the same as that of the target terminal that sent the error reporting information, and collects the user behavior that triggered the error reporting information. After the user behavior is corrected in a targeted way, it is converted into a plurality of test cases, and the test cases are finally run in the test containers to simulate the running process of the target terminal. Because the test cases are corrected and therefore differ from the user instructions in the original user behavior data, these differences can be combined with the running results to determine which user instruction caused the error reporting information. This shortens the time needed to process the error reporting information, replaces manual troubleshooting, and improves the efficiency of locating the cause of the error reporting information.
The present invention also provides a computer storage medium having computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of any of the above-described error-reporting analysis methods.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
Those of skill in the art will appreciate that the various operations, methods, steps in the processes, acts, or solutions discussed in this application can be interchanged, modified, combined, or eliminated. Further, other steps, measures, or schemes in various operations, methods, or flows that have been discussed in this application can be alternated, altered, rearranged, broken down, combined, or deleted. Further, steps, measures, schemes in the prior art having various operations, methods, procedures disclosed in the present application may also be alternated, modified, rearranged, decomposed, combined, or deleted.
The foregoing describes only some embodiments of the present application. It should be noted that, for those skilled in the art, several modifications and refinements can be made without departing from the principles of the present application, and these modifications and refinements should also fall within the protection scope of the present application.

Claims (10)

1. An error reporting analysis method, comprising:
monitoring error reporting information sent by any access terminal according to a preset monitoring terminal;
extracting user behavior data of a target terminal corresponding to the error reporting information, wherein the target terminal is an access terminal for sending the error reporting information, and the user behavior data is a plurality of user instructions generated by user operation during generation of the error reporting information;
acquiring a terminal operating environment of the target terminal to generate an environment image file, and constructing a plurality of test containers according to the image file, wherein the plurality of test containers are all used for simulating the operating environment of the target terminal;
generating a plurality of test cases according to the user behavior data, and inputting the test cases into the test containers respectively, wherein the number of the test cases is the same as that of the test containers, and differences exist among the test cases;
and generating an error reporting analysis result aiming at the error reporting information based on the test result of each test case run by the plurality of test containers.
2. The error reporting analysis method of claim 1, wherein before the monitoring the error reporting information sent by any access terminal according to the preset monitoring terminal, the method comprises:
the access terminal executes a randomly generated delay task, and when the delay task of any access terminal is achieved, the access terminal sends a referral request to other access terminals;
after the arbitrary access terminal sends the referral request, the referral reply based on the referral request reply of other access terminals is received;
and counting the number of the referral replies, and when the number of the referral replies is greater than a preset threshold value, electing any access terminal as a monitoring terminal.
3. The method of claim 1, wherein the monitoring error information sent by any access terminal according to a preset monitoring terminal comprises:
embedding points in a task thread of each access terminal to generate a monitoring tool;
when the monitoring tool monitors that any task thread executes an error report, reading a plurality of user instructions in a generating time period of the task thread;
and writing the plurality of task instructions into a DEL file according to a generation time sequence to generate the error reporting information, and sending the error reporting information to the monitoring terminal.
4. The error reporting analysis method of claim 1, wherein the collecting the terminal operating environment of the target terminal generates an environment image file, and constructing a plurality of test containers according to the image file comprises:
acquiring an operating environment of the target terminal, wherein the operating environment comprises a system type and system configuration parameters of the target terminal;
calling a corresponding system file according to the system type, and configuring the system file according to the system configuration parameters to generate the image file;
and activating a plurality of containers according to the image file, sending the image file to the plurality of containers for test environment configuration, and constructing the plurality of test containers.
5. The error reporting analysis method of claim 1, wherein the generating a plurality of test cases according to the user behavior data and inputting the plurality of test cases into the plurality of test containers respectively comprises:
arranging user instructions in the user behavior data according to a time sequence to generate an instruction time sequence;
generating an instruction topological graph according to the instruction timing sequence, wherein the instruction topological graph comprises a plurality of instruction nodes, and each instruction node corresponds to one user instruction;
replacing any one instruction node in the instruction topological graph according to a preset replacement strategy to generate a plurality of correction topological graphs;
and generating a plurality of test cases according to the instruction logics of the plurality of corrected topological graphs, and respectively inputting the plurality of test cases into the plurality of test containers.
6. The error analysis method of claim 5, wherein the generating an error analysis result for the error information based on the test result of each test case run by the plurality of test containers comprises:
collecting test results output by the plurality of test containers;
selecting a test result which runs correctly in the test results as a target test result;
comparing and analyzing the user instruction represented by the correction topological graph corresponding to the target test result with the user behavior data, and determining an error instruction in the user behavior data;
and generating an error reporting analysis result aiming at the error reporting information according to the error instruction.
7. The method according to claim 1, wherein after the generating an error reporting analysis result for the error reporting information based on the test result of each test case run by the plurality of test containers, the method further comprises:
sending the error reporting analysis result to the target terminal;
and the target terminal prompts an error instruction of a user according to the error reporting analysis result and stores the error reporting information and the error reporting analysis result in a correlation manner.
8. An error reporting analysis apparatus, comprising:
the monitoring module is used for monitoring error reporting information sent by any access terminal according to a preset monitoring terminal;
the extraction module is used for extracting user behavior data of a target terminal corresponding to the error reporting information, wherein the target terminal is an access terminal for sending the error reporting information, and the user behavior data is a plurality of user instructions generated by user operation during generation of the error reporting information;
the acquisition module is used for acquiring a terminal operating environment of the target terminal to generate an environment image file and constructing a plurality of test containers according to the image file, wherein the plurality of test containers are all used for simulating the operating environment of the target terminal;
the processing module is used for generating a plurality of test cases according to the user behavior data and inputting the test cases into the test containers respectively, wherein the number of the test cases is the same as that of the test containers, and differences exist among the test cases;
and the execution module is used for generating an error reporting analysis result aiming at the error reporting information based on the test result of each test case run by the plurality of test containers.
9. A computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions which, when executed by the processor, cause the processor to carry out the steps of the error analysis method according to any one of claims 1 to 7.
10. A computer storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the error reporting analysis method of any one of claims 1 to 7.
CN202111555261.6A 2021-12-17 2021-12-17 Error reporting analysis method and device, computer equipment and storage medium Pending CN114168429A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111555261.6A CN114168429A (en) 2021-12-17 2021-12-17 Error reporting analysis method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111555261.6A CN114168429A (en) 2021-12-17 2021-12-17 Error reporting analysis method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114168429A true CN114168429A (en) 2022-03-11

Family

ID=80487387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111555261.6A Pending CN114168429A (en) 2021-12-17 2021-12-17 Error reporting analysis method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114168429A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765026A (en) * 2019-10-31 2020-02-07 北京东软望海科技有限公司 Automatic testing method and device, storage medium and equipment
CN117290458A (en) * 2023-11-27 2023-12-26 潍坊威龙电子商务科技有限公司 Spatial database engine system, method, computer device and storage medium
CN117290458B (en) * 2023-11-27 2024-03-19 潍坊威龙电子商务科技有限公司 Spatial database engine system, method, computer device and storage medium

Similar Documents

Publication Publication Date Title
US8516499B2 (en) Assistance in performing action responsive to detected event
CN111143163B (en) Data monitoring method, device, computer equipment and storage medium
CN107562556B (en) Failure recovery method, recovery device and storage medium
CN114168429A (en) Error reporting analysis method and device, computer equipment and storage medium
US8225142B2 (en) Method and system for tracepoint-based fault diagnosis and recovery
CN111897724B (en) Automatic testing method and device suitable for cloud platform
US10795793B1 (en) Method and system for simulating system failures using domain-specific language constructs
US20210286614A1 (en) Causality determination of upgrade regressions via comparisons of telemetry data
CN110471945B (en) Active data processing method, system, computer equipment and storage medium
US20180143897A1 (en) Determining idle testing periods
CN113778879B (en) Interface fuzzy test method and device
CN112650688A (en) Automated regression testing method, associated device and computer program product
US11169910B2 (en) Probabilistic software testing via dynamic graphs
CN111090593A (en) Method, device, electronic equipment and storage medium for determining crash attribution
CN112559525B (en) Data checking system, method, device and server
CN112181784A (en) Code fault analysis method and system based on bytecode injection
CN111130882A (en) Monitoring system and method of network equipment
CN115982049A (en) Abnormity detection method and device in performance test and computer equipment
CN115525392A (en) Container monitoring method and device, electronic equipment and storage medium
CN115454420A (en) Artificial intelligence algorithm model deployment system, method, equipment and storage medium
Buga et al. Towards modeling monitoring of smart traffic services in a large-scale distributed system
CN116089446A (en) Optimization control method and device for structured query statement
CN109669867B (en) Test apparatus, automated test method, and computer-readable storage medium
CN112131180A (en) Data reporting method and device and storage medium
Gunasekaran et al. Correlating log messages for system diagnostics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination