CN116225956A - Automated testing method, apparatus, computer device and storage medium - Google Patents


Info

Publication number
CN116225956A
CN116225956A (application CN202310286191.1A)
Authority
CN
China
Prior art keywords
text
test
acquiring
word
word segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310286191.1A
Other languages
Chinese (zh)
Inventor
何佳燚
陈维婉
王晓力
关杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority: CN202310286191.1A
Publication: CN116225956A
Status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

Abstract

The application relates to an automated testing method, apparatus, computer device and storage medium in the field of computer technology, and may be used in financial technology or other related fields. The method comprises the following steps: displaying an automated test interface; acquiring test text information entered in the interface for a service system under test; obtaining at least one test operation text from the test text information, together with the test object text corresponding to each test operation text; acquiring a plurality of pieces of operation instruction information bound in advance to the test operation text, and acquiring the position, in a display page of the service system, of the test operation object represented by the test object text; and executing each piece of operation instruction information at that operation object position, acquiring the operation result of each piece, and displaying the results in the automated test interface. The method improves the efficiency of automated testing of the service system.

Description

Automated testing method, apparatus, computer device and storage medium
Technical Field
The present application relates to the field of computer technology, and in particular, to an automated testing method, apparatus, computer device, storage medium, and computer program product.
Background
With the development of computer technology, automated testing techniques for service systems have emerged: a user writes an automated test script in advance, deploys it in the test system, and runs it to carry out an automated test flow against the service system.
In the prior art, the automated testing of a service system generally requires invoking automated test scripts according to the service-flow steps of the service system. Testers must therefore understand those service-flow steps in depth before they can carry out the automated test process accurately, so few testers meet the testing requirements and the efficiency of automated testing of the service system is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an automated testing method, apparatus, computer device, computer-readable storage medium, and computer program product that can improve the efficiency of automated testing.
In a first aspect, the present application provides an automated testing method. The method comprises the following steps:
Displaying an automatic test interface, acquiring test text information input in the automatic test interface for a service system to be tested, acquiring at least one test operation text according to the test text information, and acquiring a test object text corresponding to the test operation text;
acquiring a plurality of operation instruction information which is bound with the test operation text in advance, and acquiring the corresponding operation object position of the test operation object represented by the test object text in a display page of the service system;
executing each piece of operation instruction information at the operation object position in the display page of the service system, acquiring an operation result of each piece of operation instruction information, and displaying the operation result in the automatic test interface.
In one embodiment, obtaining at least one test operation text from the test text information, and the test object text corresponding to the test operation text, includes: obtaining a plurality of target text tokens corresponding to the test text information, together with the token type of each target text token and the association relations between the target text tokens; obtaining, among the target text tokens, a first text token whose token type is verb, and taking the first text token as the test operation text when it belongs to a preset operation-text lexicon, the lexicon pre-storing the text tokens corresponding to a plurality of test operations; and obtaining, according to the association relations, a second text token associated with the test operation text and taking it as the test object text corresponding to the test operation text, the second text token being a target text token whose token type is noun.
In one embodiment, obtaining the plurality of target text tokens corresponding to the test text information includes: obtaining a plurality of first initial text tokens corresponding to the test text information through a forward maximum matching algorithm, and a plurality of second initial text tokens through a reverse maximum matching algorithm; and obtaining the target text tokens from either the first initial text tokens or the second initial text tokens.
In one embodiment, obtaining the target text tokens from the first or second initial text tokens includes: obtaining the first matching-iteration count of the first initial text tokens and the second matching-iteration count of the second initial text tokens; when the two counts differ, taking the initial text tokens with the smaller matching-iteration count as the target text tokens; when the counts are the same, obtaining the first token count of the first initial text tokens and the second token count of the second initial text tokens; when the token counts differ, taking the initial text tokens with the smaller token count as the target text tokens; and when the token counts are also the same, taking the second initial text tokens as the target text tokens.
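The selection rules above can be sketched as a small helper (an illustrative Python sketch; the function and argument names are assumptions, and the iteration counts would be produced by the matching algorithms themselves):

```python
def choose_segmentation(fmm_tokens, bmm_tokens, fmm_iters, bmm_iters):
    """Select between forward (FMM) and reverse (BMM) maximum-matching
    results using the embodiment's three rules:
      1. the result with fewer matching iterations wins;
      2. on a tie, the result with fewer tokens wins;
      3. on a further tie, prefer the reverse (BMM) result.
    """
    if fmm_iters != bmm_iters:
        return fmm_tokens if fmm_iters < bmm_iters else bmm_tokens
    if len(fmm_tokens) != len(bmm_tokens):
        return fmm_tokens if len(fmm_tokens) < len(bmm_tokens) else bmm_tokens
    return bmm_tokens
```

Preferring the reverse result on a full tie reflects the common observation that reverse maximum matching tends to segment Chinese text more accurately.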
In one embodiment, before obtaining the operation object position in the display page of the service system, the method further includes: acquiring the text content corresponding to each text image contained in a display page of the service system, together with the text position of the text image in the display page; acquiring each operation-object region contained in the display page together with its region position; and constructing a correspondence between text content and region position from the text positions and the region positions. Obtaining the position, in the display page, of the test operation object represented by the test object text then includes: acquiring, from the correspondence, the target region position whose text content matches the test object text, and taking that target region position as the operation object position.
In one embodiment, constructing the correspondence between text content and region position includes: acquiring the current text content, the center position of the current text content, and the center position of each operation-object region in the display page; determining the positional difference between the current text center and each region center, and taking the operation-object region with the smallest difference as the current operation-object region corresponding to the current text content; and recording the correspondence between the current text content and the region position of that region.
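The nearest-center binding rule can be sketched as follows (an illustrative Python sketch; the function name and the representation of positions as center-point tuples are assumptions):

```python
import math

def nearest_region(text_center, region_centers):
    """Bind a recognized text label to the operation-object region whose
    center is closest to the text's center, per the embodiment's rule for
    building the text-content -> region-position correspondence.
    Returns (index_of_best_region, distance)."""
    distances = [math.hypot(cx - text_center[0], cy - text_center[1])
                 for cx, cy in region_centers]
    best = min(range(len(region_centers)), key=distances.__getitem__)
    return best, distances[best]
```

The returned distance can then be compared against the difference threshold of the following embodiment to decide whether the label and the control should be treated as one combined entry.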
In one embodiment, after the operation-object region with the smallest positional difference is taken as the current operation-object region, the method further includes: when the positional difference between the current text content and the current operation-object region is smaller than a preset difference threshold, combining the current text content with preset text content to form combined text content; and recording the correspondence between the combined text content and the region position of the current operation-object region.
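The combining rule can be sketched as a key-building helper (an illustrative Python sketch; the preset suffix string and the function name are assumptions, since the patent does not specify the preset text content):

```python
def correspondence_key(text, diff, threshold, preset=" input box"):
    """Form the key stored against a region position. When the label's
    distance to the region center is below the threshold, the embodiment
    appends preset text so the combined content names the control itself
    (e.g. 'account number' -> 'account number input box')."""
    return text + preset if diff < threshold else text
```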
In one embodiment, acquiring the text content and text position of each text image contained in a display page of the service system includes: performing contour recognition on a page image of the display page, obtaining the text images contained in the page image from the contour-recognition result, and obtaining the text position of each text image in the display page; inputting each text image into a pre-trained text recognition model, which extracts the text features and the text sequence of the image; and obtaining the text content corresponding to the text image from the text features and text sequence.
In one embodiment, the test operation text includes an input-operation text, and the plurality of pieces of operation instruction information bound to it comprise click instruction information and input instruction information. Executing each piece of operation instruction information at the operation object position in the display page then includes: executing a click instruction at the operation object position, acquiring the data to be input from a data table bound to that position, and inputting the data at the operation object position.
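The click-then-input expansion can be sketched as follows (an illustrative Python sketch; the `driver` interface with `click()` and `type_text()` methods is a hypothetical stand-in for whatever GUI-automation layer the terminal uses, and keying the data table by position is an assumption):

```python
def run_input_operation(position, data_table, driver):
    """Execute the two instructions bound to an 'input' test operation:
    first a click at the operation-object position, then typing the data
    looked up in the data table bound to that position."""
    driver.click(position)              # click instruction: focus the input box
    value = data_table[position]        # data to be input, from the bound table
    driver.type_text(position, value)   # input instruction: enter the data
    return value
```

Binding both instructions to the single verb "input" is what spares the tester from knowing that the service flow requires a focusing click before data entry.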
In one embodiment, the automated test interface further displays object description information for each test operation object contained in a display page of the service system. Before the data to be input is acquired from the data table bound to the operation object position, the method further includes: in response to a data-table binding operation triggered in the automated test interface for target object description information, acquiring the target data table corresponding to that binding operation; and binding the operation object position of the test operation object described by the target object description information to the target data table.
In a second aspect, the present application also provides an automated testing apparatus. The device comprises:
the test text input module is used for displaying an automatic test interface, acquiring test text information input in the automatic test interface for a service system to be tested, acquiring at least one test operation text according to the test text information, and acquiring a test object text corresponding to the test operation text;
the operation information acquisition module is used for acquiring a plurality of operation instruction information which is bound with the test operation text in advance and acquiring the corresponding operation object position of the test operation object represented by the test object text in the display page of the service system;
the operation result display module is used for executing each piece of operation instruction information at the operation object position in the display page of the service system, obtaining the operation result of each piece of operation instruction information, and displaying the operation result in the automatic test interface.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the following steps:
Displaying an automatic test interface, acquiring test text information input in the automatic test interface for a service system to be tested, acquiring at least one test operation text according to the test text information, and acquiring a test object text corresponding to the test operation text;
acquiring a plurality of operation instruction information which is bound with the test operation text in advance, and acquiring the corresponding operation object position of the test operation object represented by the test object text in a display page of the service system;
executing each piece of operation instruction information at the operation object position in the display page of the service system, acquiring an operation result of each piece of operation instruction information, and displaying the operation result in the automatic test interface.
In a fourth aspect, the present application also provides a computer-readable storage medium. A computer program is stored on the computer-readable storage medium which, when executed by a processor, implements the following steps:
displaying an automatic test interface, acquiring test text information input in the automatic test interface for a service system to be tested, acquiring at least one test operation text according to the test text information, and acquiring a test object text corresponding to the test operation text;
Acquiring a plurality of operation instruction information which is bound with the test operation text in advance, and acquiring the corresponding operation object position of the test operation object represented by the test object text in a display page of the service system;
executing each piece of operation instruction information at the operation object position in the display page of the service system, acquiring an operation result of each piece of operation instruction information, and displaying the operation result in the automatic test interface.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
displaying an automatic test interface, acquiring test text information input in the automatic test interface for a service system to be tested, acquiring at least one test operation text according to the test text information, and acquiring a test object text corresponding to the test operation text;
acquiring a plurality of operation instruction information which is bound with the test operation text in advance, and acquiring the corresponding operation object position of the test operation object represented by the test object text in a display page of the service system;
Executing each piece of operation instruction information at the operation object position in the display page of the service system, acquiring an operation result of each piece of operation instruction information, and displaying the operation result in the automatic test interface.
In the automated testing method, apparatus, computer device, storage medium and computer program product above, an automated test interface is displayed and test text information entered in it for the service system under test is acquired; at least one test operation text, and the test object text corresponding to it, are obtained from the test text information; a plurality of pieces of operation instruction information bound in advance to the test operation text are acquired, together with the position, in a display page of the service system, of the test operation object represented by the test object text; and each piece of operation instruction information is executed at that position, its operation result acquired, and the results displayed in the automated test interface.
Because an automated test interface is displayed, the user only needs to enter test text information in it; the bound operation instructions are then determined from the test operation texts in that information and executed at the positions, in the display page of the service system, of the operation objects represented by the test object texts, and the operation results are shown in the interface. The user can thus run an automated test of the service system simply by typing test text. Moreover, since each test operation text is bound to its operation instructions in advance, the user does not need a deep understanding of every service-flow step of the system; a rough knowledge of the test steps suffices. This lowers the requirements placed on testers, enlarges the pool of qualified testers, and improves the efficiency of automated testing of the service system.
Drawings
FIG. 1 is a flow diagram of an automated test method in one embodiment;
FIG. 2 is a flow diagram of acquiring test operation text and test object text in one embodiment;
FIG. 3 is a flow diagram of obtaining target text tokens in one embodiment;
FIG. 4 is a flow chart of a method for establishing a correspondence between text content and location of a region in one embodiment;
FIG. 5 is a flow chart of a method for establishing a correspondence between text content and location of a region in one embodiment;
FIG. 6 is a flow diagram of acquiring text content and text position of a text image in one embodiment;
FIG. 7 is a flow chart of automated testing based on image recognition and natural language processing in one embodiment;
FIG. 8 is a flow diagram of image recognition in one embodiment;
FIG. 9 is a flow diagram of natural language processing in one embodiment;
FIG. 10 is a schematic diagram of a syntactic analysis result in one embodiment;
FIG. 11 is a flow diagram of generating an operation instruction in one embodiment;
FIG. 12 is a flow diagram of a system process in one embodiment;
FIG. 13 is a schematic diagram of a test interface of an automated test tool in one embodiment;
FIG. 14 is a block diagram of an automated testing apparatus in one embodiment;
FIG. 15 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in FIG. 1, an automated testing method is provided. The method is described here as applied to a terminal by way of illustration; it may equally be applied to a server, or to a system comprising both the terminal and the server and implemented through their interaction. In this embodiment, the method includes the following steps:
step S101, displaying an automatic test interface, acquiring test text information input in the automatic test interface for a service system to be tested, acquiring at least one test operation text according to the test text information, and acquiring a test object text corresponding to the test operation text.
The automated test interface is a platform interface, displayed in the terminal, for automatically testing a service system. The service system under test is the service system that needs to be tested automatically, and the test text information is the natural-language test text entered by a user for automatically testing that system.
For example, the service system may be a hooking system, and the test text information is language text for automatically testing its service flow. The text entered by the user may read: "input card number, account number, click query button to query, input hooked account number, click hook button to hook the relation".
The test operation text is the portion of the test text that represents a test operation, and the test object text is the text of the operation object on which that test operation acts. For example, for the test text "input card number, account number", the test operation text is "input" and the corresponding test object text is "card number, account number"; similarly, for "click query button", the test operation text is "click" and the test object text is "query button".
Specifically, when a user performs an automated test, the terminal first presents the automated test interface. In that interface the user selects the service system to be tested and enters, as the test text information, the text describing the automated test of that system. The terminal then extracts from that text at least one test operation text representing a test operation, together with the test object text corresponding to each test operation text.
Step S102: a plurality of pieces of operation instruction information bound in advance to the test operation text are acquired, together with the position, in the display page of the service system, of the test operation object represented by the test object text.
Operation instruction information is an operation instruction bound in advance to a test operation text. For the service system, a single test operation may require several operation instructions: an input operation, for example, requires first clicking the input box and then entering the data. If the user does not know this detail of the service flow, the automated test could fail. In this embodiment the terminal therefore binds the corresponding operation instruction information to the test operation text in advance, e.g. binding both a click instruction and an input instruction to the input operation.
The operation object position is the position, in the display page of the service system, of the test operation object represented by the test object text. For example, if the test object text is "account number", the corresponding test operation object may be the account-number input box, and its position in the service system's display page can be obtained in advance, with the correspondence between that position and the test object text built beforehand.
Specifically, once the terminal has obtained the test operation text, it acquires the plurality of pieces of operation instruction information bound to it, and at the same time acquires, from the pre-built correspondence, the operation object position corresponding to the test object text.
Step S103: each piece of operation instruction information is executed at the operation object position in the display page of the service system, the operation result of each piece is acquired, and the results are displayed in the automated test interface.
After the operation instruction information corresponding to the test operation text and the operation object position corresponding to the test object text have been obtained in step S102, each piece of operation instruction information is executed at that position in the display page of the service system to obtain the corresponding operation result, which is displayed in the automated test interface, thereby realizing the automated test.
For example, for the test text "input account number", after determining that the operation instructions corresponding to the input operation are a click instruction plus an input instruction, the terminal triggers them at the position of the account-number input box in the service system: it first clicks that position, then performs the account-number input there, obtains the input result, and displays it, thereby implementing the automated test.
In this automated testing method, the terminal displays an automated test interface and acquires the test text information entered in it for the service system under test; it obtains at least one test operation text and its corresponding test object text from that information, acquires the pieces of operation instruction information bound in advance to the test operation text together with the position, in the display page, of the operation object represented by the test object text, executes each piece of operation instruction information at that position, and displays the operation results in the interface. The user therefore only needs to type test text to run an automated test of the service system, which improves testing efficiency.
In one embodiment, as shown in fig. 2, step S101 may further include:
step S201, a plurality of target text word segments corresponding to the test text information are obtained, and word segment types corresponding to the target text word segments and association relations of the target text word segments are obtained.
A target text token is a text token contained in the test text information; the token type is the part of speech of each target text token, such as noun, verb, conjunction or punctuation; and the association relations of the target text tokens are the relations between the individual tokens.
Specifically, the terminal may obtain the text tokens contained in the test text information through a tokenization algorithm, and then apply a syntactic analysis algorithm, for example the Stanford Parser, to obtain the token type of each target text token and the association relations between the tokens.
Step S202: a first text token whose token type is verb is obtained from the target text tokens, and is taken as the test operation text when it belongs to a preset operation-text lexicon; the operation-text lexicon pre-stores the text tokens corresponding to a plurality of test operations.
The first text token is a target text token whose token type is verb, and the operation-text lexicon is built in advance from the text tokens corresponding to the various test operations. Since an automated test generally only needs operations such as inputting and clicking, a lexicon of the corresponding tokens can be constructed beforehand, e.g. containing operations such as: click, double-click, right-click, scroll, wheel-click, and simultaneous left-and-right click.
Specifically, after obtaining the target text word, the terminal may determine the text word with the word type representing the verb type from the target text word as the first text word, and determine whether each first text word belongs to a preset operation text word library, if so, use the first text word as a test operation text, and if not, discard the first text word.
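The verb filtering of steps S201 to S202 can be sketched as follows. This is a minimal illustration assuming a tokenizer has already produced (word, part-of-speech) pairs; the operation lexicon shown is hypothetical, not the patent's actual word library.

```python
# Hypothetical operation text word library; the real library would hold the
# text words for click, double click, right click, scroll, and so on.
OPERATION_LEXICON = {"click", "double click", "right click", "scroll", "input"}

def extract_operation_texts(tagged_words):
    """Step S202: keep verb-type words that belong to the operation lexicon;
    verbs outside the lexicon are discarded."""
    operations = []
    for word, pos in tagged_words:
        if pos == "VERB" and word in OPERATION_LEXICON:
            operations.append(word)  # becomes a test operation text
    return operations

# Words and tags as a syntax analyzer might emit them for "input account".
tagged = [("input", "VERB"), ("account", "NOUN"), ("verify", "VERB")]
print(extract_operation_texts(tagged))  # ['input']
```

Here "verify" is a verb but is not in the lexicon, so it is dropped, matching the discard rule of step S202.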
Step S203, obtaining a second text word associated with the test operation text according to the association relation, and taking the second text word as a test object text corresponding to the test operation text; the second text word is the text word of the noun type in the target text word.
The second text word segmentation refers to target text word segmentation with word segmentation type as noun type, and after the terminal obtains the test operation text in step S202, the terminal may also find a second text word segmentation associated with each test operation text based on the association relationship between the target text word segmentation obtained in step S201, and use the second text word segmentation as a test object text corresponding to the test operation text.
In this embodiment, after obtaining the test text information, the terminal may perform word segmentation processing on the test text information to obtain the target text words, and determine the word type of each target text word and the association relationships between the target text words. A target text word whose word type is a verb type and which belongs to the preset operation text word library is used as a test operation text, while a target text word whose word type is a noun type and which has an association relationship with the test operation text is used as a test object text, so that the recognition efficiency of the test operation text and the test object text may be improved.
Further, step S201 may further include: obtaining a plurality of first initial text word fragments corresponding to the test text information through a forward maximum matching algorithm, and obtaining a plurality of second initial text word fragments corresponding to the test text information through a reverse maximum matching algorithm; and obtaining the target text word from the first initial text word or the second initial text word.
The principle of the forward maximum matching algorithm is to match the longest meaningful word starting from the beginning of the test text information and proceeding toward the end; the first initial text words are the initial text words obtained through the forward maximum matching algorithm. The principle of the reverse maximum matching algorithm is to match in the opposite direction, i.e., starting from the end of the test text information and proceeding toward the beginning; the second initial text words are the initial text words obtained by matching through the reverse maximum matching algorithm.
In this embodiment, the terminal may perform word segmentation processing on the test text information by using a forward maximum matching algorithm and a reverse maximum matching algorithm, to obtain a first initial text word segment and a second initial text word segment, respectively, and then, the terminal may select one of the first initial text word segment and the second initial text word segment as the target text word segment.
In this embodiment, the terminal may obtain the first initial text word segment and the second initial text word segment corresponding to the test text information through the forward maximum matching algorithm and the reverse maximum matching algorithm, respectively, and then may select one of the two initial text word segments as the target text word segment, and in this way, the two word segment matching algorithms may be fused to obtain the target text word segment, so as to improve accuracy of obtaining the target text word segment.
Further, as shown in fig. 3, obtaining the target text word from the first initial text word segment or the second initial text word segment may further include:
step S301, obtaining the first word segmentation matching iteration number of the first initial text word segmentation and the second word segmentation matching iteration number of the second initial text word segmentation.
The first word segmentation matching iteration number refers to the number of iterations with which the forward maximum matching algorithm obtains the first initial text words, and the second word segmentation matching iteration number refers to the number of iterations with which the reverse maximum matching algorithm obtains the second initial text words. In this embodiment, the forward and reverse maximum matching algorithms find the longest meaningful word in an iterative manner, so the terminal may separately count the number of iterations used by the forward maximum matching algorithm to obtain the first initial text words and the number of iterations used by the reverse maximum matching algorithm to obtain the second initial text words, and use them as the first and second word segmentation matching iteration numbers, respectively.
Step S302, under the condition that the matching iteration times of the first word segmentation and the second word segmentation are different, the initial text word segmentation with smaller matching iteration times is used as a target text word segmentation;
Step S303, under the condition that the number of the matching iteration times of the first word segmentation is the same as that of the second word segmentation, acquiring the first word segmentation number of the first initial text word segmentation and the second word segmentation number of the second initial text word segmentation.
If the number of the matching iteration times of the first word segmentation is different from the number of the matching iteration times of the second word segmentation, the terminal can select the initial text word segmentation with smaller word segmentation matching iteration times from the first initial text word segmentation and the second initial text word segmentation as the target text word segmentation. If the number of the matching iterations of the first word segment is the same as the number of the matching iterations of the second word segment, the terminal may further obtain the number of the word segments included in the first initial text word segment, i.e., the number of the first word segment, and the number of the word segments included in the second initial text word segment, i.e., the number of the second word segment.
Step S304, under the condition that the number of the first word segmentation is different from the number of the second word segmentation, the initial text word segmentation with smaller word segmentation number is used as a target text word segmentation;
in step S305, when the number of the first word segments is the same as the number of the second word segments, the second initial text word segment is used as the target text word segment.
After the first word segmentation number and the second word segmentation number are obtained, the terminal may compare the two; if they differ, for example if the first word segmentation number is smaller than the second, the terminal may use the initial text words with the smaller word segmentation number, namely the first initial text words in this example, as the target text words. If the two numbers are the same, the terminal defaults to using the second initial text words as the target text words.
In this embodiment, after the terminal obtains the first initial text word segmentation and the second initial text word segmentation, the target text word segmentation may be determined based on the iteration number and the word segmentation number obtained by the initial text word segmentation, so that the accuracy of obtaining the target text word segmentation may be further improved.
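The two-pass segmentation and the selection rules of steps S301 to S305 can be sketched as follows. This is an illustrative Python sketch under the assumption that one "iteration" means one candidate substring compared against the lexicon; the sample text and lexicon are toy examples standing in for unspaced Chinese text.

```python
def max_match(text, lexicon, max_len=6, reverse=False):
    """Greedy maximum matching; returns (words, iterations), counting one
    iteration per candidate substring compared against the lexicon."""
    words, iters, s = [], 0, text
    while s:
        n = min(max_len, len(s))
        while n > 0:
            piece = s[-n:] if reverse else s[:n]
            iters += 1
            if n == 1 or piece in lexicon:
                break                   # longest match found (or single char)
            n -= 1
        if reverse:
            words.insert(0, piece)
            s = s[:-n]
        else:
            words.append(piece)
            s = s[n:]
    return words, iters

def choose_segmentation(text, lexicon, max_len=6):
    fwd, fwd_iters = max_match(text, lexicon, max_len, reverse=False)
    rev, rev_iters = max_match(text, lexicon, max_len, reverse=True)
    if fwd_iters != rev_iters:          # step S302: fewer iterations wins
        return fwd if fwd_iters < rev_iters else rev
    if len(fwd) != len(rev):            # step S304: fewer words wins
        return fwd if len(fwd) < len(rev) else rev
    return rev                          # step S305: default to the reverse result

print(choose_segmentation("clickbutton", {"click", "button"}))
# ['click', 'button']
```

Defaulting to the reverse result in step S305 is consistent with the common observation that reverse maximum matching tends to segment Chinese text more accurately than the forward pass.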
In one embodiment, as shown in fig. 4, before step S102, the method may further include:
step S401, obtaining text content corresponding to a text image contained in a display page of a service system and a text position of the text image in the display page.
The text image refers to an image of text contained in a presentation page of the service system, the image can be obtained by performing image recognition on the presentation page, the text content refers to content containing characters in the text image, and the text position refers to a position of the text image in the presentation page of the service system, and the position can be characterized in terms of coordinates.
Specifically, the terminal may perform image recognition on a display page of the service system in advance, find a text image included in the display page, and then perform text recognition on the text image to obtain text content included in the text image, and determine a position coordinate of the text image in the display page of the service system.
Step S402, an operation object area contained in a display page of the service system and an area position of the operation object area in the display page are obtained.
The region position refers to the position of the operation object region in the display page, and similar to the text position, the position can be characterized by a form of position coordinates. For the service system, the operation object area may generally be composed of a selection frame, a button, an input frame, an output frame and the like, and the above areas are generally rectangular areas, so that the terminal may identify, through a rectangular area identification algorithm, the rectangular areas included in the display page of the service system, and use the rectangular areas as the operation object areas, and may also identify the position of each rectangular area in the display page of the service system, as the area position of the operation object area in the display page.
Step S403, according to the text position and the area position, constructing the corresponding relation between the text content and the area position.
Finally, after the text position of the text image is obtained in step S401 and the region position of the operation object region is obtained in step S402, the correspondence between the text content of the text image and the region position may be constructed based on the relationship between the text position and the region position.
Step S102 may further include: and acquiring a target area position corresponding to the text content of the test object text according to the corresponding relation between the text content and the area position, and taking the target area position as an operation object position.
After the construction of the correspondence between text contents and region positions is completed, once the terminal obtains the text content of the test object text, it may determine the region position corresponding to that text content based on the correspondence and use it as the target region position, and then use the target region position as the operation object position, in the display page of the service system, of the test operation object represented by the test object text.
For example, suppose the correspondences account number-position A, card number-position B, and query button-position C are pre-constructed. After the test object text is obtained, if the test object text is "account number", the terminal determines the corresponding operation object position to be position A, and if the test object text is "card number", the terminal determines the corresponding operation object position to be position B.
In this embodiment, the terminal may pre-identify the text content of the text image of the service system and the region position of the text image, and combine the identified region position of the operation object region to construct a correspondence between the text content and the region position, so that the pre-construction of the correspondence between the text content and the region position may be implemented, thereby improving the efficiency of acquiring the operation object position.
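A minimal sketch of the lookup described above, assuming the correspondence has already been built as a mapping from text content to region coordinates; all names and coordinates here are illustrative.

```python
# Pre-built correspondence from step S403; names and coordinates illustrative.
correspondence = {
    "account number": (120, 80),   # position A: account input box
    "card number":    (120, 140),  # position B: card number input box
    "query button":   (300, 200),  # position C: query button
}

def operation_object_position(test_object_text):
    """Step S102 lookup: resolve a test object text to its region position."""
    return correspondence.get(test_object_text)

print(operation_object_position("account number"))  # (120, 80)
```

Because the correspondence is built once in advance, each lookup at test-execution time is a constant-time dictionary access, which is where the stated efficiency gain comes from.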
Further, as shown in fig. 5, step S403 may further include:
step S501, obtaining the current text content, the current text center position corresponding to the current text content, and the region center position of each operation object region in the display page.
The current text content can be any one of text content contained in a display page of the service system identified by the terminal, the current text center position is the text content center position coordinate of the current text content, and the region center position refers to the region center position coordinate of each operation object region in the display page. In this embodiment, after obtaining each text content, the terminal may select one of the text content as the current text content, and may further obtain a center position of the current text content, that is, the current text center position, according to a text position corresponding to the current text content, and determine a corresponding region center position according to a region position of each operation object region in a display page of the service system.
Step S502, determining the position difference between the current text center position and the center position of each region, and taking the operation object region with the smallest position difference as the current operation object region corresponding to the current text content.
The position difference refers to the coordinate difference between the current text center position and each region center position. In general, a text content is close to its corresponding operation object region; for example, the text content "account" in the display page is close to the input box for entering the account. The terminal may therefore screen the operation object regions based on the position differences, i.e., use the operation object region whose center has the smallest position difference from the current text center position as the operation object region corresponding to the current text content, namely the current operation object region.
Step S503, constructing a corresponding relationship between the current text content and the region position of the current operation target region.
Finally, after the terminal determines the current operation object region corresponding to the current text content, the correspondence between the current text content and the region position of the current operation object region may be constructed, and the correspondence between every text content and the region position of its operation object region may be obtained by repeating the above process.
In this embodiment, the terminal may identify the operation object region corresponding to each text content based on the text center position of each text content and the region center position difference value of each operation object region in the display page, so as to construct a correspondence between each text content and the region position of the operation object region, thereby further improving the efficiency of constructing the correspondence.
In addition, after step S502, the method may further include: combining the current text content with the preset text content to form combined text content under the condition that the position difference value of the current text content and the current operation object area is smaller than a preset difference value threshold value; and constructing a corresponding relation between the combined text content and the region position of the current operation object region.
If the position difference between the current text center position and the region center position of the current operation object region is small, for example smaller than a preset difference threshold, it indicates that the current text content is likely located inside the current operation object region, which is generally the case only for buttons included in the presentation page. For example, for a query button, the text center coordinate of the text content "query" is close to the center coordinate of the button. Therefore, if the position difference between the current text center position and the region center position of the current operation object region is small, the current text content may be combined with a preset text content to obtain a combined text content, and the correspondence between the combined text content and the region position of the current operation object region may be constructed. For example, the preset text content may be "button", in which case the terminal may combine the current text content "query" with the preset text content "button" to form the combined text content "query button", and construct the correspondence between "query button" and the region position of the current operation object region.
In this embodiment, if the position difference between the current text center position and the region center position of the current operation object region is smaller, the terminal may further combine the current text content with the preset text content to obtain a combined text content, so as to construct a corresponding relationship between the combined text content and the region position of the current operation object region, and through the above process, the corresponding relationship between the pre-constructed text content and the region position may be more accurate.
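Steps S501 to S503, together with the combination rule above, can be sketched as follows. The distance threshold and all coordinates are assumed values for illustration, and Euclidean distance between center points stands in for the "position difference".

```python
import math

DIFF_THRESHOLD = 10.0  # assumed pixel threshold; a tunable parameter

def bind_text_to_region(text, text_center, regions, preset="button"):
    """Pick the operation object region whose center is nearest to the text
    center (steps S501-S502); if the text center (almost) coincides with the
    region center, append the preset word before binding (step S503)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    region_id, center = min(regions.items(), key=lambda kv: dist(text_center, kv[1]))
    if dist(text_center, center) < DIFF_THRESHOLD:
        text = f"{text} {preset}"      # e.g. "query" -> "query button"
    return text, region_id

regions = {"input_box": (120.0, 80.0), "button_1": (300.0, 200.0)}
print(bind_text_to_region("query", (298.0, 202.0), regions))
# ('query button', 'button_1')
print(bind_text_to_region("account", (100.0, 80.0), regions))
# ('account', 'input_box')
```

The "query" label sits almost on top of the button's center, so it is combined into "query button"; the "account" label sits beside its input box, outside the threshold, so it binds under its original text.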
In one embodiment, as shown in fig. 6, step S401 may further include:
step S601, carrying out contour recognition on page images of the display page, acquiring text images contained in the page images according to contour recognition results, and acquiring text positions of the text images in the display page.
In this embodiment, the text image may be obtained by performing contour recognition on the page image. The process may be to convert the page image of the display page into a grayscale image, binarize the grayscale image, perform dilation and erosion processing, dilate again, and then obtain the text images included in the page image, together with the position coordinates of each text image in the page image, through a contour recognition algorithm, for example the findContours contour recognition algorithm provided by OpenCV.
Step S602, inputting each text image into a pre-trained text recognition model, extracting text characteristics of the text image and text sequence of each text image through the text recognition model;
step S603, obtaining text content corresponding to the text image based on the text features and the text sequence.
The text recognition model is a pre-trained neural network model for recognizing the text content contained in a text image; the text features may refer to the image features extracted from the text image, and the text sequence refers to the image sequence of the text image. In this embodiment, the neural network model for recognizing text content, i.e., the text recognition model, may be a CRNN model. The model extracts the text features of the input text image to obtain a corresponding feature map, and Bi-LSTM may then be used to extract the front-and-back order information of the text; a text feature vector sequence may be formed from this information and the feature map, so as to obtain the text content corresponding to the text image.
In this embodiment, the terminal may perform contour recognition on the page image of the display page to obtain the text image included therein and the position of the text image on the display page, and then may further obtain the text content of the text image by using the text recognition model.
In one embodiment, the test operation text includes an input operation text, and the plurality of operation instruction information bound to the test operation text includes: clicking instruction information and inputting the instruction information; step S103 may further include: executing a click command at the position of an operation object in the display page, and acquiring data to be input according to a data table bound with the position of the operation object; the data to be input is input to the operation object position.
In this embodiment, the test operation text may be a text representing the input operation, and the text may be pre-bound with click command information and input command information, and since the input operation generally needs to be performed after the click operation is performed on the input area, the pre-bound operation command information of the input operation text may include the click command information and the input command information.
Specifically, if the test operation text obtained by the terminal is an input operation text, the plurality of bound operation instruction information may include click instruction information and input instruction information. The terminal may then execute the click instruction at the operation object position in the presentation page, obtain the data to be input from the data table bound with the operation object position, and execute the input operation to enter that data at the operation object position.
For example, a certain test text message may be a text message of "input account number", the corresponding test operation text is "input", the test operation text belongs to the input operation text, and the test object text is "account number", and the corresponding operation object position may be the position of the account number input box. Therefore, for the execution of the input operation, the click operation may be triggered at the position of the account input box, and one of the account information may be obtained from the data table bound with the account input box, for example, the account data table storing a plurality of account information, as the data to be input, so that the account information may be input in the account input box to implement the account input.
In this embodiment, if the test operation text is an input operation text, the terminal may execute the click command at the position of the operation object in the display page, and after obtaining the data to be input from the data table bound to the position of the operation object, may execute the input operation on the data to be input, and input the data to the position of the operation object, thereby implementing data input and further improving the data input efficiency.
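The input-operation branch of step S103 can be sketched as follows. The `click` and `type_text` helpers are hypothetical stand-ins for a GUI-automation backend, and the data table is a plain dictionary here; none of these names come from the patent itself.

```python
# Hypothetical binding from operation object position to its data table;
# the account numbers below are made-up illustrative values.
data_tables = {(120, 80): ["622200001111", "622200002222"]}

def click(position):
    """Stand-in for the bound click instruction of a GUI-automation backend."""
    print(f"click at {position}")

def type_text(position, value):
    """Stand-in for the bound input instruction."""
    print(f"type '{value}' at {position}")

def execute_input_operation(position):
    """Click the operation object first, fetch one record from the data table
    bound to that position, then input it at the same position."""
    click(position)                     # click instruction information
    value = data_tables[position][0]    # the data to be input
    type_text(position, value)         # input instruction information
    return value

execute_input_operation((120, 80))
```

The click-before-input ordering mirrors the observation above that an input area must receive a click (focus) before the input operation can be performed.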
In addition, the automated test interface may further display object description information of each test operation object included in the display page of the service system, and before obtaining the data to be input according to the data table bound with the operation object position, the method may further include: responding to a data table binding operation triggered for target object description information in the automated test interface, and acquiring a target data table corresponding to the data table binding operation; and binding the operation object position of the test operation object described by the target object description information with the target data table.
The object description information is text description information for describing each test operation object, for example, the object name information of the test operation object, such as an account number, a card number, and the like, the information may be bound with a service system to be tested in advance, after a user selects the service system to be tested, the corresponding object description information may be displayed in an automated test interface, or may be obtained by identifying a display page of the service system, and the object description information of the test operation object contained in the display page is identified by image identification.
The data table binding operation is an operation for binding a test operation object with the corresponding data table. Because the data table corresponding to an operation object position needs to be queried when data is input at the operation object position of a test operation object, the user needs to construct the correspondence between operation object positions and data tables in advance, and this construction may be carried out by the user through the displayed automated test interface. Specifically, the user may trigger the data table binding operation on the displayed object description information that needs to be bound to a data table, namely the target object description information, in the automated test interface, and select the data table to be bound, namely the target data table; the terminal may then construct the binding relationship between the operation object position of the test operation object described by that description information and the target data table.
For example, the automated test interface may include object description information such as "account number" and "card number" to represent the account number and the card number, respectively. If a user needs to bind a data table for account input, a data table binding operation may be triggered on the "account number" text displayed in the automated test interface, and the data table bound by this operation may be an account table, so that the terminal may construct the binding between the account table and the operation object position corresponding to the account number, that is, the account input box.
In this embodiment, the terminal may further display object description information of the test operation object in the automatic test interface, so that a user may directly implement binding between the position of the operation object of the test operation object and the target data table in the automatic test interface, thereby reducing operation steps of binding the data table and improving binding efficiency of the data table.
In one embodiment, an automated testing tool based on image recognition and natural language processing is further provided. Characters are recognized using image morphological operation algorithms and a CRNN character recognition algorithm, and regions such as input and output areas in an image are recognized using a contour detection algorithm; computer code instructions are generated using natural language processing; and steps such as the user presetting SQL data query conditions complete the automated testing tool. This can greatly lower the threshold of whole-flow testing, save the time spent preparing large amounts of data and understanding functional business relationships, and improve the testing efficiency of test tasks. As shown in fig. 7, the method specifically includes the following steps:
Step 1: image recognition
The image recognition process can be as shown in fig. 8. Image recognition includes a text position detection step, a text recognition step, a graphic detection step, and a step of establishing a text-to-graphic mapping, where the text recognition step includes three processes: feature extraction, acquisition of front-and-back text information, and transcription recognition.
Step 1-1: because the counter interface of the service system is relatively simple and basically consists of characters, input boxes, output boxes and check boxes, it does not contain excessive interference information; this is character recognition in a simple scene, which can be processed with image morphological operations from computer vision. The specific process is as follows:
(1) Convert the interface image into a grayscale image to obtain the gray value of each pixel.
(2) Binarize the grayscale image to obtain a binary image, i.e., an image containing only pixels with values 0 and 255.
(3) Dilate, so that adjacent regions become connected and the contours become more prominent.
(4) Erode, removing details and further highlighting the contour information.
(5) Dilate again, making the contours more pronounced.
(6) Obtain the coordinate values of the character regions using the findContours contour recognition algorithm provided by OpenCV, and store the coordinates.
(7) Crop and save the images according to the image coordinates, naming them by coordinate; the purpose of this step is to record the relationship between each cropped image and its coordinates in the original image.
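The dilate-erode-dilate pipeline above can be illustrated with a small, dependency-light sketch. Here NumPy shift-and-combine loops stand in for OpenCV's `cv2.dilate`, `cv2.erode` and `findContours`, and the bounding box of the surviving foreground plays the role of one detected character region; the tiny binary grid is a made-up stand-in for a binarized interface image.

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element (OR of shifts)."""
    pad = k // 2
    p = np.pad(img, pad)                        # zero padding outside the image
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode(img, k=3):
    """Binary erosion with a k x k square structuring element (AND of shifts)."""
    pad = k // 2
    p = np.pad(img, pad, constant_values=True)  # treat the border as foreground
    out = np.ones_like(img)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def text_region_bbox(binary):
    """Dilate, erode, dilate again, then return the bounding box (x0, y0, x1, y1)
    of the surviving foreground, standing in for one findContours result."""
    m = dilate(erode(dilate(binary)))
    ys, xs = np.nonzero(m)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# A 10x20 "binarized interface" containing one solid text blob.
page = np.zeros((10, 20), dtype=bool)
page[3:7, 5:15] = True
print(text_region_bbox(page))  # (4, 2, 15, 7)
```

In a real interface image there are many blobs, so OpenCV's contour extraction (step 6) would return one such bounding box per character region rather than a single global box.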
Step 1-2: the core steps of text recognition can be as follows:
step 1-2-1: as shown in table 1, feature extraction uses a CNN network structure, containing 7 convolutional layers and 4 pooled layers. The standard CNN model has a plurality of convolution layers, pooling layers, and fully connected layers, the purpose of convolution and pooling being to extract features of the original image, and the purpose of fully connected layers being to extract correlations between the features for segmentation or classification, whereas in CRNN the purpose of CNN is to extract features, and does not require segmentation or classification, so that fully connected layers do not have to be employed. And extracting the characteristics of the image, reducing the size and increasing the channels. The input height is 32, the width is a fixed value, 160 is selected, and the height of the image is halved four times and the width is halved twice because the 2×2 of the last two layers of pooling is changed into 1×2, and the input and output sizes (channel, height, width) are respectively (1, 32, 160), (512,1, 40), namely 512 feature images with the height of 1 and the width of 40 are obtained after convolution. The input image is a fixed value and has a height of 32, so that the cut picture needs to be stretched in advance.
Table 1: CRNN model network parameter table (the table itself is provided as an image in the original document), where k, s and p represent the convolution kernel size, stride, and padding size, respectively.
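The shape bookkeeping of step 1-2-1 can be checked with a short calculation. This assumes the standard CRNN layout in which the last two pooling layers halve only the height, and a final 2×2 convolution with no padding brings the height from 2 down to 1.

```python
# Input feature map: one grayscale channel, height 32, fixed width 160.
h, w = 32, 160
# Pooling strides (height, width): the last two layers halve only the height.
for stride_h, stride_w in [(2, 2), (2, 2), (2, 1), (2, 1)]:
    h //= stride_h
    w //= stride_w
h -= 1              # final 2x2 convolution, stride 1, no padding: height 2 -> 1
print((512, h, w))  # (512, 1, 40): 40 columns of 512-dimensional features
```

This reproduces the output size (512, 1, 40) stated above: 40 positions along the width, each carrying a 512-dimensional feature, which become the sequence fed to the recurrent layers.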
Step 1-2-2: bi-LSTM is adopted for acquiring the related information before and after the text. The input of the RNN cannot directly adopt the feature map obtained by the CNN, certain adjustment is needed, a feature vector sequence is needed to be extracted from the feature map generated by the CNN, each feature vector is obtained on the feature map according to the row by the row, namely each row contains 512-dimensional features, and the ith feature vector is the connection of the pixels of the ith row of the feature map, so that a sequence is formed. These sequences of feature vectors can be used as inputs to the RNN, each feature vector being used as an input to the RNN at the first time step.
Because plain RNNs suffer from vanishing gradients and cannot acquire longer-range context, CRNN uses an LSTM, whose special gating design allows it to capture long-range dependencies. An LSTM is unidirectional and uses only past information, but recognizing text requires attention to both the preceding and the following context, so a bidirectional LSTM is used to acquire information in both directions. Each input feature vector can be understood as a small region of the original image, and the aim of the RNN is to predict which character that region is; that is, from the input feature vector it predicts a softmax probability distribution over all characters, a vector whose length is the number of character classes, which serves as the input to CTC. Since each time step t has an input feature vector x_t and outputs a probability distribution y_t over all characters, a posterior probability matrix composed of 40 such vectors, each of length equal to the number of character classes, is output. This posterior probability matrix is then fed into the transcription layer.
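The per-time-step softmax described above can be illustrated with a dependency-free sketch (the logits are hypothetical; in the real model they come from the Bi-LSTM outputs):

```python
import math

def softmax(logits):
    """Softmax probability distribution over all character classes."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def posterior_matrix(logits_per_step):
    """One distribution per time step: 40 steps give a 40 x num_classes matrix."""
    return [softmax(step) for step in logits_per_step]
```

Each row sums to 1, and the full matrix has one row per feature vector, i.e. 40 rows for the 40-column feature sequence.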
Step 1-2-3: the layer is a transcription layer, and the prediction generated by RNN is converted into a tag sequence, so that the tag sequence is obtained and then decoded to obtain a recognition result. The training data set adopts a Caffe-ocr Chinese composite data set, 360 ten thousand pictures are taken as a whole, the data is randomly generated by utilizing a Chinese corpus (news + dialect) through changes of fonts, sizes, gray scales, blurring, perspective, stretching and the like, and the dictionary contains 5990 characters (corpus word frequency statistics, full-angle half-angle combination) of Chinese characters, punctuations, english and numbers. Each sample holds 10 characters, which are randomly truncated from sentences in the corpus. The training set and the verification set are divided according to the ratio of 9:1, and the test set is about 6 ten thousand sheets
After the text and text coordinates on the counter interface are recognized, an association is established between them. The data format is { "text": [ coordinates ] }, for example { "account": [23,67] }, where the coordinates are the horizontal and vertical offsets relative to the upper-left corner of the interface image.
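The association format above can be built with a one-line helper (a sketch; the OCR output is assumed to be a list of (text, x, y) triples):

```python
def bind_text_coords(ocr_results):
    """ocr_results: (text, x_offset, y_offset) triples, offsets measured
    from the upper-left corner of the interface image."""
    return {text: [x, y] for text, x, y in ocr_results}
```

For example, bind_text_coords([("account", 23, 67)]) yields {"account": [23, 67]}, matching the example above.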
Step 1-3: selection boxes, buttons, input boxes, output boxes, and the like appear in the image, and in the display interface of the service system these areas are conveniently represented by rectangles. They can therefore be obtained simply by detecting the rectangles in the interface image, and rectangle detection uses operations similar to text detection:
(1) First, apply median filtering for denoising.
(2) Extract the different color channels in turn and detect rectangles in each.
(3) Binarize each channel.
(4) Search for contours with a contour-finding algorithm to obtain the rectangle coordinates, namely the coordinates of the four corners of each rectangle.
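Steps (1) and (3) would normally use an image library such as OpenCV; as a dependency-free illustration of what median filtering and binarization do, here is a pure-Python sketch on a 2-D list of gray values:

```python
import statistics

def median_filter3(img):
    """3x3 median filter; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[yy][xx]
                      for yy in (y - 1, y, y + 1)
                      for xx in (x - 1, x, x + 1)]
            out[y][x] = statistics.median(window)  # replaces salt noise
    return out

def binarize(img, threshold=128):
    """Map every gray value to 0 or 255 against a fixed threshold."""
    return [[255 if v >= threshold else 0 for v in row] for row in img]
```

A single bright noise pixel surrounded by dark pixels is removed by the median filter, which is why it is applied before contour search.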
Step 1-4: after the rectangle coordinates are obtained, the textual descriptions need to be associated with the rectangular regions. The rules for establishing the association are as follows:
(1) The center coordinates of each rectangle are computed first, and then the center coordinates of each textual description. The two sets of coordinates are subtracted, and when the absolute value of the difference is smaller than a threshold (the threshold is modifiable and is set to 3 here), the textual description is understood to lie inside the rectangle. Only buttons satisfy this condition, so the word "button" can be appended to the textual description and a relation with the rectangle coordinates established; for example, [ "query button", [45,78] ] indicates that the query button is located at horizontal offset 45 and vertical offset 78 of the interface.
(2) Because all input and output boxes of the interface are located on the left side of their description fields, the difference between the rectangle center coordinate and the text center coordinate can be taken; the sign of the difference distinguishes the two sides, and its absolute value, which must here be larger than the threshold, rules out button areas. The area with the smallest difference is taken as the area corresponding to the description field; for example, [ "account", [12,22] ] indicates that the center of the input area corresponding to "account" is at horizontal offset 12 and vertical offset 22.
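The two association rules above can be sketched as follows (centers are assumed precomputed; the coordinate values mirror the examples above):

```python
def associate(rect_centers, text_centers, threshold=3):
    """rect_centers: list of (x, y); text_centers: {description: (x, y)}.
    Rule (1): a description whose nearest rectangle center is within the
    threshold lies inside that rectangle, so it is a button.
    Rule (2): otherwise the nearest rectangle is the input/output area
    belonging to the description field."""
    result = {}
    for label, (tx, ty) in text_centers.items():
        # nearest rectangle center by Manhattan distance
        rx, ry = min(rect_centers,
                     key=lambda c: abs(c[0] - tx) + abs(c[1] - ty))
        if abs(rx - tx) < threshold and abs(ry - ty) < threshold:
            result[label + " button"] = [rx, ry]
        else:
            result[label] = [rx, ry]
    return result
```

With the document's examples, a "query" label centered on a rectangle becomes ["query button", [45,78]], while an "account" label to the right of its box maps to the box center [12,22].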
Step 2: Natural language processing
As shown in fig. 9, the natural language processing module includes a word segmentation step, a syntax analysis step, and a tree structure generation step.
Step 2-1: the original natural language first needs word segmentation, whose purpose is to split out each word in the natural language description for subsequent processing, so that the meaning of the description can be understood. Segmentation uses the open-source Chinese lexicon THUOCL. Common segmentation algorithms are the forward longest matching algorithm, the reverse longest matching algorithm, and the bidirectional longest matching algorithm. The principle of forward longest matching is to start at the beginning of the sentence, match the longest meaningful word, and continue matching until the end of the sentence. Reverse longest matching is the opposite: it starts at the end position of the sentence and matches forward. Bidirectional longest matching is a compromise between the two, a composite rule set fusing both matching algorithms, whose flow is as follows: forward matching and reverse longest matching are executed simultaneously; if the numbers of matching iterations of the two differ, the result with fewer iterations is returned; if they are the same, the result containing fewer words is returned; and when the word counts are also the same, the result of reverse longest matching is returned preferentially. In this embodiment, the bidirectional longest matching algorithm is used to segment sentences.
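A minimal sketch of the three matching algorithms, with a toy lexicon (the real implementation uses the THUOCL lexicon; in this sketch the iteration count coincides with the number of words produced, so the tie-break reduces to word count followed by the reverse preference):

```python
def forward_match(text, lexicon, max_len):
    """Forward longest matching: scan from the sentence head."""
    words, i = [], 0
    while i < len(text):
        for l in range(min(max_len, len(text) - i), 0, -1):
            if l == 1 or text[i:i + l] in lexicon:
                words.append(text[i:i + l])
                i += l
                break
    return words

def backward_match(text, lexicon, max_len):
    """Reverse longest matching: scan from the sentence tail."""
    words, j = [], len(text)
    while j > 0:
        for l in range(min(max_len, j), 0, -1):
            if l == 1 or text[j - l:j] in lexicon:
                words.insert(0, text[j - l:j])
                j -= l
                break
    return words

def bidirectional_match(text, lexicon):
    max_len = max(map(len, lexicon))
    fwd = forward_match(text, lexicon, max_len)
    bwd = backward_match(text, lexicon, max_len)
    if len(fwd) != len(bwd):      # the result with fewer words wins
        return fwd if len(fwd) < len(bwd) else bwd
    return bwd                    # tie: prefer the reverse result
```

On the classic example "研究生命起源", forward matching yields ["研究生", "命", "起源"] and reverse matching yields ["研究", "生命", "起源"]; the word counts tie, so the reverse result is preferred.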
Step 2-2: after the word segmentation result is obtained, syntactic analysis is required in order to understand the sentence meaning; the aim is to identify the subject, predicate, object, and so on in the sentence, together with the relations between the individual words. The open-source Stanford Parser can perform this syntactic analysis. For the segmented result of the input "input the card number and account, click the query button to query, input the hanging account, click the hanging button", the syntactic analysis algorithm is called, and the result labels the words in the sentence and their association relations; VV, NN, CC, and PU denote verbs, common nouns, conjunctions, and punctuation, respectively. The syntactic analysis result is shown in fig. 10.
Step 2-3: after the syntactic analysis result is obtained, a tree-shaped storage structure is built according to the part of speech and the relations of each word. For the example used in step 2-2, the generated tree has the following structure:
[ { VV: "input", NN: [ "card number", "account" ] },
{ VV: "click", NN: [ "query" ] },
{ VV: "input", NN: [ "hanging account" ] },
{ VV: "click", NN: [ "attach" ] }
]
Because the system only needs to execute operations such as input and click when performing automated operation, this embodiment establishes an operation instruction name lexicon in advance; vocabulary not in the lexicon is not added to the tree structure. The operation instructions are mainly divided into mouse operations and keyboard operations, and part of the established instruction name lexicon is, for example: [ "click", "double click", "right click", "scroll", "click wheel", "click left and right keys simultaneously" ].
Step 3: generating operation instructions
As shown in fig. 11, the operation instruction generation module includes a step of converting the tree structure into natural language instructions, a step of preprocessing the natural language instructions, and a step of generating code instructions from the natural language.
Step 3-1: before conversion into computer code instructions, the tree structure needs to be converted into a natural language instruction sequence. The conversion is performed by traversal; the traversal result is [ [ "input", "account" ], [ "input", "card number" ], [ "click", "query" ], [ "input", "hanging account" ], [ "click", "hanging" ] ], and an array is used to store this first-pass instruction sequence. Since a certain area needs to be clicked before input or selection, before the instructions are regenerated, a click instruction on the target area is inserted in advance for every non-click operation; for example, [ "input", "account" ] is further processed into [ "click", "account area" ], [ "input", "account" ].
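The traversal and the click-insertion preprocessing can be sketched as follows (key names follow the tree structure of step 2-3; the "area" suffix is an illustrative convention):

```python
def tree_to_instructions(tree):
    """Flatten {VV: verb, NN: [objects]} nodes into [verb, object] pairs."""
    return [[node["VV"], obj] for node in tree for obj in node["NN"]]

def insert_clicks(instructions):
    """Prepend a click on the target area before every non-click operation."""
    out = []
    for verb, obj in instructions:
        if verb != "click":
            out.append(["click", obj + " area"])
        out.append([verb, obj])
    return out
```

For a tree [{VV: "input", NN: ["card number", "account"]}, {VV: "click", NN: ["query"]}] this produces the flat pair list and then inserts a click before each input.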
Step 3-2: after the first instruction conversion of the natural language, a further conversion is needed. For click instructions, because a connection between the recognized text and the corresponding input box, button, or selection box was already established during text recognition, the natural language instruction can be converted into a computer code instruction in this step; for example, [ "click", "query button" ] is converted into [ "click", [23,45] ], where click is the click command and the array is the interface coordinate corresponding to the query button. The array can then be passed as a parameter into a pre-written click script, so that the corresponding operation is performed.
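This second conversion is then a lookup in the text-coordinate association built during recognition (a sketch; the actual click script that consumes the coordinates is omitted):

```python
def to_code_instruction(instruction, widget_coords):
    """Replace the target text with its interface coordinates,
    e.g. ["click", "query button"] -> ["click", [23, 45]]."""
    verb, target = instruction
    return [verb, widget_coords[target]]
```

The resulting coordinate pair is what gets handed to the pre-written click script as a parameter.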
Step 3-3: for input instructions, because the input data requires a corresponding selection, the automated testing tool developed herein provides an sql command writing area. First, sql statements are written for part of the data required by the service and a mapping relation is established; when converting to a computer code instruction, the corresponding sql statement only needs to be executed according to the mapping relation to retrieve the corresponding data. If a service is executed for the first time and no sql condition is preset, the tester needs to write the query sql manually; it can optionally be saved, in which case it is stored in the preset sql mapping for convenient repeated use later. The mapping table records the database name, address, test task, and the sql statement corresponding to the data under that test task. The data, table names, and field names used herein are desensitized, and the data in actual use are not shown.
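The mapping can be sketched as a small registry keyed by test task and data field (persistence to a database table is omitted, and the task, field, and sql strings below are illustrative):

```python
class SqlMapping:
    """Maps (test_task, field) to a preset sql statement."""

    def __init__(self):
        self._mapping = {}

    def save(self, task, field, sql):
        """Save a manually written query for later reuse."""
        self._mapping[(task, field)] = sql

    def lookup(self, task, field):
        """Return the preset sql, or fail if the service runs for the first time."""
        sql = self._mapping.get((task, field))
        if sql is None:
            # no preset condition: the tester must write the query manually
            raise KeyError(f"no preset sql for field {field!r} under task {task!r}")
        return sql
```

A lookup miss corresponds to the first-execution case described above, where the tester writes and optionally saves the query.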
Step 4: system processing
For the tool described in this embodiment, the implementation mainly executes the corresponding mouse and keyboard events according to the generated instructions; for more convenient use, a simple visual interface is written and a test entry is provided. As shown in fig. 12, the system processing module includes a file selection step, an image recognition step, a task selection step, a test gist input step, an sql writing step, and an execution result display step.
Step 4-1: the tool adopts a monthly release, and new interface content can be recognized once each month. An exe file can also be manually added for recognition in each test; otherwise, the current month's version of the system interface is recognized by default.
Step 4-2: clicking image recognition invokes the image recognition module, and the recognized text is displayed in the interface recognition result.
Step 4-3: the task to be tested is selected, and the subsequent input sql is also stored in the sql query condition database corresponding to the task.
Step 4-4: inputting a test key point and calling natural language processing.
Step 4-5: the recognition result is fed back into the interface recognition result area, with the recognized texts separated by line-feed characters. If no sql query condition is preset for this type of task, the sql statement is entered in the input field after the sql query condition is added in the output area, as shown in fig. 13; if no sql statement is entered manually, the default preset sql is used or an error is reported. The test result is obtained after submission.
Step 4-6: the test result is fed back in the execution result area.
This embodiment lowers the threshold for whole-flow testing, saves the time of preparing large amounts of data and learning the functional business relationships, and improves the test efficiency of test tasks.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the steps are not strictly limited to that order and may be executed in other orders. Moreover, at least some of the steps in the flowcharts described above may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and the order of these sub-steps or stages is not necessarily sequential; they may be performed in rotation or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides an automated testing apparatus for implementing the automated testing method described above. The implementation of the solution provided by the apparatus is similar to that described in the above method, so for the specific limitations in one or more embodiments of the automated testing apparatus provided below, reference may be made to the limitations of the automated testing method above, and details are not repeated here.
In one embodiment, as shown in FIG. 14, an automated test apparatus is provided comprising: a test text input module 1401, an operation information acquisition module 1402, and an operation result display module 1403, wherein:
the test text input module 1401 is configured to display an automatic test interface, obtain test text information input in the automatic test interface for a service system to be tested, obtain at least one test operation text according to the test text information, and obtain a test object text corresponding to the test operation text;
an operation information obtaining module 1402, configured to obtain a plurality of operation instruction information that are bound with the test operation text in advance, and obtain a position of an operation object corresponding to the test operation object represented by the test object text in a display page of the service system;
the operation result display module 1403 is configured to execute each operation instruction information at an operation object position in a display page of the service system, obtain an operation result of each operation instruction information, and display the operation result in an automation test interface.
In one embodiment, the test text input module 1401 is further configured to acquire a plurality of target text word segments corresponding to the test text information, and obtain the word segment type corresponding to each target text word segment and the association relation of each target text word segment; acquire a first text word of the verb type represented by the word type in the target text word segments, and take the first text word as the test operation text when the first text word belongs to a preset operation text word library, the operation text word library pre-storing text words corresponding to a plurality of test operations; and acquire a second text word associated with the test operation text according to the association relation, taking the second text word as the test object text corresponding to the test operation text, the second text word being a text word of the noun type among the target text word segments.
In one embodiment, the test text input module 1401 is further configured to obtain a plurality of first initial text tokens corresponding to the test text information through a forward maximum matching algorithm, and obtain a plurality of second initial text tokens corresponding to the test text information through a reverse maximum matching algorithm; and obtaining the target text word from the first initial text word or the second initial text word.
In one embodiment, the test text input module 1401 is further configured to obtain a first word-segmentation matching iteration number of the first initial text word segment and a second word-segmentation matching iteration number of the second initial text word segment; under the condition that the matching iteration times of the first word segmentation and the second word segmentation are different, the initial text word segmentation with smaller matching iteration times is used as a target text word segmentation; under the condition that the matching iteration times of the first word segmentation and the second word segmentation are the same, acquiring the first word segmentation number of the first initial text word segmentation and the second word segmentation number of the second initial text word segmentation; under the condition that the number of the first word segmentation is different from the number of the second word segmentation, the initial text word segmentation with smaller word segmentation number is used as a target text word segmentation; and under the condition that the number of the first word segmentation is the same as the number of the second word segmentation, the second initial text word segmentation is used as the target text word segmentation.
In one embodiment, the automated test equipment further comprises: the display page identification module is used for acquiring text content corresponding to a text image contained in a display page of the service system and the text position of the text image in the display page; acquiring an operation object area contained in a display page of a service system and the area position of the operation object area in the display page; according to the text position and the region position, constructing a corresponding relation between text content and the region position; the operation information obtaining module 1402 is further configured to obtain a target area position corresponding to the text content of the test object text according to the correspondence between the text content and the area position, and take the target area position as the operation object position.
In one embodiment, the display page identification module is further configured to obtain the current text content, a current text center position corresponding to the current text content, and a region center position of each operation object region in the display page; determining the position difference between the current text center position and the center position of each region, and taking the operation object region with the smallest position difference as the current operation object region corresponding to the current text content; and constructing a corresponding relation between the current text content and the region position of the current operation object region.
In one embodiment, the display page identification module is further configured to combine the current text content with the preset text content to form a combined text content when a position difference between the current text content and the current operation object area is less than a preset difference threshold; and constructing a corresponding relation between the combined text content and the region position of the current operation object region.
In one embodiment, the display page identification module is further used for carrying out contour identification on a page image of the display page, acquiring text images contained in the page image according to a contour identification result, and acquiring text positions of the text images in the display page; inputting each text image into a pre-trained text recognition model, and extracting text characteristics of the text images and text sequences of the text images through the text recognition model; and obtaining text content corresponding to the text image based on the text characteristics and the text sequence.
In one embodiment, the test operation text includes an input operation text, and the plurality of operation instruction information bound to the test operation text includes click instruction information and input instruction information; the operation result display module 1403 is further configured to execute a click command at the operation object position in the display page, obtain the data to be input according to a data table bound to the operation object position, and input the data to be input at the operation object position.
In one embodiment, the automated test interface further displays object description information of each test operation object included in a display page of the service system; the operation result display module 1403 is further configured to obtain a target data table corresponding to the data table binding operation in response to the data table binding operation triggered by the description information of the target object in the automation test interface; and binding the operation object position of the test operation object described by the target object description information with the target data table.
The various modules in the automated test equipment described above may be implemented in whole or in part in software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and an internal structure diagram thereof may be as shown in fig. 15. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program, when executed by a processor, implements an automated test method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 15 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application is applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. Volatile memory may include random access memory (Random Access Memory, RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like, without being limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of technical features, it should be considered to be within the scope of this specification.
The above examples represent only a few embodiments of the present application, which are described in greater detail but are not to be construed as limiting the scope of the application. It should be noted that various modifications and improvements could be made by those skilled in the art without departing from the spirit of the present application, and these would fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (14)

1. An automated testing method, the method comprising:
displaying an automatic test interface, acquiring test text information input in the automatic test interface for a service system to be tested, acquiring at least one test operation text according to the test text information, and acquiring a test object text corresponding to the test operation text;
Acquiring a plurality of operation instruction information which is bound with the test operation text in advance, and acquiring the corresponding operation object position of the test operation object represented by the test object text in a display page of the service system;
executing each piece of operation instruction information at the operation object position in the display page of the service system, acquiring an operation result of each piece of operation instruction information, and displaying the operation result in the automatic test interface.
2. The method according to claim 1, wherein the obtaining at least one test operation text according to the test text information, and the test object text corresponding to the test operation text, includes:
acquiring a plurality of target text word segments corresponding to the test text information, and acquiring word segment types corresponding to the target text word segments and association relations of the target text word segments;
acquiring a first text word of a verb type represented by a word type in the target text word, and taking the first text word as the test operation text under the condition that the first text word belongs to a preset operation text word library; the operation text word segmentation library is pre-stored with text word segmentation corresponding to a plurality of test operations;
Acquiring a second text word associated with the test operation text according to the association relation, and taking the second text word as a test object text corresponding to the test operation text; the second text word is the text word of the noun type represented by the word type in the target text word.
3. The method according to claim 2, wherein the obtaining a plurality of target text tokens corresponding to the test text information includes:
acquiring a plurality of first initial text fragments corresponding to the test text information through a forward maximum matching algorithm, and acquiring a plurality of second initial text fragments corresponding to the test text information through a reverse maximum matching algorithm;
and acquiring the target text word from the first initial text word or the second initial text word.
4. The method according to claim 3, wherein acquiring the target text tokens from the plurality of first initial text tokens or the plurality of second initial text tokens comprises:
acquiring a first matching iteration count of the first initial text tokens and a second matching iteration count of the second initial text tokens;
when the first matching iteration count differs from the second matching iteration count, taking the initial text tokens with the smaller matching iteration count as the target text tokens;
when the first matching iteration count equals the second matching iteration count, acquiring a first token count of the first initial text tokens and a second token count of the second initial text tokens;
when the first token count differs from the second token count, taking the initial text tokens with the smaller token count as the target text tokens;
and when the first token count equals the second token count, taking the second initial text tokens as the target text tokens.
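By way of illustration only (not part of the claims), the forward/reverse maximum matching and the tie-breaking rules recited in claims 3 and 4 can be sketched in Python. The function names, the `max_len` window, and the toy vocabulary are assumptions for the sketch, not taken from the patent:

```python
def fmm(text, vocab, max_len=4):
    """Forward maximum matching: at each position, try the longest window
    first and keep the first dictionary hit (single character as fallback).
    Returns (tokens, number of matching iterations)."""
    tokens, i, iterations = [], 0, 0
    while i < len(text):
        for size in range(min(max_len, len(text) - i), 0, -1):
            iterations += 1
            piece = text[i:i + size]
            if size == 1 or piece in vocab:
                tokens.append(piece)
                i += size
                break
    return tokens, iterations

def bmm(text, vocab, max_len=4):
    """Reverse (backward) maximum matching: the same idea, scanning from the end."""
    tokens, i, iterations = [], len(text), 0
    while i > 0:
        for size in range(min(max_len, i), 0, -1):
            iterations += 1
            piece = text[i - size:i]
            if size == 1 or piece in vocab:
                tokens.append(piece)
                i -= size
                break
    return tokens[::-1], iterations

def bidirectional_segment(text, vocab):
    """Claim-4 tie-breaking: fewer matching iterations wins; if equal,
    fewer tokens wins; if still equal, the reverse result is kept."""
    f_tokens, f_iter = fmm(text, vocab)
    b_tokens, b_iter = bmm(text, vocab)
    if f_iter != b_iter:
        return f_tokens if f_iter < b_iter else b_tokens
    if len(f_tokens) != len(b_tokens):
        return f_tokens if len(f_tokens) < len(b_tokens) else b_tokens
    return b_tokens
```

For example, with the toy vocabulary `{"ab", "cd", "abc"}`, the forward pass segments `"abcd"` in fewer iterations than the reverse pass, so its result `["abc", "d"]` is chosen.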
5. The method according to claim 1, wherein before acquiring the operation object position, in the display page of the business system, corresponding to the test operation object characterized by the test object text, the method further comprises:
acquiring text content corresponding to each text image contained in the display page of the business system, and a text position of each text image in the display page;
acquiring each operation object area contained in the display page of the business system, and an area position of each operation object area in the display page;
and constructing a correspondence between the text content and the area position according to the text position and the area position;
wherein acquiring the operation object position, in the display page of the business system, corresponding to the test operation object characterized by the test object text comprises:
acquiring, according to the correspondence between the text content and the area position, a target area position corresponding to the text content of the test object text, and taking the target area position as the operation object position.
6. The method according to claim 5, wherein constructing the correspondence between the text content and the area position according to the text position and the area position comprises:
acquiring current text content, a current text center position corresponding to the current text content, and an area center position of each operation object area in the display page;
determining a position difference between the current text center position and each area center position, and taking the operation object area with the smallest position difference as the current operation object area corresponding to the current text content;
and constructing a correspondence between the current text content and the area position of the current operation object area.
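The nearest-center matching of claim 6 amounts to assigning each recognized text to the operation object area whose center is closest. A minimal sketch follows; the data shapes (`texts` as content-to-center dict, `regions` as (center, position) pairs) are assumptions for illustration:

```python
def nearest_region(text_center, region_centers):
    """Index of the operation-object region whose center is closest
    (Euclidean distance) to the given text center point."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return min(range(len(region_centers)),
               key=lambda i: dist(text_center, region_centers[i]))

def build_mapping(texts, regions):
    """texts: {content: (x, y) center}; regions: [((x, y) center, position)].
    Maps each text content to the position of its nearest region."""
    centers = [c for c, _ in regions]
    return {content: regions[nearest_region(center, centers)][1]
            for content, center in texts.items()}
```

The threshold-based merging of claim 7 is omitted here for brevity.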
7. The method according to claim 6, wherein after taking the operation object area with the smallest position difference as the current operation object area corresponding to the current text content, the method further comprises:
combining the current text content with preset text content to form combined text content when the position difference between the current text content and the current operation object area is smaller than a preset difference threshold;
and constructing a correspondence between the combined text content and the area position of the current operation object area.
8. The method according to claim 5, wherein acquiring the text content corresponding to the text image contained in the display page of the business system and the text position of the text image in the display page comprises:
performing contour recognition on a page image of the display page, acquiring the text images contained in the page image according to a contour recognition result, and acquiring the text position of each text image in the display page;
inputting each text image into a pre-trained text recognition model, and extracting text features and a text sequence of each text image through the text recognition model;
and obtaining the text content corresponding to each text image based on the text features and the text sequence.
9. The method according to claim 1, wherein the test operation text comprises an input operation text, and the plurality of pieces of operation instruction information bound to the test operation text comprise click instruction information and input instruction information;
wherein executing each piece of operation instruction information at the operation object position in the display page of the business system comprises:
executing a click instruction at the operation object position in the display page, and acquiring data to be input according to a data table bound to the operation object position;
and inputting the data to be input at the operation object position.
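The click-then-input flow of claim 9 can be sketched against an abstract GUI driver; the `actions` callables and the data-table shape below are assumptions for illustration, not the patent's implementation:

```python
def execute_input_operation(position, bound_tables, actions):
    """Execute a click instruction at the operation object position,
    fetch the next value from the data table bound to that position,
    then input it there (the two bound instructions of claim 9)."""
    actions["click"](position)             # click instruction
    value = bound_tables[position].pop(0)  # next datum from the bound table
    actions["type"](position, value)       # input instruction
    return value
```

In practice the two callables would wrap a real GUI-automation driver; here they can be any functions that accept a position (and, for typing, a value).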
10. The method according to claim 9, wherein the automated test interface further displays object description information of each test operation object contained in the display page of the business system;
wherein before acquiring the data to be input according to the data table bound to the operation object position, the method further comprises:
in response to a data table binding operation triggered for target object description information in the automated test interface, acquiring a target data table corresponding to the data table binding operation;
and binding the operation object position of the test operation object described by the target object description information to the target data table.
11. An automated testing apparatus, the apparatus comprising:
a test text input module, configured to display an automated test interface, acquire test text information input in the automated test interface for a business system under test, acquire at least one test operation text according to the test text information, and acquire a test object text corresponding to the test operation text;
an operation information acquisition module, configured to acquire a plurality of pieces of operation instruction information pre-bound to the test operation text, and acquire an operation object position, in a display page of the business system, corresponding to a test operation object characterized by the test object text;
and an operation result display module, configured to execute each piece of operation instruction information at the operation object position in the display page of the business system, obtain an operation result of each piece of operation instruction information, and display the operation result in the automated test interface.
12. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 10 when executing the computer program.
13. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 10.
14. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 10.
CN202310286191.1A 2023-03-22 2023-03-22 Automated testing method, apparatus, computer device and storage medium Pending CN116225956A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310286191.1A CN116225956A (en) 2023-03-22 2023-03-22 Automated testing method, apparatus, computer device and storage medium


Publications (1)

Publication Number Publication Date
CN116225956A true CN116225956A (en) 2023-06-06

Family

ID=86578749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310286191.1A Pending CN116225956A (en) 2023-03-22 2023-03-22 Automated testing method, apparatus, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN116225956A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117310591A (en) * 2023-11-28 2023-12-29 广州思林杰科技股份有限公司 Small-size equipment for testing equipment calibration accuracy detection
CN117310591B (en) * 2023-11-28 2024-03-19 广州思林杰科技股份有限公司 Small-size equipment for testing equipment calibration accuracy detection

Similar Documents

Publication Publication Date Title
US10482174B1 (en) Systems and methods for identifying form fields
RU2695489C1 (en) Identification of fields on an image using artificial intelligence
RU2661750C1 (en) Symbols recognition with the use of artificial intelligence
US11816138B2 (en) Systems and methods for parsing log files using classification and a plurality of neural networks
US20220004878A1 (en) Systems and methods for synthetic document and data generation
US10699112B1 (en) Identification of key segments in document images
US20220222292A1 (en) Method and system for ideogram character analysis
CN111488732B (en) Method, system and related equipment for detecting deformed keywords
CN113159013A (en) Paragraph identification method and device based on machine learning, computer equipment and medium
CN116225956A (en) Automated testing method, apparatus, computer device and storage medium
CN112149680A (en) Wrong word detection and identification method and device, electronic equipment and storage medium
CN112749639B (en) Model training method and device, computer equipment and storage medium
EP4060526A1 (en) Text processing method and device
CN109190615A (en) Nearly word form identification decision method, apparatus, computer equipment and storage medium
CN112613293A (en) Abstract generation method and device, electronic equipment and storage medium
CN117332766A (en) Flow chart generation method, device, computer equipment and storage medium
US20230023636A1 (en) Methods and systems for preparing unstructured data for statistical analysis using electronic characters
US20230126022A1 (en) Automatically determining table locations and table cell types
CN111414728B (en) Numerical data display method, device, computer equipment and storage medium
CN111309850B (en) Data feature extraction method and device, terminal equipment and medium
CN115114412B (en) Method for retrieving information in document, electronic device and storage medium
CN113505570B Null-reference checking method, apparatus, device and storage medium
CN116012855A (en) Text content examination method, apparatus, computer device and storage medium
CN115880682A (en) Image text recognition method, device, equipment, medium and product
CN116884019A (en) Signature recognition method, signature recognition device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination