CN114490419B - Heterogeneous architecture cross-cloud testing method, system and computer equipment - Google Patents


Info

Publication number
CN114490419B
Authority
CN
China
Prior art keywords
test
task
cloud
control center
environment
Prior art date
Legal status
Active
Application number
CN202210143296.7A
Other languages
Chinese (zh)
Other versions
CN114490419A (en)
Inventor
王威
黄井泉
尹刚
林露
喻银凤
Current Assignee
Hunan Zhijing Technology Co., Ltd.
Original Assignee
Hunan Zhijing Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Hunan Zhijing Technology Co., Ltd.
Priority to CN202210143296.7A
Publication of CN114490419A
Application granted
Publication of CN114490419B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3664 Environments for testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3688 Test management for test execution, e.g. scheduling of test suites
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to a heterogeneous architecture cross-cloud testing method, system and computer equipment. The method comprises the following steps: a control center receives test tasks sent by cloud clusters, identifies the cloud service information of each test task, and distributes it to the corresponding test cloud according to a preset task distribution strategy configuration; a working terminal pulls the test image from a private image repository, starts a task container as the test-task carrier, pulls the test code repository through a private file management service interface, and constructs a test environment that exposes an external access interface; the working terminal receives the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, transmits the result back to the control center, and asynchronously reclaims the test environment; and the control center displays the result according to the result presentation form. The method can meet the operation and scheduling requirements of massive software testing tasks on the Internet.

Description

Heterogeneous architecture cross-cloud testing method, system and computer equipment
Technical Field
The application relates to the technical field of software testing, and in particular to a heterogeneous architecture cross-cloud testing method, system and computer equipment.
Background
As the guarantee of normal, stable operation of software applications, and as the trend of applications migrating to the cloud heats up, the demand for testing on the cloud is growing ever stronger. Traditional testing directly provides testers with a set of physical equipment and a matching environment, which has inherent defects: it cannot be reused or scaled, cannot be released promptly after testing ends, and incurs extra resource cost. Compared with traditional testing, testing on the cloud can be created and deployed anytime and anywhere, reused, and released promptly, providing more flexibility and convenience. At the same time, because different test tasks use different technical architectures, there are great differences in operating environments, resource requirements, operation modes and other aspects, so cloud testing also faces great challenges.
In the face of the above challenges, existing methods, technologies and services supporting cloud testing are still immature; in particular, the environment resources allocated to a test user often cannot be accurately matched to the test requirements, so test tasks cannot be completed efficiently. Meanwhile, although each cloud service provider offers complete services and supports different technical architectures well, the resources, services and prices of the providers differ and suit different software application scenarios, and the test platform products on the market do not combine the advantages of each cloud service provider well. Therefore, in the cloud test scenario, establishing effective connections between test tasks and different cloud services, and making full use of the characteristics of each cloud service's resources, has great practical value for the accurate matching and cost control of test tasks.
Disclosure of Invention
Based on the foregoing, it is necessary to provide a heterogeneous architecture cross-cloud testing method, system and computer device.
A heterogeneous architecture cross-cloud testing method, the method comprising:
the control center starts a listening thread and listens for cloud cluster registration requests, so that cloud clusters can be registered in the test cloud cluster group;
receiving a test task sent by a cloud cluster, and identifying cloud service information of the test task, so as to distribute the test task to the corresponding test cloud according to a preset task distribution strategy configuration; the test task is a five-tuple predefined by the control center, and the elements of the five-tuple comprise: test code, test set, test method, operating environment and result presentation form;
the working terminal pulls the test image from the private image repository, starts a task container as the test-task carrier, and pulls the test code repository through the private file management service interface, so as to construct a test environment; the test environment collects user input by exposing an external access interface;
the working terminal receives user input for the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, transmits the test result back to the control center, and asynchronously reclaims the test environment;
and the control center displays the result according to the result presentation form.
In one embodiment, the test code is the program code to be tested and test program code of the same language type, stored in a version library; the test set is a plurality of groups of input arrays and expected output arrays; the test set is constructed without a corresponding interactive input-collection environment; the test method is generic output comparison; the operating environment comprises: an image definition, a running-host architecture requirement, and a runtime resource definition; the image definition is a container image equipped with the operating environment, the running-host architecture requirement is the host architecture on which the running task depends, and the runtime resource definition comprises a CPU upper limit, a memory upper limit and a running time upper limit.
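For illustration only, the five-tuple can be modeled as a plain data structure. The following Python sketch is not part of the claimed method; the class and field names are hypothetical stand-ins for the five elements defined above.

```python
from dataclasses import dataclass

@dataclass
class RunEnvironment:
    """Operating-environment element (RE): image, host architecture, resource caps."""
    image: str           # container image equipped with the operating environment
    architecture: str    # host architecture the running task depends on, e.g. "amd64"
    cpu_limit: float     # CPU upper limit (cores)
    mem_limit_mb: int    # memory upper limit
    time_limit_s: int    # running time upper limit

@dataclass
class TestTask:
    """Five-tuple test task: (TC, TS, TM, RE, RP)."""
    test_code: str                   # TC: version-library URL of code under test plus test code
    test_set: list[tuple[str, str]]  # TS: (input, expected output) pairs
    test_method: str                 # TM: e.g. "generic-output-comparison"
    run_env: RunEnvironment          # RE: operating environment
    result_presentation: str         # RP: e.g. "text-diff"
```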
In one embodiment, the method further comprises: the control center starts a listening thread and listens for registration requests initiated by working terminals to the control center requesting to join the test cloud cluster group; the control center receives a registration request and performs authentication verification, discards the request if verification fails, and otherwise adds the cloud cluster to the test cloud cluster group.
In one embodiment, the method further comprises: submitting the test task to the control center for operation according to a load-based screening and scoring strategy; the load of each test cloud is collected and returned by a general monitoring collection component for containers and clusters called by the working terminal, and the control center queries in real time the cluster load conditions returned by the working terminals; all nodes of the test cloud cluster group are sorted according to load condition, and a node in a test cloud is determined as the distribution target according to the sorting result.
In one embodiment, the method further comprises: the working terminal pulls the test image from the private image repository, starts a task container as the test-task carrier, and pulls the test code repository through the private file management service interface, so as to construct a test environment; the test environment collects user input by exposing an external access interface.
In one embodiment, the method further comprises: the working terminal runs the program to be tested in a Jupyter Notebook environment to obtain its output, then runs the test program code, obtains a test result according to the model evaluation algorithm corresponding to the test method, stores the texts and pictures generated over the whole run into a file, transmits the file back to the control center, marks the started task container as to-be-reclaimed, and reclaims it asynchronously.
In one embodiment, the method further comprises: the working terminal runs the program to be tested in a test container environment to obtain its output, then runs the test program code, obtains a test result according to the model evaluation algorithm corresponding to the test method, stores the texts and pictures generated over the whole run into a file, transmits the file back to the control center, marks the started task container as to-be-reclaimed, and reclaims it asynchronously.
A heterogeneous architecture cross-cloud testing system, the system comprising:
a control center, a test cloud cluster group, and working terminals deployed on cloud clusters;
the control center starts a listening thread and listens for cloud cluster registration requests, so that cloud clusters can be registered in the test cloud cluster group; it receives a test task sent by a cloud cluster and identifies cloud service information of the test task, so as to distribute the test task to the corresponding test cloud according to a preset task distribution strategy configuration; the test task is a five-tuple predefined by the control center, and the elements of the five-tuple comprise: test code, test set, test method, operating environment and result presentation form;
the working terminal pulls the test image from the private image repository, starts a task container as the test-task carrier, and pulls the test code repository through the private file management service interface, so as to construct a test environment; the test environment collects user input by exposing an external access interface; the working terminal receives user input for the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, transmits the test result back to the control center, and asynchronously reclaims the test environment;
and the control center displays the result according to the result presentation form.
According to the heterogeneous architecture cross-cloud testing method, system, computer equipment and storage medium, a test task is abstractly decomposed to define its characteristics, and the decomposed five-tuple serves as an identifiable, schedulable index. The approach is broadly compatible with different technical architectures and cloud resource architectures, provides custom integration and extension of distribution strategies, effectively connects different test cloud environments with users' test requirements, avoids the mismatch between the test resources allocated by a test platform and the test requirements, and can meet the operation and scheduling requirements of massive software test tasks on the Internet.
Drawings
FIG. 1 is a flow diagram of a method of cross-cloud testing of a heterogeneous architecture in one embodiment;
FIG. 2 is a flow chart of a cross-cloud testing method of a heterogeneous architecture in another embodiment;
FIG. 3 is a block diagram of a heterogeneous architecture cross-cloud testing system in one embodiment;
FIG. 4 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in FIG. 1, a heterogeneous architecture cross-cloud testing method is provided, comprising the following steps:
Step 102, the control center starts a listening thread and listens for cloud cluster registration requests, so that cloud clusters can be registered in the test cloud cluster group.
In this embodiment, the test cloud cluster group includes a large number of cloud clusters; the working terminal deployed on a cloud cluster sends a registration request, and after successful authentication the cloud cluster joins the test cloud cluster group and can perform cloud testing.
Step 104, receiving a test task sent by a cloud cluster, and identifying cloud service information of the test task, so as to distribute the test task to the corresponding test cloud according to a preset task distribution strategy configuration.
The test task is a five-tuple predefined by the control center, the elements of which comprise: test code, test set, test method, operating environment and result presentation form. Its characteristics are defined by abstractly decomposing the test task; the decomposed five-tuple is an identifiable, schedulable index that is broadly compatible with different technical architectures and cloud resource architectures and supports custom integration and extension of distribution strategies.
Step 106, the working terminal pulls the test image from the private image repository, starts a task container as the test-task carrier, and pulls the test code repository through the private file management service interface, so as to construct a test environment; the test environment collects user input by exposing an external access interface.
Step 108, the working terminal receives user input for the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, transmits the test result back to the control center, and asynchronously reclaims the test environment.
Step 110, the control center displays the result according to the result presentation form.
In the above heterogeneous architecture cross-cloud testing method, a test task is abstractly decomposed to define its characteristics, and the decomposed five-tuple is an identifiable, schedulable index. The method is broadly compatible with different technical architectures and cloud resource architectures, provides custom integration and extension of distribution strategies, effectively connects different test cloud environments with users' test requirements, avoids the mismatch between the test resources allocated by a test platform and the test requirements, and can meet the operation and scheduling requirements of massive software test tasks on the Internet.
In one embodiment, the test code is the program code to be tested and test program code of the same language type, stored in a version library; the test set is a plurality of groups of text input arrays and expected output arrays; the test set is constructed without a corresponding interactive input-collection environment; the test method is generic output comparison; the operating environment comprises: an image definition, a running-host architecture requirement, and a runtime resource definition; the image definition is a container image equipped with the operating environment, the running-host architecture requirement is the host architecture on which the running task depends, and the runtime resource definition comprises a CPU upper limit, a memory upper limit and a running time upper limit.
For convenience of description, identifiers are used for the elements; the five-tuple of a test task T1 comprises:
TC: program code Code1 to be tested and test program code TestCode1 of the same language type, stored in version library Repo1;
TS: inputs and outputs are plain text, given as a plurality of groups of text input arrays Input1 and expected output arrays ExpectedOutput1, requiring no corresponding interactive input-collection environment;
TM: the test method is generic output comparison;
RE: the image definition is container image Image1 equipped with the operating environment of T1; the running-host architecture requirement is Architecture1; the runtime resources are defined as CPU upper limit CpuLimit1, memory upper limit MemLimit1 and running time upper limit TimeLimit1;
RP: the result presentation form is text comparison, displaying the difference between the actual running output of the test program and the expected output.
The tester completes the feature definition of T1 in the system in the form of a visual component.
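Continuing the sketch above, T1 could be instantiated and its generic output-comparison test method applied as follows; the identifiers mirror the labels in the text, while the sample inputs, numeric resource limits and the compare_outputs helper are illustrative assumptions.

```python
t1 = TestTask(
    test_code="git://private-repos/Repo1",          # holds Code1 and TestCode1
    test_set=[("1 2\n", "3\n"), ("4 5\n", "9\n")],  # placeholder Input1 / ExpectedOutput1 pairs
    test_method="generic-output-comparison",
    run_env=RunEnvironment(image="Image1", architecture="Architecture1",
                           cpu_limit=2.0, mem_limit_mb=2048, time_limit_s=600),
    result_presentation="text-diff",
)

def compare_outputs(actual: str, expected: str) -> bool:
    """TM for T1: generic output comparison, ignoring trailing whitespace."""
    return actual.rstrip() == expected.rstrip()
```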
In one embodiment, the control center starts a listening thread and listens for registration requests initiated by working terminals to the control center requesting to join the test cloud cluster group; the control center receives a registration request and performs authentication verification, discards the request if verification fails, and otherwise adds the cloud cluster to the test cloud cluster group.
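A minimal sketch of this registration flow, assuming token-based authentication over a plain socket; the port, the message format and the shared secret are assumptions, not details given in the patent.

```python
import hmac
import socket
import threading

REGISTERED_CLUSTERS: set[str] = set()
SHARED_SECRET = b"control-center-secret"  # assumed pre-shared credential

def verify_token(cluster_id: str, token: str) -> bool:
    expected = hmac.new(SHARED_SECRET, cluster_id.encode(), "sha256").hexdigest()
    return hmac.compare_digest(expected, token)

def listen_for_registrations(port: int = 9000) -> None:
    """Control-center listening thread: accept, authenticate, register or discard."""
    srv = socket.create_server(("0.0.0.0", port))
    while True:
        conn, _ = srv.accept()
        cluster_id, _, token = conn.recv(4096).decode().partition(":")
        if verify_token(cluster_id, token):
            REGISTERED_CLUSTERS.add(cluster_id)  # joins the test cloud cluster group
            conn.sendall(b"registered")
        else:
            conn.sendall(b"rejected")            # verification failed: discard
        conn.close()

threading.Thread(target=listen_for_registrations, daemon=True).start()
```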
In one embodiment, a test task is submitted to the control center for operation according to a load-based screening and scoring strategy; the load of each test cloud is collected and returned by a general monitoring collection component for containers and clusters called by the working terminal, and the control center queries in real time the cluster load conditions returned by the working terminals; all nodes of the test cloud cluster group are sorted according to load condition, and a node in a test cloud is determined as the distribution target according to the sorting result.
Specifically, the control center queries in real time the cluster load conditions returned by the working terminals, sorts all nodes of the test cloud cluster group, and selects node Node1 in cloud cluster Cluster1, whose idle rate is FreeRate1, as the distribution target.
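The load-based screening and scoring strategy might be sketched as below; the node-record shape (a "free_rate" idle metric per node) is an assumed rendering of the load data returned by the working terminals.

```python
def pick_least_loaded(clusters: dict[str, list[dict]]) -> tuple[str, str]:
    """Sort every node of every registered cluster by idle rate and return the
    (cluster, node) pair with the highest idle rate as the distribution target."""
    candidates = [
        (cluster, node["name"], node["free_rate"])
        for cluster, nodes in clusters.items()
        for node in nodes
    ]
    cluster, node, _ = max(candidates, key=lambda c: c[2])
    return cluster, node

# Example: returns ("Cluster1", "Node1")
pick_least_loaded({"Cluster1": [{"name": "Node1", "free_rate": 0.85},
                                {"name": "Node3", "free_rate": 0.40}]})
```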
In one embodiment, the working terminal pulls the test image from the private image repository, starts a task container as the test-task carrier, and pulls the test code repository through the private file management service interface, so as to construct a test environment; the test environment collects user input by exposing an external access interface.
Specifically, after the test task is distributed to Node1, the working terminal deployed in Cluster1 pulls test image Image1 from the private image repository, starts task container Container1 as the test-task carrier, pulls user code repository Repo1 and the groups of predefined text test inputs Input1 through the private version library management service interface, and builds a Jupyter Notebook test environment.
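Using the Docker SDK for Python, the terminal-side steps described here could look roughly like the following; the registry address, repository URL and port mapping are placeholders, and the image is assumed to ship with git and Jupyter preinstalled.

```python
import docker  # pip install docker

client = docker.from_env()

# Pull test image Image1 from the private image repository.
image = client.images.pull("registry.example.internal/test/image1", tag="latest")

# Start task container Container1 as the test-task carrier: clone Repo1 inside it
# and expose a Jupyter Notebook server as the external access interface.
container = client.containers.run(
    image.id,
    command=["bash", "-c",
             "git clone https://files.example.internal/Repo1 /work && "
             "jupyter notebook --ip=0.0.0.0 --no-browser --allow-root"],
    ports={"8888/tcp": None},     # None lets Docker pick a free host port
    mem_limit="2g",               # runtime resource caps from the RE element
    nano_cpus=2_000_000_000,      # 2 CPUs
    detach=True,
)
```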
In one embodiment, the working terminal runs the program to be tested in a test container environment to obtain its output, then runs the test program code, obtains a test result according to the model evaluation algorithm corresponding to the test method, stores the texts and pictures generated over the whole run into a file, transmits the file back to the control center, marks the started task container as to-be-reclaimed, and reclaims it asynchronously.
Specifically, the Cluster1 working terminal detects that the local environment has architecture Architecture1 and operating system System1, calls the language start command of T1 under the corresponding system, runs the user test program code TestCode1 in the selected test framework Framework1, runs the program code Code1 to be tested in Framework1 with the given test inputs, obtains the packaged test result Result1, transmits it back to the control center, and meanwhile marks Container1 as to-be-reclaimed, reclaiming it asynchronously.
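A simplified, terminal-side sketch of this run sequence, assuming the program under test is an ordinary executable driven by stdin/stdout; the command, the result schema and the time handling are illustrative, not prescribed by the patent.

```python
import subprocess
import time

def run_test(code_cmd: list[str], test_inputs: list[str],
             expected: list[str], time_limit_s: int) -> dict:
    """Run the program under test on each input, compare against the expected
    output, and package the result for return to the control center."""
    cases = []
    for stdin_text, want in zip(test_inputs, expected):
        start = time.monotonic()
        proc = subprocess.run(code_cmd, input=stdin_text, capture_output=True,
                              text=True, timeout=time_limit_s)  # TimeLimit1 cap
        cases.append({
            "passed": proc.stdout.rstrip() == want.rstrip(),
            "actual": proc.stdout,
            "seconds": round(time.monotonic() - start, 3),
        })
    return {"result": "pass" if all(c["passed"] for c in cases) else "fail",
            "cases": cases}

# The packaged dict is what would be transmitted back before Container1 is marked
# to-be-reclaimed, e.g. run_test(["python", "/work/main.py"], ["1 2\n"], ["3\n"], 600)
```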
In one embodiment, after receiving the returned test result, the control center extracts the texts and pictures from the actual output file and compares them respectively, so as to display the test result and the resource consumption statistics of the test process, and renders the pictures generated by the test program.
The beneficial effects of the invention are further illustrated by the following specific example:
Step 202, the control center starts a listening thread and listens for cloud cluster registration requests. A working terminal deployed on a specific test cloud initiates registration with the control center and requests to join the test cloud cluster group. The control center receives the registration request and performs authentication verification; it discards the request if verification fails, and otherwise adds the cluster to the test cloud cluster group.
Step 204, a typical artificial-intelligence test task T2 consists of the following five-tuple:
TC: program code Code2 to be tested and test program code TestCode2, stored in version library Repo2;
TS: the input form is a Jupyter Notebook interactive environment; the data set is stored in the version library; the expected output is a file containing texts and pictures, namely the program output obtained by running in the Jupyter Notebook;
TM: the test method is model evaluation algorithm Assess2, considering metrics such as model accuracy and error rate;
RE: the image definition is container image Image2 equipped with the operating environment of T2; the running-host architecture requirement is Architecture2; the runtime resources are defined as GPU upper limit GpuLimit2, CPU upper limit CpuLimit2, memory upper limit MemLimit2 and running time upper limit TimeLimit2;
RP: the result presentation comprises comparison of the text output generated by each step of program code Code2 and display comparison of the pictures generated by model training.
The tester completes the feature definition of T2 in the system in the form of a visual component.
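The model evaluation algorithm Assess2 is only named here, not specified; as a sketch under that assumption, an accuracy/error-rate style evaluation could look like this, with assess_model a hypothetical stand-in.

```python
def assess_model(predictions: list[int], labels: list[int]) -> dict[str, float]:
    """Hypothetical stand-in for Assess2: accuracy and error rate on a test set."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    return {"accuracy": accuracy, "error_rate": 1.0 - accuracy}

# Example: assess_model([1, 0, 1], [1, 1, 1]) gives accuracy 2/3, error rate 1/3.
```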
Step 206, the tester selects the custom-label-based distribution strategy as the test task's distribution strategy and submits the test task to the system for operation. The node label set selected for the test task is Tags2; optionally, Tags2 may include labels such as "is a GPU node" and "has SSD mounted", and the control center randomly selects a Cluster2 and Node2 satisfying the label screening conditions for distribution.
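The custom-label strategy reduces to a filter plus a random choice; the label keys below ("gpu", "ssd") are illustrative renderings of the labels mentioned in the text.

```python
import random

def pick_by_labels(nodes: list[dict], required: dict[str, bool]) -> dict:
    """Keep nodes whose labels satisfy every required entry (e.g. Tags2 =
    {'gpu': True, 'ssd': True}), then choose one at random, as the control
    center does when selecting Cluster2/Node2."""
    matching = [n for n in nodes
                if all(n.get("labels", {}).get(k) == v for k, v in required.items())]
    if not matching:
        raise LookupError("no node satisfies the label screening conditions")
    return random.choice(matching)

# Example:
# pick_by_labels([{"cluster": "Cluster2", "name": "Node2",
#                  "labels": {"gpu": True, "ssd": True}}],
#                {"gpu": True, "ssd": True})
```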
Step 208, after the test task is distributed to Node2, the working terminal deployed in Cluster2 pulls test image Image2 from the private image repository, starts task container Container2 as the test-task carrier, pulls test code repository Repo2 through the private version library management service interface, exposes the Jupyter Notebook environment in Container2 on an external access port, and collects test input.
Step 210, the tester runs program code Code2 to be tested in the Jupyter Notebook environment to obtain the actual output Output2 of the program under test, then runs test program code TestCode2, obtains test result Result2 according to the selected model evaluation algorithm Assess2, stores the texts and pictures generated over the whole run into a file, transmits the file back to the control center, marks Container2 as to-be-reclaimed, and reclaims it asynchronously.
Step 212, after receiving the returned test result Result2, the control center calls the test result processing module to extract the texts and pictures from the actual output file and compare them respectively, presents the test result to the tester, gives the resource consumption statistics of the test process, and renders the pictures generated by test program code Code2.
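On the control-center side, the text half of this comparison can be sketched with difflib; treating expected and actual outputs as plain-text files, and pictures as PNGs in a result directory, are assumptions of the sketch.

```python
import difflib
from pathlib import Path

def present_text_diff(expected_file: str, actual_file: str) -> str:
    """RP: render the difference between expected and actual program output."""
    expected = Path(expected_file).read_text().splitlines()
    actual = Path(actual_file).read_text().splitlines()
    return "\n".join(difflib.unified_diff(expected, actual,
                                          fromfile="expected", tofile="actual",
                                          lineterm=""))

def collect_pictures(result_dir: str) -> list[str]:
    """Pictures generated by the test program, to be rendered for comparison."""
    return sorted(str(p) for p in Path(result_dir).glob("*.png"))
```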
It should be understood that, although the steps in the flowcharts of FIG. 1 and FIG. 2 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in FIG. 1 and FIG. 2 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 3, a heterogeneous architecture cross-cloud testing system is provided, comprising: a control center 302, a test cloud cluster group 304, and working terminals 306, wherein:
the control center 302 starts a listening thread and listens for cloud cluster registration requests, so that cloud clusters can be registered in the test cloud cluster group 304;
the control center 302 receives a test task sent by a cloud cluster, and identifies cloud service information of the test task, so as to distribute the test task to the corresponding test cloud according to a preset task distribution strategy configuration; the test task is a five-tuple predefined by the control center, and the elements of the five-tuple comprise: test code, test set, test method, operating environment and result presentation form;
the working terminal 306 pulls the test image from the private image repository, starts a task container as the test-task carrier, and pulls the test code repository through the private file management service interface, so as to construct a test environment; the test environment collects user input by exposing an external access interface;
the working terminal 306 receives user input for the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, transmits the test result back to the control center, and asynchronously reclaims the test environment;
and the control center 302 displays the result according to the result presentation form.
In one embodiment, the test code is the program code to be tested and test program code of the same language type, stored in a version library; the test set is a plurality of groups of text input arrays and expected output arrays; the test set is constructed without a corresponding interactive input-collection environment; the test method is generic output comparison; the operating environment comprises: an image definition, a running-host architecture requirement, and a runtime resource definition; the image definition is a container image equipped with the operating environment, the running-host architecture requirement is the host architecture on which the running task depends, and the runtime resource definition comprises a CPU upper limit, a memory upper limit and a running time upper limit.
In one embodiment, the control center 302 starts a listening thread and listens for registration requests initiated by working terminals 306 requesting to join the test cloud cluster group; the control center 302 receives a registration request and performs authentication verification, discards the request if verification fails, and otherwise adds the cloud cluster to the test cloud cluster group.
In one embodiment, test tasks are submitted to the control center 302 for operation according to a load-based screening and scoring strategy; the load of each test cloud is collected and returned by a general monitoring collection component for containers and clusters called by the working terminal, and the control center queries in real time the cluster load conditions returned by the working terminals; all nodes of the test cloud cluster group are sorted according to load condition, and a node in a test cloud is determined as the distribution target according to the sorting result.
In one embodiment, the working terminal 306 pulls the test image from the private image repository, starts a task container as the test-task carrier, and pulls the test code repository through the private file management service interface, thereby constructing a Jupyter Notebook environment that exposes an external access port for collecting test input.
In one embodiment, the working terminal 306 runs the program to be tested in the Jupyter Notebook environment, obtains its output, runs the test program code, obtains a test result according to the model evaluation algorithm corresponding to the test method, stores the texts and pictures generated over the whole run into a file, transmits the file back to the control center 302, marks the started task container as to-be-reclaimed, and reclaims it asynchronously.
In one embodiment, after receiving the returned test result, the control center 302 extracts the texts and pictures from the actual output file and compares them respectively, so as to display the test result and the resource consumption statistics of the test process, and renders the pictures generated by the test program.
For specific limitations of the heterogeneous architecture cross-cloud testing system, reference may be made to the above limitations of the heterogeneous architecture cross-cloud testing method, which are not repeated here. The various modules in the above heterogeneous architecture cross-cloud testing system may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in FIG. 4. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data such as the private image repository and task containers. The network interface of the computer device is used to communicate with external terminals through a network connection. The computer program, when executed by the processor, implements a heterogeneous architecture cross-cloud testing method.
Those skilled in the art will appreciate that the structure shown in FIG. 4 is only a block diagram of part of the structure related to the present solution and does not constitute a limitation on the computer device to which the present solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory storing a computer program and a processor that implements the steps of the methods of the above embodiments when executing the computer program.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method of the above embodiments.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-volatile computer-readable storage medium which, when executed, may include the flows of the embodiments of the methods above. Any reference to memory, storage, database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as there is no contradiction in a combination of technical features, it should be considered within the scope of this specification.
The above embodiments merely express several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the protection scope of this application shall be determined by the appended claims.

Claims (9)

1. A heterogeneous architecture cross-cloud testing method, the method comprising:
the control center starts a listening thread and listens for cloud cluster registration requests, so that cloud clusters can be registered in the test cloud cluster group;
receiving a test task sent by a cloud cluster, and identifying cloud service information of the test task, so as to distribute the test task to the corresponding test cloud according to a preset task distribution strategy configuration; the test task is a five-tuple predefined by the control center, and the elements of the five-tuple comprise: test code, test set, test method, operating environment and result presentation form; the test code is the program code to be tested and test program code of the same language type, stored in a version library; the test set is a plurality of groups of input arrays and expected output arrays; the test set is constructed without a corresponding interactive input-collection environment; the test method is generic output comparison; the operating environment comprises: an image definition, a running-host architecture requirement, and a runtime resource definition; the image definition is a container image equipped with the operating environment, the running-host architecture requirement is the host architecture on which the running task depends, and the runtime resource definition comprises a CPU upper limit, a memory upper limit and a running time upper limit;
the working terminal pulls the test image from the private image repository, starts a task container as the test-task carrier, and pulls the test code repository through the private file management service interface, so as to construct a test environment; the test environment collects user input through the external access interface;
the working terminal receives user input for the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, transmits the test result back to the control center, and asynchronously reclaims the test environment;
the cloud cluster working terminal detects the architecture and operating system of the local environment, calls the language start command of the test task under the corresponding system, runs the user test program code, runs the selected test framework, runs the program to be tested in the selected test framework with the given test inputs to obtain a packaged test result, and transmits the test result back to the control center;
and the control center displays the result according to the result presentation form.
2. The method of claim 1, wherein the control center starting a listening thread and listening for cloud cluster registration requests, so that cloud clusters are registered in the test cloud cluster group, comprises:
the control center starts a listening thread and listens for registration requests initiated by working terminals to the control center requesting to join the test cloud cluster group;
and the control center receives the registration request and performs authentication verification, discards the request if verification fails, and otherwise adds the cloud cluster to the test cloud cluster group.
3. The method of claim 1, wherein distributing the test task to the corresponding test cloud according to the preset task distribution strategy configuration comprises:
submitting the test task to the control center for operation according to a load-based screening and scoring strategy;
wherein the load of each test cloud is collected and returned by a general monitoring collection component for containers and clusters called by the working terminal, and the control center queries in real time the cluster load conditions returned by the working terminals;
and sorting all nodes of the test cloud cluster group according to load condition, and determining a node in a test cloud as the distribution target according to the sorting result.
4. The method of claim 1, wherein the working terminal pulling the test image from the private image repository, starting the task container as the test-task carrier, and pulling the test code repository through the private file management service interface to construct a test environment, the test environment collecting user input by exposing an external access interface, comprises:
the working terminal pulls the test image from the private image repository, starts the task container as the test-task carrier, and pulls the test code repository through the private file management service interface, so as to construct the test environment; the test environment collects user input by exposing an external access interface.
5. The method of claim 4, wherein the working terminal receiving user input for the test task through the external access interface, performing the program test in the test environment, running the test code to obtain a test result, transmitting the test result back to the control center, and asynchronously reclaiming the test environment comprises:
the working terminal runs the program to be tested in the test container environment to obtain its output, then runs the test program code, obtains a test result according to the model evaluation algorithm corresponding to the test method, stores the texts and pictures generated over the whole run into a file, transmits the file back to the control center, marks the started task container as to-be-reclaimed, and reclaims it asynchronously.
6. The method according to any one of claims 1 to 5, wherein the control center displaying the result according to the result presentation form comprises:
after receiving the returned test result, the control center extracts the texts and pictures from the actual output file and compares them respectively, so as to display the test result and the resource consumption statistics of the test process, and renders the pictures generated by the test program.
7. A heterogeneous architecture cross-cloud testing system, the system comprising: a control center, a test cloud cluster group, and working terminals deployed on cloud clusters;
wherein the control center starts a listening thread and listens for cloud cluster registration requests, so that cloud clusters can be registered in the test cloud cluster group; receives a test task sent by a cloud cluster and identifies cloud service information of the test task, so as to distribute the test task to the corresponding test cloud according to a preset task distribution strategy configuration; the test task is a five-tuple predefined by the control center, and the elements of the five-tuple comprise: test code, test set, test method, operating environment and result presentation form; the test code is the program code to be tested and test program code of the same language type, stored in a version library; the test set is a plurality of groups of input arrays and expected output arrays; the test set is constructed without a corresponding interactive input-collection environment; the test method is generic output comparison; the operating environment comprises: an image definition, a running-host architecture requirement, and a runtime resource definition; the image definition is a container image equipped with the operating environment, the running-host architecture requirement is the host architecture on which the running task depends, and the runtime resource definition comprises a CPU upper limit, a memory upper limit and a running time upper limit;
the working terminal pulls the test image from the private image repository, starts a task container as the test-task carrier, and pulls the test code repository through the private file management service interface, so as to construct a test environment; the test environment collects user input through the external access interface; the working terminal receives user input for the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, transmits the test result back to the control center, and asynchronously reclaims the test environment;
the cloud cluster working terminal detects the architecture and operating system of the local environment, calls the language start command of the test task under the corresponding system, runs the user test program code, runs the selected test framework, runs the program to be tested in the selected test framework with the given test inputs to obtain a packaged test result, and transmits the test result back to the control center;
and the control center displays the result according to the result presentation form.
8. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
CN202210143296.7A 2022-02-16 2022-02-16 Heterogeneous architecture cross-cloud testing method, system and computer equipment Active CN114490419B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210143296.7A CN114490419B (en) 2022-02-16 2022-02-16 Heterogeneous architecture cross-cloud testing method, system and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210143296.7A CN114490419B (en) 2022-02-16 2022-02-16 Heterogeneous architecture cross-cloud testing method, system and computer equipment

Publications (2)

Publication Number Publication Date
CN114490419A CN114490419A (en) 2022-05-13
CN114490419B (en) 2024-02-13

Family

ID=81482182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210143296.7A Active CN114490419B (en) 2022-02-16 2022-02-16 Heterogeneous architecture cross-cloud testing method, system and computer equipment

Country Status (1)

Country Link
CN (1) CN114490419B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841298B (en) * 2022-07-06 2022-09-27 山东极视角科技有限公司 Method and device for training algorithm model, electronic equipment and storage medium
CN115297111B (en) * 2022-07-15 2023-10-24 东风汽车集团股份有限公司 System, method and medium for managing and submitting vulnerabilities of Internet of vehicles

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678133A (en) * 2013-12-18 2014-03-26 中国科学院深圳先进技术研究院 Task scheduling system for application software cloud testing
CN109271233A (en) * 2018-07-25 2019-01-25 上海数耕智能科技有限公司 The implementation method of Hadoop cluster is set up based on Kubernetes
CN109542791A (en) * 2018-11-27 2019-03-29 长沙智擎信息技术有限公司 A kind of program large-scale concurrent evaluating method based on container technique
CN111651357A (en) * 2020-06-03 2020-09-11 厦门力含信息技术服务有限公司 Software automation testing method based on cloud computing
CN112000567A (en) * 2019-12-04 2020-11-27 国网河北省电力有限公司 Regulation and control software test service method based on cloud platform
CN112711522A (en) * 2019-10-24 2021-04-27 中国科学院深圳先进技术研究院 Docker-based cloud testing method and system and electronic equipment
CN112860572A (en) * 2021-03-12 2021-05-28 网易(杭州)网络有限公司 Cloud testing method, device, system, medium and electronic equipment of mobile terminal
CN113568791A (en) * 2021-07-14 2021-10-29 麒麟软件有限公司 Automatic testing tool and method for server operating system based on multi-CPU architecture
CN113691583A (en) * 2021-07-15 2021-11-23 上海浦东发展银行股份有限公司 Blue-green deployment-based multimedia service system and method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8104049B2 (en) * 2007-08-02 2012-01-24 International Business Machines Corporation Accessing a compatible library for an executable
US11366744B2 (en) * 2017-04-07 2022-06-21 Microsoft Technology Licensing, Llc Partitioning and orchestrating infrastructure software deployments for safety and agility across diverse configurations and hardware types
US10635476B2 (en) * 2017-05-09 2020-04-28 Microsoft Technology Licensing, Llc Cloud architecture for automated testing
US20190188116A1 (en) * 2017-12-20 2019-06-20 10546658 Canada Inc. Automated software testing method and system
US10628290B2 (en) * 2018-01-30 2020-04-21 Red Hat, Inc. Generating an inner cloud environment within an outer cloud environment for testing a microservice application
US11314627B2 (en) * 2018-04-20 2022-04-26 Sap Se Test automation system for distributed heterogenous environments

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678133A (en) * 2013-12-18 2014-03-26 中国科学院深圳先进技术研究院 Task scheduling system for application software cloud testing
CN109271233A (en) * 2018-07-25 2019-01-25 上海数耕智能科技有限公司 The implementation method of Hadoop cluster is set up based on Kubernetes
CN109542791A (en) * 2018-11-27 2019-03-29 长沙智擎信息技术有限公司 A kind of program large-scale concurrent evaluating method based on container technique
CN112711522A (en) * 2019-10-24 2021-04-27 中国科学院深圳先进技术研究院 Docker-based cloud testing method and system and electronic equipment
CN112000567A (en) * 2019-12-04 2020-11-27 国网河北省电力有限公司 Regulation and control software test service method based on cloud platform
CN111651357A (en) * 2020-06-03 2020-09-11 厦门力含信息技术服务有限公司 Software automation testing method based on cloud computing
CN112860572A (en) * 2021-03-12 2021-05-28 网易(杭州)网络有限公司 Cloud testing method, device, system, medium and electronic equipment of mobile terminal
CN113568791A (en) * 2021-07-14 2021-10-29 麒麟软件有限公司 Automatic testing tool and method for server operating system based on multi-CPU architecture
CN113691583A (en) * 2021-07-15 2021-11-23 上海浦东发展银行股份有限公司 Blue-green deployment-based multimedia service system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
尼恩. Java High-Concurrency Core Programming: NIO, Netty, Redis, ZooKeeper, Vol. 1. China Machine Press, Beijing, 2021 (1st ed.), 3. *

Also Published As

Publication number Publication date
CN114490419A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN114490419B (en) Heterogeneous architecture cross-cloud testing method, system and computer equipment
US20180173808A1 (en) Intent and bot based query guidance
CN108492005B (en) Project data processing method and device, computer equipment and storage medium
CN111400189A (en) Code coverage rate monitoring method and device, electronic equipment and storage medium
CN111209310A (en) Service data processing method and device based on stream computing and computer equipment
CN111459621B (en) Cloud simulation integration and scheduling method and device, computer equipment and storage medium
CN107679198B (en) Information query method and device
CN113609168B (en) Data export method, device, terminal and readable storage medium
CN113111078B (en) Resource data processing method and device, computer equipment and storage medium
CN111444309A (en) System for learning graph
CN112579426B (en) Method and device for testing object to be tested
CN113542073A (en) Product testing method, system, program product and storage medium based on P2P
CN113434382A (en) Database performance monitoring method and device, electronic equipment and computer readable medium
CN111782688A (en) Request processing method, device and equipment based on big data analysis and storage medium
Eismann et al. Teastore: A micro-service reference application for cloud researchers
CN110457122B (en) Task processing method, task processing device and computer system
CN112308074A (en) Method and device for generating thumbnail
CN111813694B (en) Test method, test device, electronic equipment and readable storage medium
CN111340237B (en) Data processing and model running method, device and computer equipment
CN112835803B (en) Tool generation method, test data construction method, device, equipment and medium
CN117271352A (en) Data processing method, device, computer equipment and storage medium
CN117807446A (en) Parameter comparison method and device, storage medium and computer equipment
CN115357496A (en) Hardware resource application method, device, storage medium and equipment
CN114840435A (en) Method, device, equipment, storage medium and program product for determining data flow direction
CN116880983A (en) Task distribution method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant