CN114490419A - Cross-cloud testing method and system of heterogeneous architecture and computer equipment


Info

Publication number
CN114490419A
Authority
CN
China
Prior art keywords
test
task
cloud
control center
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210143296.7A
Other languages
Chinese (zh)
Other versions
CN114490419B (en)
Inventor
王威
黄井泉
尹刚
林露
喻银凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Zhijing Technology Co ltd
Original Assignee
Hunan Zhijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Zhijing Technology Co ltd filed Critical Hunan Zhijing Technology Co ltd
Priority to CN202210143296.7A
Publication of CN114490419A
Application granted
Publication of CN114490419B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3664Environments for testing or debugging software
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1095Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45562Creating, deleting, cloning virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application relates to a cross-cloud testing method and system for heterogeneous architectures and to computer equipment. The method comprises the following steps: receiving a test task sent by a cloud cluster, identifying the cloud service information of the test task, and distributing the test task to the corresponding test cloud according to a preset task distribution strategy configuration; the working terminal pulls a test image from a private image repository, starts a task container as the execution carrier of the test task, and pulls the test code repository through a private file management service interface, thereby constructing a test environment that provides an external access interface; the working terminal receives the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, returns the test result to the control center, and asynchronously reclaims the test environment; and the control center displays the result according to the result display form. The method can meet the execution and scheduling requirements of massive software testing tasks on the Internet.

Description

Cross-cloud testing method and system of heterogeneous architecture and computer equipment
Technical Field
The application relates to the technical field of software testing, and in particular to a cross-cloud testing method and system for heterogeneous architectures and to computer equipment.
Background
Software testing guarantees that software applications run normally and stably, and as applications increasingly move to the cloud, demand for cloud-based testing keeps growing. Traditional testing hands testers a set of physical equipment and a matching environment, which has inherent disadvantages: it cannot be reused or scaled, and because the equipment cannot be released promptly after a test finishes, it incurs extra resource cost. Compared with traditional testing, testing on the cloud can be created and deployed anytime and anywhere, can be reused, and can be released promptly, offering more flexibility and convenience. At the same time, different test tasks rely on different test architectures and therefore differ greatly in operating environment, resource requirements and operating mode, so cloud testing still faces considerable challenges.
In the face of these challenges, existing cloud testing methods, technologies and services are not yet mature; in particular, the environment resources allocated to test users often fail to match the test requirements accurately, so test tasks cannot be completed efficiently. Meanwhile, the services offered by cloud providers are comprehensive and support different technical architectures well, but each provider's resources, services and prices have their own characteristics and suit different software application scenarios, and the test platform products on the market do not combine the strengths of the cloud providers well. Therefore, in a cloud testing scenario, establishing an effective connection between test tasks and different cloud services, and making full use of the resource characteristics of each cloud service, has great practical value for accurately matching test tasks and controlling cost.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a cross-cloud testing method and system for heterogeneous architectures, together with computer equipment.
A method of cross-cloud testing of a heterogeneous architecture, the method comprising:
the control center starts a monitoring thread and monitors a cloud cluster registration request so as to register the cloud cluster in the test cloud cluster group;
receiving a test task sent by a cloud cluster, identifying cloud service information of the test task, and distributing the test task to a corresponding test cloud according to preset task distribution strategy configuration; wherein the test task is a quintuple predefined by the control center, and the elements in the quintuple include: testing codes, testing sets, testing methods, operating environments and result display forms;
the working terminal pulls a test image from a private image repository, starts a task container as the execution carrier of the test task, and pulls the test code repository through a private file management service interface, thereby constructing a test environment, wherein the test environment collects user input by providing an access interface to the outside;
the working terminal receives the user input of the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, returns the test result to the control center, and asynchronously reclaims the test environment;
and the control center displays the result according to the result display form.
In one embodiment, the test code consists of the program code to be tested and test program code of the same language type, stored in a version repository; the test set comprises several groups of input arrays and expected output arrays, and constructing the test set requires no corresponding interactive input-collection environment; the test method is a generic output comparison; the operating environment comprises an image definition, a host architecture requirement and runtime resource limits, where the image definition is a container image with the runtime environment installed, the host architecture requirement is the host architecture on which the task depends, and the runtime resource limits are a CPU upper limit, a memory upper limit and a running duration upper limit.
In one embodiment, the method further comprises the following steps: the control center starts a monitoring thread and monitors registration requests initiated by working terminals that ask to join the test cloud cluster group; the control center receives a registration request and performs an authentication check, discards the registration request if the check fails, and adds the cloud cluster to the test cloud cluster group if the check passes.
In one embodiment, the method further comprises the following steps: the test task is submitted to the control center for execution according to a load-based screening and scoring strategy; the load of each test cloud is collected and returned by the working terminal, which calls the general monitoring and collection components of the container and the cluster, and the control center queries in real time the cluster load returned by the working terminals; the nodes of the test cloud clusters are ranked by load, and a node of a test cloud is determined as the distribution target according to the ranking result.
In one embodiment, the method further comprises the following steps: the working terminal pulls the test image from the private image repository, starts a task container as the execution carrier of the test task, and pulls the test code repository through the private file management service interface, thereby constructing a test environment that collects user input through an externally provided access interface.
In one embodiment, the method further comprises the following steps: the working terminal runs the program to be tested in the Jupyter Notebook environment and obtains its output, then runs the test program code and performs the test according to the model evaluation algorithm corresponding to the test method to obtain a test result; the text and pictures generated over the whole run are saved to a file and returned to the control center, and the started task container is marked as pending reclamation and reclaimed asynchronously.
In one embodiment, the method further comprises the following steps: the working terminal runs the program to be tested in the test container environment and obtains its output, then runs the test program code and performs the test according to the model evaluation algorithm corresponding to the test method to obtain a test result; the text and pictures generated over the whole run are saved to a file and returned to the control center, and the started task container is marked as pending reclamation and reclaimed asynchronously.
A heterogeneous architecture cross-cloud test system, the system comprising:
the cloud cluster deployment system comprises a control center, a test cloud cluster group and a cloud cluster deployment working terminal;
the control center starts a monitoring thread and monitors a cloud cluster registration request so as to register the cloud cluster in the test cloud cluster group; receiving a test task sent by a cloud cluster, identifying cloud service information of the test task, and distributing the test task to a corresponding test cloud according to preset task distribution strategy configuration; wherein the test task is a quintuple predefined by the control center, and the elements in the quintuple include: testing codes, testing sets, testing methods, operating environments and result display forms;
the working terminal pulls a test image from a private image repository, starts a task container as the execution carrier of the test task, and pulls the test code repository through a private file management service interface, thereby constructing a test environment, wherein the test environment collects user input by providing an access interface to the outside; the working terminal receives the user input of the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, returns the test result to the control center, and asynchronously reclaims the test environment;
and the control center displays the result according to the result display form.
According to the cross-cloud testing method and system of the heterogeneous architecture, the computer equipment and the storage medium, the test task is abstracted and decomposed into a characteristic definition, and the resulting quintuple is an identifiable, schedulable index that is broadly compatible with different technical architectures and cloud resource architectures and supports custom integration and extension of the distribution strategy. Different test cloud environments are thereby effectively connected with users' test requirements, the mismatch between the test resources a test platform allocates and the test requirements is avoided, and the execution and scheduling requirements of massive software test tasks on the Internet can be met.
Drawings
FIG. 1 is a flow diagram that illustrates a cross-cloud testing method for heterogeneous architectures, according to an embodiment;
FIG. 2 is a flowchart illustrating a cross-cloud testing method for heterogeneous architectures according to another embodiment;
FIG. 3 is a block diagram of a cross-cloud test system for heterogeneous architectures in one embodiment;
FIG. 4 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a cross-cloud testing method for heterogeneous architecture is provided, which includes the following steps:
step 102, the control center starts a monitoring thread and monitors a cloud cluster registration request so that the cloud cluster is registered in the test cloud cluster group.
In this embodiment, the test cloud cluster group includes a large number of test cloud clusters, the work terminals deployed on the cloud clusters send registration requests, and after authentication is successful, the cloud clusters can join the test cloud cluster group to perform cloud test.
And 104, receiving the test tasks sent by the cloud cluster, identifying the cloud service information of the test tasks, and distributing the test tasks to corresponding test clouds according to preset task distribution strategy configuration.
The test task is a quintuple predefined by the control center, whose elements comprise the test code, the test set, the test method, the operating environment and the result display form. The test task is abstracted and decomposed into this characteristic definition; the resulting quintuple is an identifiable, schedulable index that is broadly compatible with different technical architectures and cloud resource architectures and supports custom integration and extension of the distribution strategy.
And 106, the working terminal pulls the test image from the private image repository, starts a task container as the execution carrier of the test task, and pulls the test code repository through the private file management service interface, thereby constructing a test environment that collects user input by providing an access interface to the outside.
And step 108, the working terminal receives the user input of the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, returns the test result to the control center, and asynchronously reclaims the test environment.
And step 110, the control center displays the result according to the result display form.
In the cross-cloud testing method of the heterogeneous architecture, the test task is abstracted and decomposed into a characteristic definition, and the resulting quintuple is an identifiable, schedulable index that is broadly compatible with different technical architectures and cloud resource architectures and supports custom integration and extension of the distribution strategy. Different test cloud environments are thereby effectively connected with users' test requirements, the mismatch between the test resources a test platform allocates and the test requirements is avoided, and the execution and scheduling requirements of massive software test tasks on the Internet can be met.
In one embodiment, the test code consists of the program code to be tested and test program code of the same language type, stored in a version repository; the test set consists of several groups of text input arrays and expected output arrays, and constructing the test set requires no corresponding interactive input-collection environment; the test method is a generic output comparison; the operating environment comprises an image definition, a host architecture requirement and runtime resource limits, where the image definition is a container image with the runtime environment installed, the host architecture requirement is the host architecture on which the task depends, and the runtime resource limits are a CPU upper limit, a memory upper limit and a running duration upper limit.
For convenience of description, labels are used for identification; for a test task T1, the quintuple comprises:
TC: the program code Code1 to be tested and the test program code TestCode1 of the same language type, stored in the version repository Repo1;
TS: the input and output are plain text, namely several groups of text input arrays Input1 and expected output arrays ExpectedOutput1, and no corresponding interactive input-collection environment is needed;
TM: the test method is a generic output comparison;
RE: the image definition is a container image Image1 with the runtime environment of T1 installed, the host architecture requirement is Architecture1, and the runtime resource limits are the CPU upper limit CpuLimit1, the memory upper limit MemLimit1 and the running duration upper limit TimeLimit1;
RP: the result display form is a text comparison that shows the difference between the actual output of the test program run and the expected output.
The feature definition for T1 is done by the tester in the system in the form of a visualization component.
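To make the characteristic definition concrete, the following is a minimal sketch, in Python, of how such a quintuple could be represented as a schedulable record on the control center; the class and field names (TestTask, RunEnvironment, and so on) are illustrative assumptions rather than identifiers taken from the patent:

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class RunEnvironment:
        image: str           # container image with the runtime environment installed
        architecture: str    # host architecture the task depends on
        cpu_limit: float     # CPU upper limit (cores)
        mem_limit_mb: int    # memory upper limit
        time_limit_s: int    # running-duration upper limit

    @dataclass
    class TestTask:
        """The quintuple: test code, test set, test method, operating environment, result display form."""
        test_code: Dict[str, str]        # TC: repository and code identifiers
        test_set: List[Dict[str, str]]   # TS: input / expected-output pairs
        test_method: str                 # TM: e.g. "output-comparison"
        run_env: RunEnvironment          # RE
        result_display: str              # RP: e.g. "text-diff"

    # Hypothetical instantiation of the T1 example described above
    t1 = TestTask(
        test_code={"repo": "Repo1", "code": "Code1", "test_code": "TestCode1"},
        test_set=[{"input": "Input1", "expected": "ExpectedOutput1"}],
        test_method="output-comparison",
        run_env=RunEnvironment(image="Image1", architecture="Architecture1",
                               cpu_limit=2.0, mem_limit_mb=2048, time_limit_s=600),
        result_display="text-diff",
    )

Because every element is an explicit, machine-readable field, the control center can inspect the quintuple directly when applying a distribution strategy.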
In one embodiment, the control center starts a monitoring thread and monitors registration requests initiated by working terminals that ask to join the test cloud cluster group; the control center receives a registration request and performs an authentication check, discards the registration request if the check fails, and adds the cloud cluster to the test cloud cluster group if the check passes.
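As an illustration only, the monitoring thread and authentication check could be sketched in Python roughly as follows; the pre-shared token scheme, port number and message format are assumptions of the sketch and are not prescribed by the method:

    import json
    import socket
    import threading

    REGISTERED_CLUSTERS = {}                            # cluster_id -> address of its working terminal
    TRUSTED_TOKENS = {"Cluster1": "example-token"}      # hypothetical pre-shared credentials

    def handle_registration(conn: socket.socket, addr) -> None:
        request = json.loads(conn.recv(4096).decode("utf-8"))
        cluster_id, token = request.get("cluster_id"), request.get("token")
        if TRUSTED_TOKENS.get(cluster_id) != token:
            conn.close()                                # authentication failed: discard the request
            return
        REGISTERED_CLUSTERS[cluster_id] = addr          # passed: join the test cloud cluster group
        conn.sendall(b'{"status": "registered"}')
        conn.close()

    def listen_for_registrations(port: int = 9000) -> None:
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("0.0.0.0", port))
        server.listen()
        while True:
            conn, addr = server.accept()
            threading.Thread(target=handle_registration, args=(conn, addr), daemon=True).start()

    # The control center would run this in its own monitoring thread, e.g.:
    # threading.Thread(target=listen_for_registrations, daemon=True).start()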
In one embodiment, the test task is submitted to the control center for execution according to a load-based screening and scoring strategy; the load of each test cloud is collected and returned by the working terminal, which calls the general monitoring and collection components of the container and the cluster, and the control center queries in real time the cluster load returned by the working terminals; the nodes of the test cloud clusters are ranked by load, and a node of a test cloud is determined as the distribution target according to the ranking result.
Specifically, the control center queries in real time the cluster load returned by the working terminals, ranks the nodes of the test cloud clusters, and selects node Node1 in cloud cluster Cluster1, whose idle rate is FreeRate1, as the distribution target.
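A minimal sketch of this load-based screening and scoring step is given below, assuming each working terminal reports a per-node idle rate; the field names and the screening threshold are illustrative assumptions:

    from typing import Dict, List, Tuple

    def pick_target_node(cluster_loads: Dict[str, List[dict]],
                         min_free_rate: float = 0.2) -> Tuple[str, str]:
        """Rank the nodes of all registered test clouds by idle rate and
        return (cluster, node) for the least-loaded eligible node."""
        candidates = []
        for cluster, nodes in cluster_loads.items():
            for node in nodes:
                if node["free_rate"] >= min_free_rate:          # screening
                    candidates.append((node["free_rate"], cluster, node["name"]))
        if not candidates:
            raise RuntimeError("no node satisfies the load screening condition")
        candidates.sort(reverse=True)                            # scoring: highest idle rate first
        _, cluster, node = candidates[0]
        return cluster, node

    # pick_target_node({"Cluster1": [{"name": "Node1", "free_rate": 0.85}]})
    # -> ("Cluster1", "Node1")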
In one embodiment, the working terminal pulls the test image from the private image repository, starts a task container as the execution carrier of the test task, and pulls the test code repository through the private file management service interface, thereby constructing a test environment that collects user input through an externally provided access interface.
Specifically, after the test task is distributed to Node1, the working terminal deployed in Cluster1 pulls the test image Image1 from the private image repository, starts the task container Container1 as the execution carrier of the test task, then pulls the user code repository Repo1 and the several groups of predefined text test inputs Input1 through the private version repository management service interface, and constructs a Jupyter Notebook test environment.
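For illustration, the working-terminal side of this step could be sketched with the Docker SDK for Python roughly as follows; the registry address, repository URL, port mapping and the use of Docker and Jupyter themselves are assumptions of the sketch, since the method does not prescribe a particular container runtime:

    import subprocess
    import docker   # Docker SDK for Python (pip install docker)

    REGISTRY = "registry.internal.example:5000"   # hypothetical private image repository

    def start_test_environment(image: str, repo_url: str, workdir: str = "/tmp/Repo1") -> str:
        client = docker.from_env()

        # Pull the test image from the private image repository
        client.images.pull(f"{REGISTRY}/{image}")

        # Pull the test code repository through the (hypothetical) private file management service
        subprocess.run(["git", "clone", repo_url, workdir], check=True)

        # Start the task container as the execution carrier and expose an access interface
        container = client.containers.run(
            f"{REGISTRY}/{image}",
            command="jupyter notebook --ip=0.0.0.0 --no-browser",
            volumes={workdir: {"bind": "/workspace", "mode": "rw"}},
            ports={"8888/tcp": 30001},            # externally provided access interface
            detach=True,
        )
        return container.id

    # start_test_environment("Image1", "https://git.internal.example/Repo1.git")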
In one embodiment, the working terminal runs the program to be tested in the test container environment and obtains its output, then runs the test program code and performs the test according to the model evaluation algorithm corresponding to the test method to obtain a test result; the text and pictures generated over the whole run are saved to a file and returned to the control center, and the started task container is marked as pending reclamation and reclaimed asynchronously.
Specifically, the working terminal of Cluster1 detects that the local environment is Architecture1 with operating system OperatingSystem1, calls the start command of the T1 language under that system, runs the user test program code TestCode1 within the selected test framework Framework1 against the program code Code1 to be tested, supplies the test input, obtains and packages the test result Result1 and returns it to the control center; at the same time Container1 is marked as pending reclamation and reclaimed asynchronously.
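The generic output-comparison test method used for T1 can be sketched as follows; this is a simplified illustration that assumes plain-text inputs and outputs and a hypothetical command line for the program under test:

    import subprocess
    from typing import Dict, List

    def run_output_comparison(cmd: List[str], cases: List[Dict[str, str]],
                              time_limit_s: int = 600) -> Dict:
        """Run the program under test once per case and compare actual output with the expectation."""
        case_results = []
        for case in cases:
            proc = subprocess.run(cmd, input=case["input"], capture_output=True,
                                  text=True, timeout=time_limit_s)
            actual = proc.stdout.strip()
            case_results.append({
                "input": case["input"],
                "expected": case["expected"],
                "actual": actual,
                "passed": actual == case["expected"].strip(),
            })
        return {"passed": all(r["passed"] for r in case_results), "cases": case_results}

    # run_output_comparison(["python", "Code1.py"], [{"input": "1 2\n", "expected": "3"}])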
In one embodiment, after receiving the returned test result, the control center extracts the text and the pictures from the actual output file and compares each against the expectation, so as to display the result of the current test and the resource consumption statistics of the test process, and renders and displays the pictures generated by the test program.
The following is a specific example to further illustrate the beneficial effects of the present invention:
step 202, the control center starts a monitoring thread and monitors a cloud cluster registration request. And the working terminal is deployed to a specific testing cloud end, initiates registration to the control center and requests to join the testing cloud cluster. And the control center receives the registration request, performs authentication verification, does not discard the registration request, and adds the cluster into the test cloud cluster group if the registration request is passed.
Step 204: a typical artificial-intelligence test task T2 consists of the following five elements:
TC: the program code Code2 to be tested and the test program code TestCode2, stored in the version repository Repo2;
TS: the input takes the form of a Jupyter Notebook interactive environment, the data set is stored in the version repository, and the expected output is a file containing text and pictures, namely the program output expected from running in the Jupyter Notebook;
TM: the test method is the model evaluation algorithm Assess2, which considers metrics such as model accuracy and error rate;
RE: the image definition is a container image Image2 with the runtime environment of T2 installed, the host architecture requirement is Architecture2, and the runtime resource limits are the GPU upper limit GPULimit2, the CPU upper limit CpuLimit2, the memory upper limit MemLimit2 and the running duration upper limit TimeLimit2;
RP: the result display includes a comparison of the text output generated at each step of Code2 and a display comparison of the pictures generated by model training.
The feature definition for T2 is done by the tester in the system in the form of a visualization component.
Step 206: the tester selects a distribution strategy based on custom labels as the test task distribution strategy and submits the test task to the system for execution. The node labels selected for the test task are Tags2; optionally, Tags2 may include labels such as "is a GPU node" and "has an SSD mounted". The control center randomly selects a cluster Cluster2 and a node Node2 that satisfy the label screening condition for distribution, as sketched below.
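A minimal sketch of this label-based distribution strategy, assuming node metadata is kept as a simple label dictionary (the label keys shown are illustrative assumptions):

    import random
    from typing import Dict, List, Tuple

    def pick_node_by_labels(clusters: Dict[str, List[dict]],
                            required_labels: Dict[str, str]) -> Tuple[str, str]:
        """Randomly pick one (cluster, node) among those whose labels satisfy the filter."""
        matching = [
            (cluster, node["name"])
            for cluster, nodes in clusters.items()
            for node in nodes
            if all(node.get("labels", {}).get(k) == v for k, v in required_labels.items())
        ]
        if not matching:
            raise RuntimeError("no node satisfies the label screening condition")
        return random.choice(matching)

    # pick_node_by_labels(
    #     {"Cluster2": [{"name": "Node2", "labels": {"gpu": "true", "ssd": "true"}}]},
    #     {"gpu": "true", "ssd": "true"})   # -> ("Cluster2", "Node2")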
Step 208: after the test task is distributed to Node2, the working terminal deployed in Cluster2 pulls the test image Image2 from the private image repository, starts the task container Container2 as the execution carrier of the test task, then pulls the test code repository Repo2 through the private version repository management service interface, exposes the Jupyter Notebook environment inside Container2 on an external access port, and collects the test input.
Step 210: the tester runs the program code Code2 to be tested in the Jupyter Notebook environment and obtains its output ActualOutput2, then runs the test program code TestCode2 and performs the test according to the selected model evaluation algorithm Assess2 to obtain the test result Result2; the text and pictures generated over the whole run are saved to a file and returned to the control center, and Container2 is marked as pending reclamation and reclaimed asynchronously.
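As an illustration of this model-evaluation style of test method, the following sketch computes the accuracy and error rate of a set of predictions; the metric set and the data layout are assumptions, since Assess2 itself is not specified further:

    from typing import Dict, List

    def assess_model(predictions: List[int], labels: List[int]) -> Dict[str, float]:
        """Hypothetical Assess2-style evaluation based on accuracy and error rate."""
        if not labels or len(predictions) != len(labels):
            raise ValueError("predictions and labels must be non-empty and the same length")
        correct = sum(p == y for p, y in zip(predictions, labels))
        accuracy = correct / len(labels)
        return {"accuracy": accuracy, "error_rate": 1.0 - accuracy}

    # assess_model([1, 0, 1, 1], [1, 0, 0, 1]) -> {"accuracy": 0.75, "error_rate": 0.25}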
Step 212: after receiving the returned test result Result2, the control center calls the test result processing module to extract the text and the pictures from the actual output file and compare each against the expectation, presents the test result to the tester together with the resource consumption statistics of the test process, and renders and displays the pictures generated by the test program Code2.
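A simplified sketch of how such a test result processing module might unpack the returned output and assemble the data for display; the file layout (output.txt, resource_stats.json, PNG pictures) is an assumption made for the sketch:

    import difflib
    import json
    from pathlib import Path
    from typing import Dict

    def process_result_archive(result_dir: str, expected_text: str) -> Dict:
        """Extract text and pictures from the returned output and assemble the display data."""
        out = Path(result_dir)
        actual_text = (out / "output.txt").read_text(encoding="utf-8")

        text_diff = list(difflib.unified_diff(
            expected_text.splitlines(), actual_text.splitlines(),
            fromfile="expected", tofile="actual", lineterm=""))
        pictures = sorted(str(p) for p in out.glob("*.png"))          # rendered by the front end
        stats = json.loads((out / "resource_stats.json").read_text(encoding="utf-8"))

        return {
            "passed": not text_diff,
            "text_diff": text_diff,
            "pictures": pictures,
            "resource_consumption": stats,     # e.g. CPU, memory, GPU, duration
        }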
It should be understood that, although the steps in the flowcharts of fig. 1 and fig. 2 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict order restriction on the execution of these steps, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1 and fig. 2 may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be performed at different times, and the sub-steps or stages need not be performed sequentially; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 3, there is provided a cross-cloud test system of heterogeneous architecture, comprising: control center 302, test cloud cluster group 304 and work terminal 306, wherein:
the control center 302 starts a monitoring thread and monitors a cloud cluster registration request so that the cloud cluster is registered in the test cloud cluster group 304;
the control center 302 receives a test task sent by a cloud cluster, identifies cloud service information of the test task, and distributes the test task to a corresponding test cloud according to preset task distribution strategy configuration; wherein the test task is a quintuple predefined by the control center, and the elements in the quintuple include: testing codes, testing sets, testing methods, operating environments and result display forms;
the working terminal 306 pulls the test image from the private image repository, starts a task container as the execution carrier of the test task, and pulls the test code repository through the private file management service interface, thereby constructing a test environment, wherein the test environment collects user input by providing an access interface to the outside;
the working terminal 306 receives the user input of the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, returns the test result to the control center, and asynchronously reclaims the test environment;
the control center 302 displays the result according to the result display form.
In one embodiment, the test code consists of the program code to be tested and test program code of the same language type, stored in a version repository; the test set comprises several groups of text input arrays and expected output arrays, and constructing the test set requires no corresponding interactive input-collection environment; the test method is a generic output comparison; the operating environment comprises an image definition, a host architecture requirement and runtime resource limits, where the image definition is a container image with the runtime environment installed, the host architecture requirement is the host architecture on which the task depends, and the runtime resource limits are a CPU upper limit, a memory upper limit and a running duration upper limit.
In one embodiment, the control center 302 starts a monitoring thread and monitors registration requests initiated by working terminals 306 that ask to join the test cloud cluster group; the control center 302 receives a registration request and performs an authentication check, discards the registration request if the check fails, and adds the cloud cluster to the test cloud cluster group if the check passes.
In one embodiment, the control center 302 distributes a submitted test task according to a load-based screening and scoring strategy; the load of each test cloud is collected and returned by the working terminal, which calls the general monitoring and collection components of the container and the cluster, and the control center queries in real time the cluster load returned by the working terminals; the nodes of the test cloud clusters are ranked by load, and a node of a test cloud is determined as the distribution target according to the ranking result.
In one embodiment, the working terminal 306 pulls the test image from the private image repository, starts a task container as the execution carrier of the test task, and pulls the test code repository through the private file management service interface, thereby constructing a Jupyter Notebook environment whose external access port is exposed to collect the input of the test task.
In one embodiment, the working terminal 306 runs the program to be tested in the Jupyter Notebook environment and obtains its output, then runs the test program code and performs the test according to the model evaluation algorithm corresponding to the test method to obtain a test result; the text and pictures generated over the whole run are saved to a file and returned to the control center 302, and the started task container is marked as pending reclamation and reclaimed asynchronously.
In one embodiment, after receiving the returned test result, the control center 302 extracts the text and the pictures from the actual output file and compares each against the expectation, so as to display the result of the current test and the resource consumption statistics of the test process, and renders and displays the pictures generated by the test program.
For specific limitations of the cross-cloud test system of the heterogeneous architecture, reference may be made to the above limitations of the cross-cloud test method of the heterogeneous architecture, which are not described herein again. The modules in the cross-cloud test system with the heterogeneous architecture can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, whose internal structure may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data such as the private image repository and the task containers. The network interface of the computer device is used to communicate with external terminals through a network connection. The computer program, when executed by the processor, implements a cross-cloud testing method of a heterogeneous architecture.
Those skilled in the art will appreciate that the structure shown in fig. 4 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the method in the above embodiments when the processor executes the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method in the above-mentioned embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these are all within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A cross-cloud testing method for heterogeneous architectures, the method comprising:
the control center starts a monitoring thread and monitors a cloud cluster registration request so as to register the cloud cluster in the test cloud cluster group;
receiving a test task sent by a cloud cluster, identifying cloud service information of the test task, and distributing the test task to a corresponding test cloud according to preset task distribution strategy configuration; wherein the test task is a quintuple predefined by the control center, and the elements in the quintuple include: testing codes, testing sets, testing methods, operating environments and result display forms;
the working terminal pulls a test image from a private image repository, starts a task container as the execution carrier of the test task, and pulls the test code repository through a private file management service interface, thereby constructing a test environment, wherein the test environment collects user input by providing an access interface to the outside;
the working terminal receives the user input of the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, returns the test result to the control center, and asynchronously reclaims the test environment;
and the control center displays the result according to the result display form.
2. The method of claim 1, wherein the test code consists of the program code to be tested and test program code of the same language type, stored in a version repository;
the test set comprises several groups of input arrays and expected output arrays, and constructing the test set requires no corresponding interactive input-collection environment;
the test method is a generic output comparison;
the operating environment comprises an image definition, a host architecture requirement and runtime resource limits, wherein the image definition is a container image with the runtime environment installed, the host architecture requirement is the host architecture on which the task depends, and the runtime resource limits are a CPU upper limit, a memory upper limit and a running duration upper limit.
3. The method of claim 1, wherein the control center starts a listening thread and listens for a cloud cluster registration request to register a cloud cluster in a test cloud cluster group, and wherein the method comprises:
the control center starts a monitoring thread, monitors a registration request initiated by the working terminal to the control center so as to request to join the test cloud cluster group;
and the control center receives the registration request and performs an authentication check, discards the registration request if the check fails, and adds the cloud cluster to the test cloud cluster group if the check passes.
4. The method of claim 1, wherein distributing the test task to a corresponding test cloud according to a preset task distribution policy configuration comprises:
the test task is submitted to the control center for execution according to a load-based screening and scoring strategy;
the load of each test cloud is collected and returned by the working terminal, which calls the general monitoring and collection components of the container and the cluster, and the control center queries in real time the cluster load returned by the working terminals;
and the nodes of the test cloud clusters are ranked by load, and a node of a test cloud is determined as the distribution target according to the ranking result.
5. The method of claim 1, wherein the working terminal pulling a test image from the private image repository, starting a task container as the execution carrier of the test task, and pulling the test code repository through the private file management service interface, thereby constructing a test environment that collects user input by providing an access interface to the outside, comprises:
the working terminal pulls the test image from the private image repository, starts a task container as the execution carrier of the test task, and pulls the test code repository through the private file management service interface, thereby constructing a test environment that collects user input through an externally provided access interface.
6. The method of claim 5, wherein the working terminal receiving the test task through the external access interface, performing the program test in the test environment, running the test code to obtain a test result, returning the test result to the control center, and asynchronously reclaiming the test environment, comprises:
the working terminal runs the program to be tested in the test container environment and obtains its output, then runs the test program code and performs the test according to the model evaluation algorithm corresponding to the test method to obtain a test result; the text and pictures generated over the whole run are saved to a file and returned to the control center, and the started task container is marked as pending reclamation and reclaimed asynchronously.
7. The method according to any one of claims 1 to 6, wherein the control center displaying the result according to the result display form comprises:
after receiving the returned test result, the control center extracts the text and the pictures from the actual output file and compares each against the expectation, so as to display the result of the current test and the resource consumption statistics of the test process, and renders and displays the pictures generated by the test program.
8. A heterogeneous architecture cross-cloud test system, the system comprising: the cloud cluster deployment system comprises a control center, a test cloud cluster group and a cloud cluster deployment working terminal;
the control center starts a monitoring thread and monitors a cloud cluster registration request so as to register the cloud cluster in the test cloud cluster group; receiving a test task sent by a cloud cluster, identifying cloud service information of the test task, and distributing the test task to a corresponding test cloud according to preset task distribution strategy configuration; wherein the test task is a quintuple predefined by the control center, and the elements in the quintuple include: testing codes, testing sets, testing methods, operating environments and result display forms;
the working terminal pulls a test image from a private image repository, starts a task container as the execution carrier of the test task, and pulls the test code repository through a private file management service interface, thereby constructing a test environment, wherein the test environment collects user input by providing an access interface to the outside; receives the user input of the test task through the external access interface, performs the program test in the test environment, runs the test code to obtain a test result, returns the test result to the control center, and asynchronously reclaims the test environment;
and the control center displays the result according to the result display form.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202210143296.7A 2022-02-16 2022-02-16 Heterogeneous architecture cross-cloud testing method, system and computer equipment Active CN114490419B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210143296.7A CN114490419B (en) 2022-02-16 2022-02-16 Heterogeneous architecture cross-cloud testing method, system and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210143296.7A CN114490419B (en) 2022-02-16 2022-02-16 Heterogeneous architecture cross-cloud testing method, system and computer equipment

Publications (2)

Publication Number Publication Date
CN114490419A true CN114490419A (en) 2022-05-13
CN114490419B CN114490419B (en) 2024-02-13

Family

ID=81482182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210143296.7A Active CN114490419B (en) 2022-02-16 2022-02-16 Heterogeneous architecture cross-cloud testing method, system and computer equipment

Country Status (1)

Country Link
CN (1) CN114490419B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841298A (en) * 2022-07-06 2022-08-02 山东极视角科技有限公司 Method and device for training algorithm model, electronic equipment and storage medium
CN115297111A (en) * 2022-07-15 2022-11-04 东风汽车集团股份有限公司 System, method and medium for vulnerability management and submission of Internet of vehicles

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090037933A1 (en) * 2007-08-02 2009-02-05 Shajith Chandran Method and apparatus for accessing a compatible library for an executable
CN103678133A (en) * 2013-12-18 2014-03-26 中国科学院深圳先进技术研究院 Task scheduling system for application software cloud testing
US20180293152A1 (en) * 2017-04-07 2018-10-11 Microsoft Technology Licensing, Llc Partitioning and orchestrating infrastructure software deployments for safety and agility across diverse configurations and hardware types
US20180329788A1 (en) * 2017-05-09 2018-11-15 Microsoft Technology Licensing, Llc Cloud Architecture for Automated Testing
CN109271233A (en) * 2018-07-25 2019-01-25 上海数耕智能科技有限公司 The implementation method of Hadoop cluster is set up based on Kubernetes
CN109542791A (en) * 2018-11-27 2019-03-29 长沙智擎信息技术有限公司 A kind of program large-scale concurrent evaluating method based on container technique
US20190188116A1 (en) * 2017-12-20 2019-06-20 10546658 Canada Inc. Automated software testing method and system
US20190235993A1 (en) * 2018-01-30 2019-08-01 Red Hat, Inc. Generating an inner cloud environment within an outer cloud environment for testing a microservice application
US20190324897A1 (en) * 2018-04-20 2019-10-24 Sap Se Test automation system for distributed heterogenous environments
CN111651357A (en) * 2020-06-03 2020-09-11 厦门力含信息技术服务有限公司 Software automation testing method based on cloud computing
CN112000567A (en) * 2019-12-04 2020-11-27 国网河北省电力有限公司 Regulation and control software test service method based on cloud platform
CN112711522A (en) * 2019-10-24 2021-04-27 中国科学院深圳先进技术研究院 Docker-based cloud testing method and system and electronic equipment
CN112860572A (en) * 2021-03-12 2021-05-28 网易(杭州)网络有限公司 Cloud testing method, device, system, medium and electronic equipment of mobile terminal
CN113568791A (en) * 2021-07-14 2021-10-29 麒麟软件有限公司 Automatic testing tool and method for server operating system based on multi-CPU architecture
CN113691583A (en) * 2021-07-15 2021-11-23 上海浦东发展银行股份有限公司 Blue-green deployment-based multimedia service system and method

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090037933A1 (en) * 2007-08-02 2009-02-05 Shajith Chandran Method and apparatus for accessing a compatible library for an executable
CN103678133A (en) * 2013-12-18 2014-03-26 中国科学院深圳先进技术研究院 Task scheduling system for application software cloud testing
US20180293152A1 (en) * 2017-04-07 2018-10-11 Microsoft Technology Licensing, Llc Partitioning and orchestrating infrastructure software deployments for safety and agility across diverse configurations and hardware types
US20180329788A1 (en) * 2017-05-09 2018-11-15 Microsoft Technology Licensing, Llc Cloud Architecture for Automated Testing
US20190188116A1 (en) * 2017-12-20 2019-06-20 10546658 Canada Inc. Automated software testing method and system
US20190235993A1 (en) * 2018-01-30 2019-08-01 Red Hat, Inc. Generating an inner cloud environment within an outer cloud environment for testing a microservice application
US20190324897A1 (en) * 2018-04-20 2019-10-24 Sap Se Test automation system for distributed heterogenous environments
CN109271233A (en) * 2018-07-25 2019-01-25 上海数耕智能科技有限公司 The implementation method of Hadoop cluster is set up based on Kubernetes
CN109542791A (en) * 2018-11-27 2019-03-29 长沙智擎信息技术有限公司 A kind of program large-scale concurrent evaluating method based on container technique
CN112711522A (en) * 2019-10-24 2021-04-27 中国科学院深圳先进技术研究院 Docker-based cloud testing method and system and electronic equipment
CN112000567A (en) * 2019-12-04 2020-11-27 国网河北省电力有限公司 Regulation and control software test service method based on cloud platform
CN111651357A (en) * 2020-06-03 2020-09-11 厦门力含信息技术服务有限公司 Software automation testing method based on cloud computing
CN112860572A (en) * 2021-03-12 2021-05-28 网易(杭州)网络有限公司 Cloud testing method, device, system, medium and electronic equipment of mobile terminal
CN113568791A (en) * 2021-07-14 2021-10-29 麒麟软件有限公司 Automatic testing tool and method for server operating system based on multi-CPU architecture
CN113691583A (en) * 2021-07-15 2021-11-23 上海浦东发展银行股份有限公司 Blue-green deployment-based multimedia service system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Nien: "Java High-Concurrency Core Programming: NIO, Netty, Redis, ZooKeeper, Volume 1", vol. 1, Beijing: China Machine Press, page 3 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841298A (en) * 2022-07-06 2022-08-02 山东极视角科技有限公司 Method and device for training algorithm model, electronic equipment and storage medium
CN114841298B (en) * 2022-07-06 2022-09-27 山东极视角科技有限公司 Method and device for training algorithm model, electronic equipment and storage medium
CN115297111A (en) * 2022-07-15 2022-11-04 东风汽车集团股份有限公司 System, method and medium for vulnerability management and submission of Internet of vehicles
CN115297111B (en) * 2022-07-15 2023-10-24 东风汽车集团股份有限公司 System, method and medium for managing and submitting vulnerabilities of Internet of vehicles

Also Published As

Publication number Publication date
CN114490419B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
CN111340237B (en) Data processing and model running method, device and computer equipment
CN114490419A (en) Cross-cloud testing method and system of heterogeneous architecture and computer equipment
CN108492005B (en) Project data processing method and device, computer equipment and storage medium
CN110362492B (en) Artificial intelligence algorithm testing method, device, server, terminal and storage medium
CN110321284B (en) Test data entry method, device, computer equipment and storage medium
CN110704177B (en) Computing task processing method and device, computer equipment and storage medium
CN106970880A (en) A kind of distributed automatization method for testing software and system
CN106612204B (en) Service checking method and device
CN110798376A (en) Interface testing method and device, computer equipment and storage medium
CN110717647A (en) Decision flow construction method and device, computer equipment and storage medium
Sundas et al. An introduction of CloudSim simulation tool for modelling and scheduling
CN111984527A (en) Software performance testing method, device, equipment and medium
CN111209061A (en) Method and device for filling in user information, computer equipment and storage medium
CN111506388B (en) Container performance detection method, container management platform and computer storage medium
CN112685462A (en) Feeder line data analysis method and device, computer equipment and storage medium
CN112862455A (en) Test execution work order generation method and device, computer equipment and storage medium
CN116662132A (en) Evaluation method, virtual deployment method, computer device, and storage medium
CN111538672A (en) Test case layered test method, computer device and computer-readable storage medium
US11714687B2 (en) Dynamic preparation of a new network environment, and subsequent monitoring thereof
CN108390924A (en) Order fulfillment method and device
CN114564249A (en) Recommendation scheduling engine, recommendation scheduling method, and computer-readable storage medium
CN109543479B (en) Code scanning method, device, equipment and storage medium
CN111782688A (en) Request processing method, device and equipment based on big data analysis and storage medium
CN113032594B (en) Label image storage method, apparatus, computer device and storage medium
CN110909761A (en) Image recognition method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant