WO2023185482A1 - Testing method, storage medium and electronic device - Google Patents

Testing method, storage medium and electronic device

Info

Publication number
WO2023185482A1
Authority
WO
WIPO (PCT)
Prior art keywords
test
task
pool
test task
under test
Prior art date
2022-03-29
Application number
PCT/CN2023/081741
Other languages
English (en)
French (fr)
Inventor
杨光
蒋学鑫
仉亚男
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2022-03-29
Filing date
2023-03-15
Publication date
2023-10-05
Application filed by 中兴通讯股份有限公司
Publication of WO2023185482A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/36: Preventing errors by testing or debugging software
    • G06F 11/3668: Software testing
    • G06F 11/3672: Test management
    • G06F 11/3688: Test management for test execution, e.g. scheduling of test suites
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/36: Preventing errors by testing or debugging software
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]

Definitions

  • Embodiments of the present invention relate to the field of automated testing, and in particular to a testing method, a storage medium and an electronic device.
  • Automated testing is a technology in which machines completely replace tedious, repetitive manual operations and inspection processes. As products in various industries become more and more complex and quality requirements rise ever higher, testing becomes more and more important at every stage of development and production; automated testing is applied ever more widely, and improving its efficiency receives ever more attention.
  • At present, the automated test architecture consists mainly of two parts: the device under test, and the test execution monitoring machine linked to it, referred to below simply as the test execution machine.
  • The test execution machine can automatically send test commands to the device under test and obtain the command execution results.
  • An automated test task generally contains multiple test commands and specifies their execution order and timing; the execution results obtained are then used to judge whether the function or system meets expectations, yielding the test result.
  • In the traditional automated testing architecture, the number of test execution machines and the number of devices under test are unequal, so some test execution machines or devices under test remain idle, resulting in low execution efficiency when multiple test tasks are run.
  • Embodiments of the present invention provide a testing method, a storage medium and an electronic device, to at least solve the problem in the related art of low execution efficiency of multiple test tasks.
  • In a first aspect, embodiments of the present invention provide a testing method applied to a device pool, where the device pool includes a scheduler and at least one device under test corresponding to a target device type. The method includes:
  • obtaining a test task delivery request sent by the test server, where the test task delivery request includes the target device type required by the test task; confirming whether a target device under test satisfying the test-task execution conditions exists in the device pool, and obtaining a confirmation result; and sending feedback information to the test server according to the confirmation result.
  • In a second aspect, embodiments of the present invention provide a testing method applied to a test server, including: reading a test task from the head of a task queue; obtaining the target device type corresponding to the test task; finding the device pool corresponding to the target device type and establishing a communication link with it, where the device pool includes at least one device under test corresponding to the target device type; sending a test task delivery request to the device pool, where the request includes the target device type required by the test task; and obtaining the feedback information sent by the device pool.
  • In a third aspect, embodiments of the present invention provide a storage medium storing a computer program, where the computer program is configured, when run, to execute the testing method described in the first aspect or the testing method described in the second aspect.
  • In a fourth aspect, embodiments of the present invention provide an electronic device including a memory and a processor, where a computer program is stored in the memory and the processor is configured to run the computer program to execute the testing method described in the first aspect or the testing method described in the second aspect.
  • Figure 1 is a schematic diagram of the architecture of the test system in an embodiment of the present application.
  • Figure 2 is a schematic diagram of the architecture of the test system in an exemplary embodiment of the present application.
  • Figure 3 is a schematic flowchart of the testing method applied to the device pool in an embodiment of the present application.
  • Figure 4 is a schematic flowchart of the method for detecting whether a device under test is faulty in an exemplary embodiment of the present application.
  • Figure 5 is a schematic flowchart of the testing method applied to the test server in an embodiment of the present application.
  • Figure 6 is a schematic diagram of the basic architecture of the test system in an exemplary embodiment of the present application.
  • Figure 7 is a schematic diagram of the specific architecture of the test system in an exemplary embodiment of the present application.
  • Figure 8 is a schematic diagram of the health status of each device under test in the test system in an exemplary embodiment of the present application.
  • Figure 9 is a schematic diagram of the basic architecture of the test system in an exemplary embodiment of the present application.
  • Figure 10 is a schematic diagram of the specific architecture of the test system in an exemplary embodiment of the present application.
  • In an embodiment of the present application, a test system is provided, including a device pool 101 and a test server 102.
  • The device pool 101 includes a scheduler 1011 and at least one device under test corresponding to the target device type.
  • Figure 1 shows only the scheduler 1011 and the devices under test A and B corresponding to the target device type; device under test A bears reference numeral 1012 and device under test B bears reference numeral 1013.
  • Figure 1 is only a schematic diagram and does not limit the number of devices under test in the device pool; as needed, that number may be 1, 3 or more.
  • The scheduler is configured to obtain the test task delivery request sent by the test server, where the request includes the target device type required by the test task; to confirm whether a target device under test satisfying the test-task execution conditions exists in the device pool, obtaining a confirmation result; and to send feedback information to the test server according to that result.
  • The test server is configured to read a test task from the head of the task queue; obtain the target device type corresponding to the test task; find the device pool corresponding to the target device type and establish a communication link with it; send a test task delivery request to the device pool; and obtain the feedback information sent by the device pool.
  • In an exemplary embodiment, a test executor is provided in each device under test.
  • The scheduler is specifically configured to: when the target device under test exists in the device pool, send first feedback information to the test server, obtain the test task sent by the test server, and send the test task to the target device under test, where the test executor in that device executes it; here the first feedback information is used to instruct delivery of the test task. When the target device under test does not exist in the device pool, the scheduler sends second feedback information to the test server, where the second feedback information is used to indicate that test-task delivery failed.
  • In an exemplary embodiment, the test server includes a test task manager, a task queue and a test task issuer.
  • The test task manager is configured to send test tasks to the head of the task queue.
  • The test task issuer is configured to read a test task from the head of the task queue; obtain the target device type corresponding to the test task; find the device pool corresponding to the target device type and establish a communication link with it; send a test task delivery request to the device pool; and obtain the feedback information sent by the device pool.
  • In an exemplary embodiment, as shown in Figure 2, the test system includes device pool A, device pool B and a test server; the number of device pools shown is only illustrative.
  • Device pool A includes a scheduler, the device under test numbered A_1 and the device under test numbered A_2; the device type corresponding to A_1 and A_2 is A, and a test executor is provided in each of these devices under test.
  • Device pool B includes a scheduler, the device under test numbered B_1 and the device under test numbered B_2; the device type corresponding to B_1 and B_2 is B, and a test executor is provided in each of these devices under test.
  • Figure 2 is only a schematic diagram and does not limit the number of devices under test in the device pool.
  • the test server includes a test task manager, a task queue and a test task issuer.
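  • The pool-per-type arrangement above can be pictured as a small registry that maps each device type to its pool. The following Python sketch is illustrative only and is not taken from the patent; the names DeviceUnderTest, DevicePool, POOLS and find_pool, and the healthy/busy flags, are assumptions made for the example (the flags stand in for the health status and idle state discussed later).

```python
from dataclasses import dataclass, field


@dataclass
class DeviceUnderTest:
    """A device managed inside a pool; each one hosts its own test executor."""
    name: str
    healthy: bool = True   # maintained by the periodic self-check
    busy: bool = False     # True while a test task is executing


@dataclass
class DevicePool:
    """One pool per device type; the pool's scheduler polls `devices` in order."""
    device_type: str
    devices: list[DeviceUnderTest] = field(default_factory=list)


# One pool per device type, mirroring device pools A and B of Figure 2.
POOLS = {
    "A": DevicePool("A", [DeviceUnderTest("A_1"), DeviceUnderTest("A_2")]),
    "B": DevicePool("B", [DeviceUnderTest("B_1"), DeviceUnderTest("B_2")]),
}


def find_pool(target_device_type: str) -> DevicePool | None:
    """The test server locates a pool by device type alone; which concrete
    device runs the task is decided later by the pool's scheduler."""
    return POOLS.get(target_device_type)
```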
  • In the embodiments of the present application, a testing method is provided.
  • The following embodiments mainly introduce how testing is implemented with the test system provided in the embodiments of this application, explaining the testing method from the perspectives of the device pool and of the test server respectively.
  • The device pool includes a scheduler and at least one device under test corresponding to the target device type.
  • In an embodiment of the present application, as shown in Figure 3, the flow of the testing method applied to the device pool mainly includes:
  • Step 301: Obtain a test task delivery request sent by the test server, where the request includes the target device type required by the test task.
  • Step 302: Confirm whether a target device under test satisfying the test-task execution conditions exists in the device pool, and obtain a confirmation result.
  • In an exemplary embodiment, confirming whether the target device under test exists in the device pool and obtaining the confirmation result includes: polling the devices under test in the device pool in a preset order and, during polling, determining whether the polled device satisfies the test-task execution conditions. When the polled device satisfies the conditions, polling in the preset order stops and that device is taken as the target device under test; when it does not, the method returns to the step of polling the devices in the pool in the preset order.
  • The preset order may be an order among the devices under test that is randomly generated and fixed when the device pool is created, or an order among the devices in the pool that is set manually.
  • In an exemplary embodiment, the test-task execution conditions include that the device under test is fault-free and that it is in an idle state, not currently executing a test task.
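  • A minimal sketch of this polling step, reusing the illustrative DeviceUnderTest and DevicePool classes above (find_target_device is a hypothetical name, not from the patent):

```python
def find_target_device(pool: DevicePool) -> DeviceUnderTest | None:
    """Poll the pool's devices in the preset order and return the first one
    satisfying the execution conditions (fault-free and idle), or None."""
    for device in pool.devices:      # preset order, fixed at pool creation
        if device.healthy and not device.busy:
            return device            # stop polling: target device found
    return None                      # no qualifying device at the moment
```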
  • In an exemplary embodiment, the testing method applied to the device pool further includes: at every preset interval, the device under test executes a self-detection test task and obtains an execution result; when the execution result is that the self-detection task succeeded, the device under test is determined to be fault-free; or, when the execution result is that it failed, the device under test is determined to be faulty.
  • In an exemplary embodiment, as shown in Figure 4, the flow for detecting whether a device under test is faulty includes:
  • Step 401: Determine whether the set time has been exceeded. If it has, execute step 402; otherwise, return to step 401.
  • Here, the set time is the preset interval.
  • Step 402: Start the health check.
  • Step 403: Determine whether the check passes. If it passes, execute step 404; if it does not, execute step 405.
  • Step 404: Set the health status to healthy.
  • Step 405: Set the health status to faulty.
  • After executing step 404 or step 405, return to step 401.
  • The preset interval may be an empirical value or a value obtained from multiple trials; for example, it may be 24 hours.
  • The periodic health check allows environment faults to be identified in advance, reducing their impact on test results and progress. Moreover, the device under test performs the self-detection task itself, without intervention from the test server, which reduces the interaction between the test server and the device under test and further improves the efficiency of the overall test process. A sketch of such a periodic self-check loop follows.
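  • Steps 401 to 405 could run as a background loop along the following lines; this is an illustrative sketch, with run_self_test standing in for whatever self-detection test task the device actually executes, and the 24-hour figure taken from the example above.

```python
import threading
import time
from typing import Callable


def run_health_loop(device: DeviceUnderTest,
                    run_self_test: Callable[[DeviceUnderTest], bool],
                    interval_seconds: float = 24 * 3600) -> threading.Thread:
    """Periodic self-check: wait out the preset interval (step 401), run the
    health check (steps 402-403), then record the status (step 404 or 405)."""
    def loop() -> None:
        while True:
            time.sleep(interval_seconds)     # step 401: wait for the set time
            passed = run_self_test(device)   # steps 402-403: run health check
            device.healthy = bool(passed)    # step 404 (healthy) or 405 (fault)

    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread
```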
  • Step 303: Send feedback information to the test server according to the confirmation result.
  • In an exemplary embodiment, sending feedback information to the test server according to the confirmation result includes: when the target device under test exists in the device pool, sending first feedback information to the test server, where the first feedback information is used to instruct delivery of the test task; or, when the target device under test does not exist in the device pool, sending second feedback information to the test server, where the second feedback information is used to indicate that test-task delivery failed.
  • The device pool obtains the test task delivery request sent by the test server and then promptly sends feedback information, according to the confirmation result, to direct the test server's next action. This prevents the test server from delivering a test task directly to a device pool that contains no device under test able to execute it, which would cause the test task to fail and harm the efficiency of the entire test process.
  • In an exemplary embodiment, after the first feedback information is sent to the test server, the testing method applied to the device pool further includes: obtaining the test task sent by the test server, and sending the test task to the target device under test, which then executes it. The scheduler's side of this exchange is sketched below.
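  • Putting the confirmation and delivery steps together, the scheduler's side might look as follows; DELIVER and REJECT stand for the first and second feedback information, and the whole block is an assumption-laden illustration rather than the patent's implementation.

```python
DELIVER = "first_feedback"    # instructs the test server to deliver the task
REJECT = "second_feedback"    # tells the test server that delivery failed


def handle_delivery_request(pool: DevicePool, target_device_type: str) -> str:
    """Steps 301-303 from the pool's side: confirm whether a qualifying
    target device exists and answer with first or second feedback."""
    if pool.device_type != target_device_type:
        return REJECT                      # wrong pool for this device type
    if find_target_device(pool) is None:
        return REJECT                      # second feedback: delivery fails
    return DELIVER                         # first feedback: please deliver


def accept_task(pool: DevicePool, task: dict) -> None:
    """After first feedback: receive the task and hand it to the target
    device's test executor (marking the device busy stands in for that)."""
    device = find_target_device(pool)
    if device is None:                     # pool state may have changed
        raise RuntimeError("no device satisfies the execution conditions")
    device.busy = True                     # the on-device executor runs the task
```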
  • In an embodiment of the present application, as shown in Figure 5, the flow of the testing method applied to the test server mainly includes:
  • Step 501: Read the test task from the head of the task queue.
  • Step 502: Obtain the target device type corresponding to the test task.
  • Step 503: Find the device pool corresponding to the target device type and establish a communication link with it, where the device pool includes at least one device under test corresponding to the target device type.
  • Step 504: Send a test task delivery request to the device pool, where the request includes the target device type required by the test task.
  • Step 505: Obtain the feedback information sent by the device pool.
  • In an exemplary embodiment, after obtaining the feedback information sent by the device pool, the testing method applied to the test server further includes: when the feedback information is the first feedback information, sending the test task to the device pool, where the first feedback information is used to instruct delivery of the test task.
  • Or, when the feedback information is the second feedback information, the test task is placed at the tail of the task queue, where the second feedback information is used to indicate that test-task delivery failed; a new test task is read from the head of the task queue, taken as the current test task, and the method returns to the step of obtaining the target device type corresponding to the test task.
  • After a delivery failure, promptly returning the test task to the tail of the queue does not affect its next delivery and test, and promptly reading a new test task from the head of the queue allows multiple test tasks to be processed in parallel, improving their overall test efficiency. A sketch of this dispatch loop follows.
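  • The dispatch loop over steps 501 to 505 plus the feedback handling can be sketched as below; the task representation (a dict with a device_type key) and the function names are assumptions carried over from the earlier sketches.

```python
from collections import deque


def dispatch_loop(task_queue: deque, pools: dict[str, DevicePool]) -> None:
    """Read each task from the head of the queue, ask the matching pool's
    scheduler for delivery, and deliver on first feedback or requeue the
    task at the tail on second feedback. A real deliverer would also wait
    for devices to become idle; this sketch simply keeps requeueing."""
    while task_queue:
        task = task_queue.popleft()               # step 501: read queue head
        pool = pools.get(task["device_type"])     # steps 502-503: find pool
        if pool is None:
            task_queue.append(task)               # no pool for this type yet
            continue
        feedback = handle_delivery_request(pool, task["device_type"])  # 504-505
        if feedback == DELIVER:
            accept_task(pool, task)               # first feedback: deliver
        else:
            task_queue.append(task)               # second feedback: requeue
```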
  • In an exemplary embodiment, the test content is testing the CPU-related functions of the devices under test.
  • As shown in Figure 6, the test system includes one test server and four devices under test, linked to the server through serial ports.
  • The test server is an x86 server.
  • The four devices under test are named thunderx, upsquar, tegra and sharkl.
  • The CPUs of the devices under test thunderx and upsquar are cortex-M2 type devices, and the CPUs of the devices under test tegra and sharkl are cortex-A15 type devices.
  • As shown in Figure 7, the x86 server includes a test task manager, a task queue and a test task issuer.
  • The device pool cortex-M2 includes a scheduler and the devices under test thunderx and upsquar, and a test executor is provided in each of them.
  • The device pool cortex-A15 includes a scheduler and the devices under test tegra and sharkl, and a test executor is provided in each of them.
  • Four test tasks are created: 1) test the computing performance of cortex-M2; 2) test the static power consumption of cortex-M2; 3) test the computing performance of cortex-A15; 4) test the static power consumption of cortex-A15. After the four tasks are created in the test task manager, they enter the task queue in order.
  • The test task issuer reads task 1 from the head of the queue, parses it, identifies that it matches cortex-M2 type devices under test, and attempts to send it to the scheduler of the cortex-M2 device pool. On receiving the request, the scheduler traverses the devices under test in the pool in the pre-stored order (1. thunderx, 2. upsquar).
  • When the traversal reaches thunderx and finds that this device under test can execute the test, the task is accepted, removed from the queue, and sent to the test executor of the thunderx device, which starts the automated test.
  • After successfully delivering that task, the test task issuer reads test task 2 from the queue, identifies that it also matches the cortex-M2 type, and again sends it to the scheduler of the cortex-M2 device pool. The scheduler's traversal finds that thunderx is executing a test task and does not satisfy the conditions, while upsquar does; it therefore accepts the task and delivers it to upsquar's test executor, which starts executing it.
  • In the same way, test tasks 3 and 4 are assigned in turn to the tegra and sharkl devices, so that all four devices under test are running different test tasks at the same time.
  • After a certain time, each test executor completes its test task and outputs its own test result; a simulation of this scenario using the earlier sketches follows.
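  • Continuing the illustrative sketches above, the scenario just described could be simulated as follows; the device and pool names come from the example, everything else is hypothetical.

```python
pools = {
    "cortex-M2": DevicePool("cortex-M2", [DeviceUnderTest("thunderx"),
                                          DeviceUnderTest("upsquar")]),
    "cortex-A15": DevicePool("cortex-A15", [DeviceUnderTest("tegra"),
                                            DeviceUnderTest("sharkl")]),
}
tasks = deque([
    {"name": "cortex-M2 computing performance",  "device_type": "cortex-M2"},
    {"name": "cortex-M2 static power",           "device_type": "cortex-M2"},
    {"name": "cortex-A15 computing performance", "device_type": "cortex-A15"},
    {"name": "cortex-A15 static power",          "device_type": "cortex-A15"},
])
dispatch_loop(tasks, pools)
# Polling in each pool's preset order places task 1 on thunderx, task 2 on
# upsquar, task 3 on tegra and task 4 on sharkl: all four devices under test
# end up running different test tasks at the same time, as described above.
```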
  • As shown in Figure 8, during operation the device under test thunderx froze; after the health check of the 1-hour period, its health status was set to faulty. When a cortex-M2 type test task is then created and reaches the scheduler of the cortex-M2 device pool, polling finds that thunderx is in the fault state and cannot execute the task, so polling continues to the upsquar device, which is found to be healthy and idle and able to execute test tasks, and the task is allocated to that device for execution. Without the health check function, the scheduler would assign the task to the idle but faulty thunderx, causing the task to hang, wasting time and reducing efficiency; the test results would also be affected by the device failure and lose their reference value.
  • In an exemplary embodiment, the test content is testing the compatibility of an application on different mobile phone systems.
  • As shown in Figure 9, a main PC is connected through a data cable to an iPhone 13 running iOS 15; over the network it can also access two phones linked to a secondary PC, namely an iPhone 12 running iOS 15 and a ZTE-S30 running MyOS 11, as well as a ZTE-Z40 running MyOS 11 that is linked directly to the network.
  • As shown in Figure 10, the main PC includes a test task manager, a task queue and a test task issuer.
  • The device pool iOS15 includes a scheduler and the devices under test iPhone12 and iPhone13, and a test executor is provided in each of them.
  • The device pool MyOS11 includes a scheduler and the devices under test ZTE-S30 and ZTE-Z40, and a test executor is provided in each of them.
  • This exemplary embodiment mainly illustrates that even when the devices under test are linked in different ways, they can still be managed with the testing method provided by the embodiments of the present invention, which therefore suits a wide range of usage scenarios.
  • In summary, the test system includes a device pool and a test server, and the device pool includes a scheduler and at least one device under test corresponding to the target device type. Since a test task produces the same results when run on any of several devices under test of the same type, the task only needs to concern itself with the device type and need not know on which particular device it runs. In the embodiments of the present invention, at least one device under test of the same device type is grouped into the same device pool for management, with one device pool corresponding to one device type.
  • The test server obtains the target device type corresponding to the test task, finds the device pool corresponding to that type, establishes a communication link with the pool, and then sends a test task delivery request to it.
  • The test server does not need to deliver the test task to a specific device under test; it only needs to find the device pool corresponding to the target device type.
  • The test server sends the test task delivery request to that pool, and the scheduler in the pool then confirms whether a target device under test satisfying the test-task execution conditions exists, obtains the confirmation result, and sends feedback information to the test server according to that result.
  • Because the scheduler in the pool locates the target device under test able to execute the task, after the test server delivers a test task to the pool it can continue sending new test task delivery requests while the target device executes the earlier task.
  • The devices under test in a device pool can thus execute different test tasks at the same time; executing test tasks in parallel greatly improves the utilization of the devices under test, shortens the total execution time of multiple test tasks, and solves the problem of their low execution efficiency.
  • Moreover, the scheme can be implemented with only one test server, so it is low in cost and highly flexible in use.
  • Embodiments of the present invention also provide a storage medium storing a computer program, where the computer program is configured to execute the steps in any of the above method embodiments when run.
  • The storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing a computer program.
  • Embodiments of the present invention also provide an electronic device including a memory and a processor, where a computer program is stored in the memory and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
  • Those skilled in the art will understand that the modules or steps of the above embodiments of the present invention may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps may be executed in an order different from that shown or described here, or they may be made into individual integrated circuit modules, or multiple of the modules or steps may be made into a single integrated circuit module. Thus, embodiments of the present invention are not limited to any specific combination of hardware and software.

Abstract

Embodiments of the present invention provide a testing method, a storage medium and an electronic device, relating to the field of automated testing. The testing method is implemented jointly by a device pool and a test server, where the device pool includes a scheduler and at least one device under test corresponding to a target device type. The testing method applied to the device pool includes: obtaining a test task delivery request sent by the test server, where the test task delivery request includes the target device type required by the test task; confirming whether a target device under test satisfying the test-task execution conditions exists in the device pool, and obtaining a confirmation result; and sending feedback information to the test server according to the confirmation result. Embodiments of the present invention serve to solve the problem of low execution efficiency of multiple test tasks.

Description

Testing Method, Storage Medium and Electronic Device
Technical Field
Embodiments of the present invention relate to the field of automated testing, and in particular to a testing method, a storage medium and an electronic device.
Background
Automated testing is a technology in which machines completely replace tedious, repetitive manual operations and inspection processes. As products in various industries become more and more complex and quality requirements rise ever higher, testing becomes more and more important at every stage of development and production; automated testing is applied ever more widely, and improving its efficiency receives ever more attention.
At present, the automated test architecture consists mainly of two parts: the device under test, and the test execution monitoring machine linked to it, referred to below simply as the test execution machine. The test execution machine can automatically send test commands to the device under test and obtain the command execution results. An automated test task generally contains multiple test commands and specifies their execution order and timing; the execution results obtained are then used to judge whether the function or system meets expectations, yielding the test result.
In the traditional automated testing architecture, the number of test execution machines and the number of devices under test are unequal, so some test execution machines or devices under test remain idle, resulting in low execution efficiency when multiple test tasks are run.
Summary
Embodiments of the present invention provide a testing method, a storage medium and an electronic device, to at least solve the problem in the related art of low execution efficiency of multiple test tasks.
In a first aspect, embodiments of the present invention provide a testing method applied to a device pool, the device pool including a scheduler and at least one device under test corresponding to a target device type, the method including:
obtaining a test task delivery request sent by a test server, where the test task delivery request includes the target device type required by the test task;
confirming whether a target device under test satisfying the test-task execution conditions exists in the device pool, and obtaining a confirmation result; and
sending feedback information to the test server according to the confirmation result.
In a second aspect, embodiments of the present invention provide a testing method applied to a test server, including:
reading a test task from the head of a task queue;
obtaining the target device type corresponding to the test task;
finding the device pool corresponding to the target device type and establishing a communication link with the device pool, where the device pool includes at least one device under test corresponding to the target device type;
sending a test task delivery request to the device pool, where the test task delivery request includes the target device type required by the test task; and
obtaining feedback information sent by the device pool.
In a third aspect, embodiments of the present invention provide a storage medium storing a computer program, where the computer program is configured, when run, to execute the testing method described in the first aspect or the testing method described in the second aspect.
In a fourth aspect, embodiments of the present invention provide an electronic device including a memory and a processor, where a computer program is stored in the memory and the processor is configured to run the computer program to execute the testing method described in the first aspect or the testing method described in the second aspect.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the embodiments of the present invention and form a part of this application; the schematic embodiments of the present invention and their descriptions are used to explain the embodiments and do not unduly limit them. In the drawings:
Figure 1 is a schematic diagram of the architecture of the test system in an embodiment of the present application;
Figure 2 is a schematic diagram of the architecture of the test system in an exemplary embodiment of the present application;
Figure 3 is a schematic flowchart of the testing method applied to the device pool in an embodiment of the present application;
Figure 4 is a schematic flowchart of the method for detecting whether a device under test is faulty in an exemplary embodiment of the present application;
Figure 5 is a schematic flowchart of the testing method applied to the test server in an embodiment of the present application;
Figure 6 is a schematic diagram of the basic architecture of the test system in an exemplary embodiment of the present application;
Figure 7 is a schematic diagram of the specific architecture of the test system in an exemplary embodiment of the present application;
Figure 8 is a schematic diagram of the health status of each device under test in the test system in an exemplary embodiment of the present application;
Figure 9 is a schematic diagram of the basic architecture of the test system in an exemplary embodiment of the present application;
Figure 10 is a schematic diagram of the specific architecture of the test system in an exemplary embodiment of the present application.
Detailed Description
Embodiments of the present invention are described in detail below with reference to the drawings and in combination with embodiments. It should be noted that, where no conflict arises, the embodiments in this application and the features in the embodiments may be combined with one another.
It should be noted that the terms "first", "second" and the like in the specification and claims of the embodiments of the present invention and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence.
In an embodiment of the present application, a test system is provided. As shown in Figure 1, it includes a device pool 101 and a test server 102, where the device pool 101 includes a scheduler 1011 and at least one device under test corresponding to the target device type. Figure 1 shows only the scheduler 1011 and the devices under test A and B corresponding to the target device type; device under test A bears reference numeral 1012 and device under test B bears reference numeral 1013. Figure 1 is only a schematic diagram and does not limit the number of devices under test in the device pool; as needed, that number may be 1, 3 or more.
The scheduler is configured to obtain the test task delivery request sent by the test server, where the request includes the target device type required by the test task; to confirm whether a target device under test satisfying the test-task execution conditions exists in the device pool, obtaining a confirmation result; and to send feedback information to the test server according to the confirmation result.
The test server is configured to read a test task from the head of the task queue; obtain the target device type corresponding to the test task; find the device pool corresponding to the target device type and establish a communication link with it; send a test task delivery request to the device pool; and obtain the feedback information sent by the device pool.
In an exemplary embodiment, a test executor is provided in each device under test.
The scheduler is specifically configured to: when the target device under test exists in the device pool, send first feedback information to the test server, obtain the test task sent by the test server, and send the test task to the target device under test, where the test executor in the target device under test executes it, the first feedback information being used to instruct delivery of the test task; and, when the target device under test does not exist in the device pool, send second feedback information to the test server, the second feedback information being used to indicate that test-task delivery failed.
In an exemplary embodiment, the test server includes a test task manager, a task queue and a test task issuer.
The test task manager is configured to send test tasks to the head of the task queue.
The test task issuer is configured to read a test task from the head of the task queue; obtain the target device type corresponding to the test task; find the device pool corresponding to the target device type and establish a communication link with it; send a test task delivery request to the device pool; and obtain the feedback information sent by the device pool.
In an exemplary embodiment, as shown in Figure 2, the test system includes device pool A, device pool B and a test server; the number of device pools in Figure 2 is only illustrative. Device pool A includes a scheduler, the device under test numbered A_1 and the device under test numbered A_2, where the device type corresponding to A_1 and A_2 is A and a test executor is provided in each of these devices under test. Device pool B includes a scheduler, the device under test numbered B_1 and the device under test numbered B_2, where the device type corresponding to B_1 and B_2 is B and a test executor is provided in each of these devices under test. Figure 2 is only a schematic diagram and does not limit the number of devices under test in a device pool. The test server includes a test task manager, a task queue and a test task issuer.
In the embodiments of the present application, a testing method is provided. The following embodiments mainly introduce how testing is implemented with the test system provided in the embodiments of this application, explaining the method from the perspectives of the device pool and of the test server respectively. The device pool includes a scheduler and at least one device under test corresponding to the target device type.
In an embodiment of the present application, as shown in Figure 3, the flow of the testing method applied to the device pool mainly includes:
Step 301: Obtain a test task delivery request sent by the test server, where the test task delivery request includes the target device type required by the test task.
Step 302: Confirm whether a target device under test satisfying the test-task execution conditions exists in the device pool, and obtain a confirmation result.
In an exemplary embodiment, confirming whether the target device under test exists in the device pool and obtaining the confirmation result includes: polling the devices under test in the device pool in a preset order and, during polling, determining whether the polled device under test satisfies the test-task execution conditions; when the polled device satisfies the conditions, stopping the step of polling the devices in the pool in the preset order and taking the polled device as the target device under test; or, when the polled device does not satisfy the conditions, returning to the step of polling the devices in the pool in the preset order.
The preset order may be an order among the devices under test that is randomly generated and fixed when the device pool is created, or an order among the devices in the pool that is set manually.
In an exemplary embodiment, the test-task execution conditions include that the device under test is fault-free and that it is in an idle state, not currently executing a test task.
In an exemplary embodiment, the testing method applied to the device pool further includes: at every preset interval, the device under test executes a self-detection test task and obtains an execution result; when the execution result is that the self-detection test task succeeded, the device under test is determined to be fault-free; or, when the execution result is that it failed, the device under test is determined to be faulty.
In an exemplary embodiment, as shown in Figure 4, the flow for detecting whether a device under test is faulty includes:
Step 401: Determine whether the set time has been exceeded. If it has, execute step 402; otherwise, return to step 401.
Here, the set time is the preset interval.
Step 402: Start the health check.
Step 403: Determine whether the check passes. If it passes, execute step 404; if it does not, execute step 405.
Step 404: Set the health status to healthy.
Step 405: Set the health status to faulty.
After executing step 404 or step 405, return to step 401.
The preset interval may be an empirical value or a value obtained from multiple trials; for example, it may be 24 hours.
Through the periodic health check, environment faults are identified in advance, reducing the impact of device-under-test failures on test results and progress. Moreover, the device under test performs the self-detection test task itself, without intervention from the test server, which reduces the interaction between the test server and the device under test and further improves the efficiency of the overall test process.
Step 303: Send feedback information to the test server according to the confirmation result.
In an exemplary embodiment, sending feedback information to the test server according to the confirmation result includes: when the target device under test exists in the device pool, sending first feedback information to the test server, where the first feedback information is used to instruct delivery of the test task; or, when the target device under test does not exist in the device pool, sending second feedback information to the test server, where the second feedback information is used to indicate that test-task delivery failed.
The device pool obtains the test task delivery request sent by the test server and then promptly sends feedback information according to the confirmation result to direct the test server's next action, which prevents the test server from delivering a test task directly to a device pool that contains no device under test able to execute it, a situation that would cause the test task to fail and harm the efficiency of the entire test process.
In an exemplary embodiment, after the first feedback information is sent to the test server, the testing method applied to the device pool further includes: obtaining the test task sent by the test server, and sending the test task to the target device under test, which then executes it.
In an embodiment of the present application, as shown in Figure 5, the flow of the testing method applied to the test server mainly includes:
Step 501: Read a test task from the head of the task queue.
Step 502: Obtain the target device type corresponding to the test task.
Step 503: Find the device pool corresponding to the target device type and establish a communication link with the device pool, where the device pool includes at least one device under test corresponding to the target device type.
Step 504: Send a test task delivery request to the device pool, where the test task delivery request includes the target device type required by the test task.
Step 505: Obtain the feedback information sent by the device pool.
In an exemplary embodiment, after the feedback information sent by the device pool is obtained, the testing method applied to the test server further includes: when the feedback information is the first feedback information, sending the test task to the device pool, where the first feedback information is used to instruct delivery of the test task.
Or, when the feedback information is the second feedback information, the test task is placed at the tail of the task queue, where the second feedback information is used to indicate that test-task delivery failed; a new test task is read from the head of the task queue; and the new test task is taken as the test task, returning to the step of obtaining the target device type corresponding to the test task.
When the feedback information is the second feedback information, the test task is placed at the tail of the queue, a new test task is read from the head of the queue, and the step of obtaining the target device type is then performed on the new task. After a delivery failure, promptly returning the test task to the tail of the queue does not affect its next delivery and test, and promptly reading a new test task from the head of the queue allows multiple test tasks to be processed in parallel, improving their overall test efficiency.
In an exemplary embodiment, the test content is testing the CPU-related functions of the devices under test. As shown in Figure 6, the test system includes one test server and four devices under test, linked to the test server through serial ports. The test server is an x86 server, and the four devices under test are named thunderx, upsquar, tegra and sharkl; the CPUs of thunderx and upsquar are cortex-M2 type devices, and the CPUs of tegra and sharkl are cortex-A15 type devices. As shown in Figure 7, the x86 server includes a test task manager, a task queue and a test task issuer; the device pool cortex-M2 includes a scheduler and the devices under test thunderx and upsquar, each provided with a test executor; and the device pool cortex-A15 includes a scheduler and the devices under test tegra and sharkl, each provided with a test executor.
Four test tasks are created: 1) test the computing performance of cortex-M2; 2) test the static power consumption of cortex-M2; 3) test the computing performance of cortex-A15; 4) test the static power consumption of cortex-A15. After the four tasks are created in the test task manager, they enter the task queue in order. The test task issuer reads task 1 from the head of the queue, parses it, identifies that it matches cortex-M2 type devices under test, and attempts to send it to the scheduler of the cortex-M2 device pool. On receiving the request, the scheduler traverses the devices under test in the pool in the pre-stored order (1. thunderx, 2. upsquar); when it reaches thunderx it finds that this device can execute the test, so the task is accepted, removed from the queue, and sent to the test executor of the thunderx device, which starts the automated test. After successfully delivering that task, the test task issuer reads test task 2 from the queue, identifies that it also matches the cortex-M2 type, and again sends it to the scheduler of the cortex-M2 device pool; the scheduler's traversal finds that thunderx is executing a test task and does not satisfy the conditions, while upsquar does, so it accepts the task and delivers it to upsquar's test executor. In the same way, test tasks 3 and 4 are assigned in turn to the tegra and sharkl devices. At this point all four devices under test are running different test tasks at the same time; after a certain time each test executor completes its task and outputs its own test result.
As shown in Figure 8, during operation the device under test thunderx froze; after the health check of the 1-hour period, the health status of this device was set to faulty. A cortex-M2 type test task is then created; when it reaches the scheduler of the cortex-M2 device pool, polling finds that thunderx is in the fault state and cannot execute the task, so polling continues to the upsquar device, which is found to be healthy and idle and able to execute test tasks, and the task is allocated to that device for execution. Without the health check function, the scheduler would assign the task to the idle but faulty thunderx, causing the task to hang and wasting time, reducing efficiency; the test results would also be affected by the device failure and lose their reference value.
In an exemplary embodiment, the test content is testing the compatibility of an application on different mobile phone systems. As shown in Figure 9, a main PC is connected through a data cable to an iPhone 13 running iOS 15; over the network it can also access two phones linked to a secondary PC, namely an iPhone 12 running iOS 15 and a ZTE-S30 running MyOS 11, as well as a ZTE-Z40 running MyOS 11 that is linked directly to the network. As shown in Figure 10, the main PC includes a test task manager, a task queue and a test task issuer; the device pool iOS15 includes a scheduler and the devices under test iPhone12 and iPhone13, each provided with a test executor; and the device pool MyOS11 includes a scheduler and the devices under test ZTE-S30 and ZTE-Z40, each provided with a test executor. This exemplary embodiment mainly illustrates that even when the devices under test are linked in different ways, they can still be managed with the testing method provided by the embodiments of the present invention, which therefore suits a wide range of usage scenarios.
In summary, compared with the prior art, the technical solution provided by the embodiments of the present invention has the following advantages. The test system includes a device pool and a test server, and the device pool includes a scheduler and at least one device under test corresponding to the target device type. Since a test task produces the same results when run on any of several devices under test of the same type, the task only needs to concern itself with the device type and need not know on which particular device it runs. In the embodiments of the present invention, at least one device under test of the same device type is grouped into the same device pool for management, with one device pool corresponding to one device type; the test server obtains the target device type corresponding to the test task, finds the device pool corresponding to that type, establishes a communication link with the pool, and then sends a test task delivery request to it. The test server does not need to deliver the test task to a specific device under test; it only needs to find the device pool corresponding to the target device type and send the delivery request to that pool, after which the scheduler in the pool confirms whether a target device under test satisfying the test-task execution conditions exists, obtains the confirmation result, and sends feedback information to the test server according to that result.
In the embodiments of the present invention, the test server sends the test task delivery request to the device pool corresponding to the target device type, and the scheduler in the pool finds the target device under test able to execute the task. After the test server delivers the test task to the pool, while the target device executes it the test server can continue sending new delivery requests to the pool, so the devices under test in the pool can execute different test tasks at the same time. Executing test tasks in parallel greatly improves the utilization of the devices under test, shortens the total execution time of multiple test tasks, and solves the problem of their low execution efficiency. Moreover, the scheme can be implemented with only one test server, so it is low in cost and highly flexible in use.
Embodiments of the present invention also provide a storage medium storing a computer program, where the computer program is configured to execute the steps in any of the above method embodiments when run. In an exemplary embodiment, the storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing a computer program.
Embodiments of the present invention also provide an electronic device including a memory and a processor, where a computer program is stored in the memory and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
Obviously, those skilled in the art should understand that the modules or steps of the above embodiments of the present invention may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be executed in a different order, or they may be made into individual integrated circuit modules, or multiple of the modules or steps may be made into a single integrated circuit module. Thus, embodiments of the present invention are not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit the embodiments of the present invention. For those skilled in the art, the embodiments of the present invention may have various modifications and changes; any modification, equivalent replacement, improvement and the like made within the principles of the embodiments of the present invention shall fall within their scope of protection.

Claims (10)

  1. A testing method, applied to a device pool, the device pool including a scheduler and at least one device under test corresponding to a target device type, the method comprising:
    obtaining a test task delivery request sent by a test server, wherein the test task delivery request includes the target device type required by a test task;
    confirming whether a target device under test satisfying test-task execution conditions exists in the device pool, and obtaining a confirmation result; and
    sending feedback information to the test server according to the confirmation result.
  2. The testing method according to claim 1, wherein sending feedback information to the test server according to the confirmation result comprises:
    when the target device under test exists in the device pool, sending first feedback information to the test server, wherein the first feedback information is used to instruct delivery of the test task; or
    when the target device under test does not exist in the device pool, sending second feedback information to the test server, wherein the second feedback information is used to indicate that delivery of the test task failed.
  3. The testing method according to claim 2, wherein after the first feedback information is sent to the test server, the method further comprises:
    obtaining the test task sent by the test server; and
    sending the test task to the target device under test, the test task being executed by the target device under test.
  4. The testing method according to claim 1, wherein confirming whether a target device under test satisfying the test-task execution conditions exists in the device pool and obtaining a confirmation result comprises:
    polling the devices under test in the device pool in a preset order;
    wherein, during polling, it is determined whether the polled device under test satisfies the test-task execution conditions;
    when the polled device under test satisfies the test-task execution conditions, stopping the step of polling the devices under test in the device pool in the preset order, and taking the polled device under test as the target device under test; or
    when the polled device under test does not satisfy the test-task execution conditions, returning to the step of polling the devices under test in the device pool in the preset order.
  5. The testing method according to any one of claims 1 to 4, wherein the test-task execution conditions include that the device under test is fault-free and that the device under test is in an idle state, not currently executing a test task.
  6. The testing method according to claim 5, wherein the method further comprises:
    at every preset interval, the device under test executing a self-detection test task and obtaining an execution result;
    when the execution result is that the self-detection test task succeeded, determining that the device under test is fault-free; or
    when the execution result is that the self-detection test task failed, determining that the device under test is faulty.
  7. A testing method, applied to a test server, comprising:
    reading a test task from the head of a task queue;
    obtaining a target device type corresponding to the test task;
    finding a device pool corresponding to the target device type and establishing a communication link with the device pool, wherein the device pool includes at least one device under test corresponding to the target device type;
    sending a test task delivery request to the device pool, wherein the test task delivery request includes the target device type required by the test task; and
    obtaining feedback information sent by the device pool.
  8. The testing method according to claim 7, wherein after the feedback information sent by the device pool is obtained, the method further comprises:
    when the feedback information is first feedback information, sending the test task to the device pool, wherein the first feedback information is used to instruct delivery of the test task; or
    when the feedback information is second feedback information, placing the test task at the tail of the task queue, wherein the second feedback information is used to indicate that delivery of the test task failed; reading a new test task from the head of the task queue; and taking the new test task as the test task and returning to the step of obtaining the target device type corresponding to the test task.
  9. A storage medium storing a computer program, wherein the computer program is configured, when run, to execute the testing method according to any one of claims 1 to 6, or to execute the testing method according to any one of claims 7 to 8.
  10. An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory and the processor is configured to run the computer program to execute the testing method according to any one of claims 1 to 6, or to execute the testing method according to any one of claims 7 to 8.
PCT/CN2023/081741 2022-03-29 2023-03-15 Testing method, storage medium and electronic device WO2023185482A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210322186.7 2022-03-29
CN202210322186.7A CN116974878A (zh) 2022-03-29 Testing method, storage medium and electronic device

Publications (1)

Publication Number Publication Date
WO2023185482A1 (zh) 2023-10-05

Family

ID=88199059

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/081741 WO2023185482A1 (zh) 2022-03-29 2023-03-15 Testing method, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN116974878A (zh)
WO (1) WO2023185482A1 (zh)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090307763A1 (en) * 2008-06-05 2009-12-10 Fiberlink Communications Corporation Automated Test Management System and Method
CN105117289A (zh) * 2015-09-30 2015-12-02 北京奇虎科技有限公司 基于云测试平台的任务分配方法、装置及系统
CN105279017A (zh) * 2015-09-30 2016-01-27 北京奇虎科技有限公司 基于云测试平台的任务分配方法、装置及系统
CN105573902A (zh) * 2014-10-10 2016-05-11 阿里巴巴集团控股有限公司 一种应用程序的测试方法及系统
CN108616424A (zh) * 2018-04-26 2018-10-02 新华三技术有限公司 一种资源调度方法、计算机设备和系统
CN111190810A (zh) * 2019-08-26 2020-05-22 腾讯科技(深圳)有限公司 执行测试任务的方法、装置、服务器和存储介质
CN111488268A (zh) * 2019-01-25 2020-08-04 北京京东尚科信息技术有限公司 自动化测试的调度方法和调度装置
CN111913884A (zh) * 2020-07-30 2020-11-10 百度在线网络技术(北京)有限公司 分布式测试方法、装置、设备、系统和可读存储介质
CN112069082A (zh) * 2020-09-29 2020-12-11 智慧互通科技有限公司 一种自动化测试方法及系统
CN113051179A (zh) * 2021-04-27 2021-06-29 思享智汇(海南)科技有限责任公司 一种自动化测试方法、系统及存储介质


Also Published As

Publication number Publication date
CN116974878A (zh) 2023-10-31


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 23777857
Country of ref document: EP
Kind code of ref document: A1