CN117909226A - Algorithm testing method, equipment and readable storage medium - Google Patents
Algorithm testing method, equipment and readable storage medium
Info
- Publication number
- CN117909226A (application CN202311844343.1A)
- Authority
- CN
- China
- Prior art keywords
- test
- ftl
- simulation
- instruction
- flash memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F11/36—Preventing errors by testing or debugging software (under G—Physics; G06—Computing; G06F—Electric digital data processing; G06F11/00—Error detection; error correction; monitoring)
- G06F11/3664—Environments for testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3676—Test management for coverage analysis
- G06F11/3684—Test management for test design, e.g. generating new test cases
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
- G06F11/3692—Test management for test results analysis
- G06F11/3696—Methods or tools to render software testable
Abstract
The application discloses an algorithm testing method, algorithm testing equipment and a readable storage medium, and belongs to the technical field of hard disk simulation. The application provides a distributed hard disk simulation system comprising a plurality of nodes, where the nodes independently run a simulation host, an FTL (Flash Translation Layer) and a virtual flash memory. When the FTL receives a test instruction, it converts the test instruction into a flash memory granule operation instruction; sends the flash memory granule operation instruction to the virtual flash memory so that the virtual flash memory executes the operation corresponding to the test instruction; and collects test data and logs generated in the test process and stores them in the simulation host. The method decouples the simulation components of the hard disk simulation software: if a problem occurs at the node where one simulation component is located, the normal operation of the other simulation components is not affected, and the whole test process is not interrupted.
Description
Technical Field
The present application relates to the field of hard disk simulation technologies, and in particular, to an algorithm testing method, an algorithm testing device, and a readable storage medium.
Background
Developing control algorithms for solid state storage devices typically requires that the designed algorithms be implemented as software and integrated into the controller or firmware of a finished solid state storage device for testing and verification. That is, if the finished solid state storage device has not yet been produced, it is difficult to develop a control algorithm for it. At present, the performance and behavior of a control algorithm are usually observed by simulating a hard disk (SSD, Solid State Disk) and integrating the written control algorithm into hard disk simulation software for testing in a simulation environment.
However, unexpected causes such as network card, router or machine room failures, excessive CPU load, memory overflow, or natural disasters can crash the computer running the hard disk simulation software and lose all data from the test process. Moreover, because the computer blue-screens after the crash, on-site observation and debugging are impossible. This situation can have a serious impact on the testing and development of algorithms.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present application and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The application mainly aims to provide an algorithm testing method, algorithm testing equipment and a readable storage medium, and aims to solve the technical problem of test data loss caused by a crash of the computer running the hard disk simulation software.
In order to achieve the above object, the present application provides an algorithm testing method applied to a distributed hard disk simulation platform, wherein the distributed hard disk simulation platform includes a plurality of nodes, the nodes are used for independently running a simulation host, an FTL and a virtual flash memory, and the algorithm testing method includes the following steps:
When the FTL receives a test instruction, converting the test instruction into a flash memory granule operation instruction;
Sending the flash memory granule operation instruction to the virtual flash memory so that the virtual flash memory executes the operation corresponding to the test instruction;
and collecting test data and logs generated in the test process, and storing the test data and logs into the simulation host.
Optionally, the distributed hard disk simulation platform further includes a node, where the node is configured to independently operate the virtual micro control unit, and before the step of converting the test instruction into the flash memory granule operation instruction when the FTL receives the test instruction, the method further includes:
when the virtual micro control unit receives a command stream, analyzing the command stream;
generating a test instruction according to the analysis result;
And sending the test instruction to the FTL.
Optionally, before the step of converting the test instruction into the flash granule operation instruction when the FTL receives the test instruction, the method further includes:
acquiring preset parameters and test tasks of the virtual flash memory, and formulating test cases according to the preset parameters and the test tasks;
Writing a test script corresponding to the test case, wherein the test script is used for generating a command stream, and the command stream includes read-write data, erase block and simulated bad block commands;
and running the test script on the simulation host, and sending the command stream to a destination node, wherein the destination node comprises the virtual micro control unit or the FTL.
Optionally, before the step of obtaining the preset parameters and the test tasks of the virtual flash memory and preparing the test case according to the preset parameters and the test tasks, the method further includes:
Acquiring an application scene of the FTL, wherein the application scene comprises scene parameters and/or scene names, and the scene parameters comprise at least one of data transmission quantity, continuous writing duration, data updating frequency and continuous reading and writing duration;
And acquiring a test task mapping table, and determining a test task according to the test task mapping table and the application scene.
Optionally, the step of sending the flash granule operation instruction to the virtual flash to enable the virtual flash to execute the operation corresponding to the test instruction includes:
when the application scene comprises the scene name, acquiring a scene name-parameter mapping table;
And determining scene parameters corresponding to the scene names according to the scene names and the scene name-parameter mapping table.
Optionally, the step of sending the flash granule operation instruction to the virtual flash to enable the virtual flash to execute the operation corresponding to the test instruction includes:
After the operation is monitored to be executed, test data and/or logs generated in the test process are obtained;
Determining performance parameters of the FTL according to the test data and/or the log;
and judging whether the test result of the FTL passes or not by detecting whether the performance parameter accords with an expected result or not.
Optionally, the algorithm testing method further includes:
Detecting a node fault except the FTL, and determining simulation hardware associated with the fault node;
Determining a target node through a load balancing mechanism, wherein the target node is the node with the lightest load;
The emulation hardware is run on the target node to avoid interrupting the test procedure of the FTL.
Optionally, the algorithm testing method further includes:
monitoring operation indexes of the simulation host, wherein the operation indexes comprise at least one of flow increment, performance parameters and load states;
And when the operation index is monitored to exceed a preset threshold value, sending out a capacity expansion alarm.
In addition, to achieve the above object, the present application also provides an algorithm testing apparatus, the apparatus comprising: the system comprises a memory, a processor and an algorithm test program stored on the memory and capable of running on the processor, wherein the algorithm test program is configured to realize the steps of the algorithm test method.
In addition, in order to achieve the above object, the present application also provides a readable storage medium having stored thereon an algorithm test program which, when executed by a processor, implements the steps of the algorithm test method described above.
In order to solve the technical problem of test data loss caused by a crash of the computer running the hard disk simulation software, the application provides a distributed hard disk simulation system comprising a plurality of nodes, where the nodes independently run a simulation host, an FTL (Flash Translation Layer) and a virtual flash memory. When the FTL receives a test instruction, it converts the test instruction into a flash memory granule operation instruction; sends the flash memory granule operation instruction to the virtual flash memory so that the virtual flash memory executes the operation corresponding to the test instruction; and collects test data and logs generated in the test process and stores them in the simulation host. The method decouples the simulation components of the hard disk simulation software: if a problem occurs at the PC (personal computer) end of one simulation component, the normal operation of the other simulation components is not affected, and the whole test process is not interrupted.
Drawings
FIG. 1 is a hardware logic structure diagram of a solid state disk in the prior art;
FIG. 2 is a flow chart of a first embodiment of the algorithm testing method of the present application;
FIG. 3 is a flow chart of a second embodiment of the algorithm testing method of the present application;
FIG. 4 is a flow chart of a third embodiment of the algorithm testing method of the present application;
fig. 5 is a schematic structural diagram of an algorithm testing device of a hardware running environment according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Developing control algorithms for solid state storage devices typically requires that the designed algorithms be implemented as software and integrated into the controller or firmware of a finished solid state storage device for testing and verification. That is, if the finished solid state storage device has not yet been produced, it is difficult to develop a control algorithm for it.
Taking a common solid state disk (SSD, Solid State Disk) as an example, referring to fig. 1, fig. 1 is a hardware logic structure diagram of a solid state disk in the prior art. A solid state disk is typically composed of a host interface logic unit (Host Interface Logic), RAM (cache), a main control chip (SSD controller), and multiple flash memory chips. The host interface shields the storage system from the details of operating the flash memory chips inside the solid state disk and helps it read and write the solid state disk seamlessly. The SSD firmware runs on an embedded processor in the main control chip and manages the host-visible storage address space, the flash memory physical space, garbage collection, wear leveling and the like; the flash memory chips are connected in parallel cascade on a bus, and this organization provides parallel access. The RAM inside the solid state disk plays a buffering and accelerating role (some SSDs do not have this RAM). At present, the performance and behavior of a control algorithm are usually observed by simulating a hard disk, namely constructing each hardware structure of the hard disk in software, so that a written control algorithm can be integrated into the hard disk simulation software and tested in a simulation environment.
However, unexpected causes such as network card, router or machine room failures, excessive CPU load, memory overflow, or natural disasters can crash the computer running the hard disk simulation software and lose all data from the test process. Moreover, because the computer blue-screens after the crash, on-site observation and debugging are impossible. This situation can have a serious impact on the testing and development of algorithms.
In order to solve the above problems, the present application provides a distributed hard disk simulation system comprising a plurality of nodes, where the nodes independently run a simulation host, an FTL and a virtual flash memory. When the FTL receives a test instruction, it converts the test instruction into a flash memory granule operation instruction; sends the flash memory granule operation instruction to the virtual flash memory so that the virtual flash memory executes the operation corresponding to the test instruction; and collects test data and logs generated in the test process and stores them in the simulation host. The method decouples the simulation components of the hard disk simulation software: if a problem occurs at the PC (personal computer) end of one simulation component, the normal operation of the other simulation components is not affected, and the whole test process is not interrupted.
In order that the above-described aspects may be better understood, exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
An embodiment of the present application provides an algorithm testing method, referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the algorithm testing method of the present application.
In this embodiment, the algorithm testing method includes:
Step S10: When the FTL receives a test instruction, converting the test instruction into a flash memory granule operation instruction;
The application is applied to a distributed hard disk simulation system comprising a plurality of nodes, where each node may be in a different physical location and establish connections through a communication protocol. When the FTL receives a test instruction including information such as a logical address, data content and the position of an erase block, it determines the address of the virtual flash memory to be operated on according to a preset mapping rule, generates a flash memory granule operation instruction from the test instruction and that address, and updates the logical address-to-virtual flash address mapping table. Depending on the specific content of the test instruction, the address to be operated on may be a write address, a read address, an erase address and the like.
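By way of illustration only, the translation step described above can be pictured with the following minimal Python sketch. The class names, flash geometry constant and append-only allocation policy are hypothetical illustrations, not the patent's actual implementation.

```python
# Minimal sketch of the FTL translation step described above.
# All names (TestInstruction, FlashOp, FTL) and the geometry are assumptions.
from dataclasses import dataclass

PAGES_PER_BLOCK = 64  # assumed virtual-flash geometry

@dataclass
class TestInstruction:
    op: str            # "write", "read" or "erase"
    logical_addr: int
    data: bytes = b""

@dataclass
class FlashOp:
    op: str
    block: int         # physical (virtual-flash) block
    page: int          # page within the block
    data: bytes = b""

class FTL:
    def __init__(self):
        self.l2p = {}        # logical address -> (block, page) mapping table
        self.next_free = 0   # naive append-only allocator

    def translate(self, ins: TestInstruction) -> FlashOp:
        if ins.op == "write":
            # Allocate the next free physical page and update the mapping table.
            block, page = divmod(self.next_free, PAGES_PER_BLOCK)
            self.next_free += 1
            self.l2p[ins.logical_addr] = (block, page)
            return FlashOp("program", block, page, ins.data)
        block, page = self.l2p[ins.logical_addr]
        if ins.op == "erase":
            return FlashOp("erase", block, 0)
        return FlashOp("read", block, page)

ftl = FTL()
print(ftl.translate(TestInstruction("write", 0x100, b"hello")))
print(ftl.translate(TestInstruction("read", 0x100)))
```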
As an alternative node arrangement, the distributed hard disk emulation system includes three nodes for independently running an emulation host (sim-host), an FTL (Flash Translation Layer) and a virtual flash memory (V-flash), respectively. The emulation host is used for running test scripts and issuing command streams. Specifically, the emulation host includes a stimulus generator, a log and debug recording module, a data management module, a visual interface and the like. The stimulus generator comprises a test script engine and a virtual host driver, and is used for parsing the script flow and converting it into actual command and data streams. The FTL node includes the FTL, a virtual microcontroller unit (V-MCU) and a log and debug recording module. The virtual microcontroller unit (V-MCU) retrieves test instructions from the command stream so that the FTL can translate the test instructions into flash memory granule operation instructions.
As another alternative node arrangement, the distributed hard disk emulation system includes four nodes for independently running an emulation host (sim-host), a virtual microcontroller unit (V-MCU), an FTL (Flash Translation Layer) and a virtual flash memory (V-flash), respectively. The virtual microcontroller unit simulates the functions of the processor in the hard disk's main control chip, including parsing the command stream of the emulation host to determine the operations to be executed, such as reading data, writing data and erasing blocks; generating a test instruction from the parsing result, the test instruction including information such as a logical address, data content and the position of an erase block; and sending the test instruction to the FTL. The operation of the other nodes is identical to that of the previous embodiment and is not repeated here.
As an alternative implementation for establishing connections between nodes, if the distributed hard disk simulation system has a central controller, the central controller acts as the control node of the system and is responsible for managing communication and data transmission between the nodes. When another node needs to establish a connection, it sends a connection request packet to the central controller, and after receiving the request the central controller replies with a confirmation message indicating that the request has been received. The requesting node waits until it receives this confirmation. Once the central controller confirms success, a connection is established between that node and the central controller. During the connection, the nodes may exchange information and cooperate with each other.
As another alternative implementation for establishing connections between nodes, if the distributed hard disk simulation system has no central controller, the nodes are mutually independent and there is no fixed control node. Communication between nodes therefore needs to be performed by broadcast or multicast. Specifically, each node sends a broadcast message to the other nodes in the system containing information such as its own ID and version number. After receiving the broadcast message, the other nodes parse it and check its validity. If the message is valid, they reply with an acknowledgement indicating that the message has been received. The sender node waits until it receives such an acknowledgement. Once a receiver node has received and acknowledged the message, a connection is established between the two nodes, and they may exchange information and cooperate. If a node loses connection, the session terminates automatically and the node attempts to reestablish the connection.
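The broadcast handshake described above might be pictured, purely as an in-process toy model without real networking, as follows; the class and method names are hypothetical.

```python
# Toy simulation of the broadcast handshake; a real deployment would use
# sockets or an RPC framework. All names here are illustrative assumptions.
class Node:
    def __init__(self, node_id, version):
        self.node_id, self.version = node_id, version
        self.peers = set()

    def broadcast(self, nodes):
        # Send an announce message carrying our ID and version to every other node.
        for peer in nodes:
            if peer is not self and peer.on_announce(self.node_id, self.version):
                self.peers.add(peer.node_id)   # acknowledgement received

    def on_announce(self, sender_id, version):
        # Validate the message; acknowledge only if it is well formed.
        if not sender_id or not version:
            return False
        self.peers.add(sender_id)
        return True

nodes = [Node("sim-host", "1.0"), Node("ftl", "1.0"), Node("v-flash", "1.0")]
for n in nodes:
    n.broadcast(nodes)
print({n.node_id: sorted(n.peers) for n in nodes})
```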
Step S20: Sending the flash memory granule operation instruction to the virtual flash memory so that the virtual flash memory executes the operation corresponding to the test instruction;
In the distributed hard disk simulation system, the FTL and the virtual flash memory run independently on different nodes; FTL algorithm testing through the cooperation of these nodes is usually coordinated by a centralized scheduler or a distributed algorithm.
As an alternative to instruction transmission, instructions are distributed or tasks are distributed by a central controller. The central controller refers to an independent node or service, and is responsible for the decomposition and distribution of tasks in the whole distributed system. All nodes register their own node information with the central controller.
Illustratively, the FTL sends the flash memory granule operation instruction to the central controller, which forwards it to the virtual flash memory.
As an alternative embodiment of instruction transmission, the task decomposition and distribution process is completed through cooperative computing. Such distributed algorithms are typically based on a distributed computing framework or protocol, such as MapReduce, Spark or MPI.
Illustratively, the node where the FTL is located sends the flash memory granule operation instruction to the virtual flash memory.
As an alternative embodiment of the transmission of instructions, the decomposition and distribution of tasks is accomplished by a combination of a central controller and distributed algorithms. The central controller is responsible for task allocation and node state management of the whole system, and the distributed algorithm is responsible for specific task calculation and node cooperation. This hybrid approach allows the advantages of both centralized schedulers and distributed algorithms to be taken into account, while avoiding their drawbacks.
In addition, because the virtual flash memory is simulated in the simulation environment, the virtual flash memory has no direct corresponding relation with the physical address of the actual flash memory. Therefore, the virtual flash memory needs to maintain the mapping table itself and update the mapping table when executing the flash granule operation instruction. Synchronization is needed between the mapping table of the virtual flash memory and the mapping table of the FTL to ensure that the mapping relationship between the two is consistent.
Step S30: and collecting test data and logs generated in the test process, and storing the test data and logs into the simulation host.
When performing the test, the test data and logs generated during the test are collected, and the FTL's performance, stability and correctness are analyzed and evaluated. In this embodiment, if the FTL becomes abnormal due to an illegal operation, such as accessing an out-of-range address, or a program problem such as a variable overflow, the distributed hard disk simulation system of the present application has the advantage that a normal node, such as the node where the simulation host is located, or a log and exception recording module running on a separate node, can still record the abnormality, which facilitates after-the-fact analysis.
As an alternative to storing test data and logs generated during testing, a monitoring program is run on the node where the emulated host resides, to collect and analyze test data and logs from the virtual flash and FTL.
As another alternative implementation for storing test data and logs generated during testing, after the virtual flash memory executes the flash memory granule operation instruction, the execution result is fed back to the FTL, and the FTL sends the instruction result and the log data of its own operation to the node where the simulation host is located.
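A minimal sketch of such log forwarding, assuming a hypothetical record format and store interface, might look as follows.

```python
# Sketch of per-node log forwarding to the simulation host (names are assumptions).
import json, time

class SimHostStore:
    """Stands in for the storage on the node running the simulation host."""
    def __init__(self):
        self.records = []

    def receive(self, record: dict):
        self.records.append(record)

def report(store: SimHostStore, node: str, op_result: dict):
    # Attach node identity and a timestamp so post-mortem analysis can
    # reconstruct the test timeline even if the reporting node later crashes.
    store.receive({"node": node, "ts": time.time(), **op_result})

store = SimHostStore()
report(store, "ftl", {"op": "program", "latency_us": 310, "status": "ok"})
report(store, "v-flash", {"op": "erase", "latency_us": 2100, "status": "ok"})
print(json.dumps(store.records, indent=2))
```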
Optionally, for the distributed hard disk simulation system with the virtual micro control unit, the FTL may also send the instruction result and log data of its own operation to the virtual micro control unit, and the virtual micro control unit forwards the data to a node where the simulation host is located. The method has the advantages that the management and the monitoring of the distributed system can be better realized, and the expandability and the maintainability of the system are improved. Meanwhile, the mode can realize data sharing and communication among different nodes, and improves the operation efficiency and performance of the system.
In addition, because each node of the distributed hard disk simulation system operates independently, each node can perform different simulation experiments at the same time. The experimental data generated by the method can be stored in the node where the simulation host is located, and the experimental data can comprise error data generated in the FTL algorithm test process, experimental data generated by the virtual flash memory node aging test and the like.
Due to the large amount of data storage carried by the emulated host node, a situation of insufficient capacity is highly likely to occur. The application monitors the operation index of the simulation host, wherein the operation index comprises at least one of flow increment, performance parameter and load state; and when the operation index is monitored to exceed a preset threshold value, sending out a capacity expansion alarm.
For example, if the performance of the simulation host begins to decrease, for instance when the response time is monitored to exceed a preset time or the throughput drop exceeds a preset amount, this indicates that the hardware resources of the node where the simulation host is located cannot meet the requirement, and a capacity expansion alarm is issued.
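As a sketch of this threshold check, with made-up values standing in for the preset time and preset throughput drop:

```python
# Threshold check for the capacity-expansion alarm (both limits are assumptions).
RESPONSE_TIME_LIMIT_MS = 50      # assumed preset time
THROUGHPUT_DROP_LIMIT = 0.20     # assumed preset drop: 20%

def needs_expansion(response_time_ms, baseline_tput, current_tput):
    drop = (baseline_tput - current_tput) / baseline_tput
    return response_time_ms > RESPONSE_TIME_LIMIT_MS or drop > THROUGHPUT_DROP_LIMIT

if needs_expansion(response_time_ms=72, baseline_tput=500, current_tput=430):
    print("capacity expansion alarm: sim-host node resources insufficient")
```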
Since capacity expansion requires initializing the hard disk and restarting the computer, expanding a conventional hard disk simulation system is troublesome. For the distributed hard disk simulation system of the present application, however, each node can be expanded independently without affecting the testing process of the other nodes. In this way, restarting and reinitializing the whole system can be avoided, saving time and cost.
After the test is monitored to be complete, the node where the FTL is located or the central controller acquires the test data and/or logs generated in the test process; the data and logs may include information such as the delay of read-write operations, the data transmission rate, the time consumed by erasing blocks, and the handling of simulated bad blocks. By analyzing the test data and/or logs, the performance parameters of the FTL may be determined, including read-write latency, throughput, erase performance, bad block handling performance and the like. For example, it is detected whether the FTL reached performance metrics such as the expected response time and throughput when executing the test instructions. Whether the test result of the FTL passes is judged according to the performance parameters recorded in the test data and/or logs: if the performance parameters meet the expected result, the test passes; otherwise, it fails. The expected result may be a performance index formulated according to the application scenario and test task, or a result estimated from known hardware specifications and performance requirements.
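A minimal sketch of this pass/fail judgement, with hypothetical expected-result limits, might look as follows.

```python
# Sketch of pass/fail judgement against expected results (all limits are assumptions).
EXPECTED = {"read_latency_us": 100, "write_latency_us": 500, "throughput_mb_s": 400}

def evaluate(measured: dict) -> bool:
    # Latencies must not exceed, and throughput must not fall below, the expectation.
    return (measured["read_latency_us"] <= EXPECTED["read_latency_us"]
            and measured["write_latency_us"] <= EXPECTED["write_latency_us"]
            and measured["throughput_mb_s"] >= EXPECTED["throughput_mb_s"])

result = {"read_latency_us": 85, "write_latency_us": 460, "throughput_mb_s": 415}
print("FTL test result:", "pass" if evaluate(result) else "fail")
```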
In this embodiment, a distributed hard disk simulation system is provided, including a plurality of nodes, where the nodes are configured to independently run a simulation host, an FTL and a virtual flash memory. When the FTL receives a test instruction, it converts the test instruction into a flash memory granule operation instruction; sends the flash memory granule operation instruction to the virtual flash memory so that the virtual flash memory executes the operation corresponding to the test instruction; and collects the test data and logs generated in the test process and stores them in the simulation host. The method decouples the simulation components of the hard disk simulation software: if a problem occurs at the PC (personal computer) end of one simulation component, the normal operation of the other simulation components is not affected, and the whole test process is not interrupted.
Further, referring to fig. 3, fig. 3 is a flow chart of a second embodiment of the algorithm testing method according to the present application. In the second embodiment, before the step S10, the method further includes:
Step S00: acquiring preset parameters and test tasks of the virtual flash memory, and formulating test cases according to the preset parameters and the test tasks;
Different types of solid state storage devices (SSDs) are constrained by the manufacturing process of their flash memory granules and by their application scenarios, and may place different expectations on the FTL algorithm used.
For example, if the solid-state storage device of the present application is an SLC (single-level cell) SSD, the SLC flash memory stores one data bit per cell and therefore has a faster read-write speed and a longer lifetime. For SLC flash, the FTL algorithm is expected to be simpler and more efficient because each memory cell stores less data.
For example, if the solid state storage device of the present application is an MLC (multi-level cell) SSD, each cell of the MLC flash memory stores multiple bits, giving a higher storage density but a relatively lower read-write speed and lifetime than SLC flash. The FTL algorithm is expected to manage the more complex multi-bit storage and erase operations.
For example, if the solid state storage device of the present application is a TLC (triple-level cell) SSD, the TLC flash memory has a higher storage density, but its read-write speed and lifetime are lower than MLC. The FTL algorithm is expected to manage the more complex multi-bit storage and erase operations, as well as more error correction and lifetime management.
For example, if the solid-state storage device of the present application is a QLC (quad-level cell) SSD, the QLC flash memory has an even higher storage density, but relatively low read-write speed and lifetime. Because of the higher storage density, the FTL algorithm is expected to manage more bits with more complexity.
In addition to determining the flash memory type according to preset parameters of the virtual flash memory, a test task needs to be acquired.
As an alternative implementation manner for obtaining the test task, obtaining a requirement document, wherein the mapping relation between the virtual flash memory and the test task is preset in the requirement document. And determining the test task of the virtual flash memory according to the requirement document.
As another optional implementation manner of obtaining the test task, obtaining an application scene of the FTL, where the application scene includes a scene parameter and/or a scene name, where the scene parameter includes at least one of a data transmission amount, a duration of writing, a data update frequency, a cold data duty ratio, a cold-hot data update ratio, and a duration of continuous reading and writing; and acquiring a test task mapping table, and determining a test task according to the test task mapping table and the application scene.
Illustratively, the length of the data written in the test task is determined based on the data transmission amount the developer expects the FTL to handle. For example, when the data transmission amount is smaller than a first preset transmission amount, the data write length is set to a first preset value (4K); when the data transmission amount is greater than or equal to the first preset transmission amount and smaller than a second preset transmission amount, the data write length is set between the first preset value (4K) and a second preset value (64K); when the data transmission amount is greater than or equal to the second preset transmission amount, the data write length is set to the second preset value (64K). The first preset transmission amount is smaller than the second preset transmission amount, and the first preset value is smaller than the second preset value.
Illustratively, the ratio of sequential to random writes in the test task is determined according to the data update frequency the developer expects the FTL to handle. For example, when the data update frequency is less than a first preset frequency, the sequential-write proportion is set above a first preset percentage (sixty percent); when the data update frequency is greater than or equal to the first preset frequency and smaller than a second preset frequency, the sequential-write proportion is set between the first preset percentage (sixty percent) and a second preset percentage (forty percent); when the data update frequency is greater than or equal to the second preset frequency, the sequential-write proportion is set to at most the second preset percentage (forty percent).
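The two threshold rules above can be sketched as follows; the concrete transmission amounts and frequencies are assumptions chosen only to make the example runnable.

```python
# Sketch of deriving test-case parameters from scene parameters. The 4K/64K
# lengths and 60%/40% ratios follow the examples above; the thresholds are made up.
FIRST_XFER, SECOND_XFER = 1 << 20, 1 << 30   # assumed preset transmission amounts (bytes)
FIRST_FREQ, SECOND_FREQ = 10, 1000           # assumed preset update frequencies (updates/s)

def write_length(data_transmission_bytes: int) -> tuple:
    if data_transmission_bytes < FIRST_XFER:
        return (4 << 10, 4 << 10)            # fixed 4K writes
    if data_transmission_bytes < SECOND_XFER:
        return (4 << 10, 64 << 10)           # between 4K and 64K
    return (64 << 10, 64 << 10)              # fixed 64K writes

def sequential_ratio(update_freq: float) -> float:
    if update_freq < FIRST_FREQ:
        return 0.60                          # at least sixty percent sequential
    if update_freq < SECOND_FREQ:
        return 0.50                          # between forty and sixty percent
    return 0.40                              # at most forty percent sequential

print(write_length(512 << 20), sequential_ratio(200))
```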
When the application scenario includes the scenario name, as an optional implementation manner for determining the test task, the test task mapping table includes a scenario parameter-test task mapping table and a scenario name-test task mapping table, and the test task of the scenario name is determined according to the scenario name-test task mapping table.
Illustratively, when the scene name is an automobile log recording application scene, the test tasks determined according to the scene name-test task mapping table include a random write performance test task, a power-failure fault tolerance test task, a concurrent read-write test task and a continuous write test task. The random write performance test task simulates random writes of automobile sensor data, such as real-time recording of acceleration, vehicle speed and steering angle. The FTL's performance in processing real-time data recording is evaluated by random write tests under different loads. The power-failure fault tolerance test task simulates a power failure of the automobile and tests the data consistency and recovery capability of the FTL after power loss; this may include measuring the data recovery speed and data integrity after power down. The concurrent read-write test task simulates multiple sensors recording and reading data simultaneously, to test the FTL's performance under concurrent read-write operations. The continuous write test task simulates continuously writing data over a long period, such as the data recording during long-distance driving; in this way the stability and performance of the FTL under sustained high-load writing can be tested.
Illustratively, when the scene name is a media player application scene, the test tasks determined according to the scene name-test task mapping table include a continuous read performance test task, a random read performance test task, a cache performance test task and a power-failure fault tolerance test task. The continuous read performance test task simulates continuously reading video data from the flash memory for playback, testing the FTL's performance when processing continuous read requests, including read speed and stability. The random read performance test task simulates the player's read behavior under random access to video files, such as when the user fast-forwards, rewinds or skips; in this way the FTL's performance and response speed in handling random read requests can be tested. The cache performance test task simulates the process of decoding and playing video buffered in memory, testing the FTL's performance under cache operations. The power-failure fault tolerance test task simulates power failure or other abnormal conditions during playback, testing the FTL's data consistency and recovery capability under these conditions.
As another alternative implementation of determining the test tasks, the test task mapping table includes a scenario parameter-test task mapping table. In order to determine a test task corresponding to a scene name, a scene name-parameter mapping table is required to be acquired first; and determining scene parameters corresponding to the scene names according to the scene names and the scene name-parameter mapping table.
Illustratively, when the scene name includes an automobile log recording application scene, it is determined according to the scene name-parameter mapping table that the data transmission amount is smaller than a first threshold, the continuous writing duration is greater than a second threshold, and the data update frequency is greater than the first threshold and smaller than the second threshold. The first and second thresholds in this example are abstractions, intended to show that the characteristics of the scene parameters can be determined from the scene name.
Further, according to the obtained preset parameters and the obtained test tasks, the embodiment formulates a test case, wherein the test case comprises a case title, a precondition, a test step and an expected result.
If there is more than one test task, the test cases are formulated according to the relationships among the test tasks. For example, if the test tasks are independent of each other, they may be combined into one test case to improve the efficiency and accuracy of the test. If the test tasks have dependency relationships or need to be executed simultaneously, the different test tasks may be tested separately, and the test results and feedback information recorded in the test cases.
Step S01: Writing a test script corresponding to the test case, wherein the test script is used for generating a command stream, and the command stream includes read-write data, erase block and simulated bad block commands;
Step S02: Running the test script on the simulation host, and sending the command stream to a destination node, wherein the destination node includes the virtual micro control unit or the FTL.
In automated testing, a test script is a series of test code and test logic written by a tester according to test cases. This embodiment provides implementations for automatically generating test scripts from test cases, including rule-driven testing (Rule-Driven Testing), pattern-driven testing (Pattern-Driven Testing), knowledge-driven testing (Knowledge-Driven Testing), and the like. Rule-driven testing is a rule-based testing method that generates test scripts by defining test rules and conditions. Pattern-driven testing is a testing method based on software design patterns that generates test scripts by defining test patterns and scenarios. Knowledge-driven testing is a testing method based on domain knowledge that generates test scripts by defining domain knowledge and test scenarios.
And running the test script on the simulation host, and sending the command stream to a destination node, wherein the destination node comprises the virtual micro control unit or the FTL.
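A test script of the kind described above might, as a rough sketch, generate and issue a command stream as follows; the command dictionary format and the send() destination are assumptions, not the platform's actual protocol.

```python
# Sketch of a test script the simulation host might run; the command-stream
# format and the transport behind send() are illustrative assumptions.
import random

def command_stream(seed=0):
    rng = random.Random(seed)
    for _ in range(4):
        lba = rng.randrange(0, 1 << 16)
        yield {"cmd": "write", "lba": lba, "len": 4096}
        yield {"cmd": "read", "lba": lba, "len": 4096}
    yield {"cmd": "erase", "block": rng.randrange(0, 256)}
    yield {"cmd": "mark_bad", "block": rng.randrange(0, 256)}  # simulate a bad block

def send(dest: str, cmd: dict):
    # In the real platform this would go over the network to the V-MCU or FTL node.
    print(f"-> {dest}: {cmd}")

for cmd in command_stream():
    send("v-mcu", cmd)
```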
In this embodiment, the acquisition of the application scenario and the test task mapping table may help to formulate a test task, and ensure that the test covers various situations in the actual application scenario, so as to more comprehensively evaluate the performance and stability of the FTL. The test case is prepared and the test script is written, so that the test can cover various functions and performance indexes of the virtual flash memory according to preset parameters and test tasks, and the performance of the FTL is evaluated more accurately. The test script is automatically generated according to the test cases, so that the test efficiency and the test coverage rate can be improved, and the test time and the test cost can be reduced.
Further, referring to fig. 4, fig. 4 is a flow chart of a third embodiment of the algorithm testing method according to the present application. In the third embodiment, the algorithm testing method further includes:
Step S40: Detecting a node fault other than the FTL, and determining the simulation hardware associated with the faulty node;
In a distributed system, if a node crashes, the other nodes cannot immediately know that it has crashed; whether nodes are still running normally can be confirmed in time through a node crash monitoring mechanism.
As an alternative implementation of crash monitoring, if the crash monitoring mechanism is a heartbeat mechanism (Heartbeat Mechanism), communication between nodes in the distributed system may be maintained by sending heartbeat signals. When a heartbeat signal of a node has not been responded to for a long time, the node may be considered to have crashed or failed.
As another alternative implementation of crash monitoring, if the crash monitoring mechanism is a health check mechanism (Health Check Mechanism), nodes in the distributed system may monitor the status of other nodes by periodically sending health check requests. When a node fails to respond to a health check request, the node may be considered to have crashed or failed.
As another alternative implementation manner of crash monitoring, if the crash monitoring mechanism is a monitoring system (Monitoring System), the monitoring system may be deployed in the distributed system to monitor the operation states of the nodes in real time.
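As an illustrative sketch of the heartbeat variant, assuming a made-up timeout value:

```python
# Sketch of a heartbeat-style crash monitor (the timeout value is an assumption).
import time

HEARTBEAT_TIMEOUT_S = 3.0

class HeartbeatMonitor:
    def __init__(self):
        self.last_seen = {}

    def beat(self, node_id):
        # Called whenever a heartbeat signal arrives from a node.
        self.last_seen[node_id] = time.monotonic()

    def failed_nodes(self):
        # Nodes whose heartbeat has gone silent past the timeout are
        # considered to have crashed or failed.
        now = time.monotonic()
        return [n for n, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT_S]

monitor = HeartbeatMonitor()
monitor.beat("sim-host")
monitor.beat("v-flash")
print(monitor.failed_nodes())  # empty now; fills in once a heartbeat stops
```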
Step S50: determining a target node through a load balancing mechanism, wherein the target node is the node with the lightest load;
Step S60: the emulation hardware is run on the target node to avoid interrupting the test procedure of the FTL.
When a node crashes, the status and execution progress of its unexecuted tasks need to be determined so that the tasks can be transferred to other available nodes to continue execution using a task distribution mechanism (Task Distribution Mechanism); meanwhile, the abnormal node is shut down for maintenance.
As an alternative implementation of a task transfer mechanism, a load balancing mechanism (Load Balancing Mechanism) is used for determining a target node, wherein the target node is the node with the least load, so that simulation tasks which should be born by a crashed node are distributed to the target node to continue to be executed.
As another alternative embodiment of the task transfer mechanism, the task states and execution progress of the crashed node are automatically detected using an automation tool or script.
As another alternative implementation of the task transfer mechanism, a task tracking mechanism is used to record the execution status and progress of the simulation tasks of each node in order to determine the incomplete simulation tasks after a node crashes.
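A minimal sketch of this migration flow, with hypothetical load figures and task records, might look as follows.

```python
# Sketch of migrating a crashed node's simulation task to the least-loaded node.
# The load figures and task record format are illustrative assumptions.
def pick_target(loads: dict, exclude: set) -> str:
    candidates = {n: l for n, l in loads.items() if n not in exclude}
    return min(candidates, key=candidates.get)   # node with the lightest load

def migrate(task: dict, failed: str, loads: dict) -> str:
    target = pick_target(loads, exclude={failed})
    # Resume from the recorded progress so the FTL test is not interrupted.
    print(f"resuming {task['name']} at {task['progress']:.0%} on {target}")
    return target

loads = {"node-a": 0.72, "node-b": 0.31, "node-c": 0.55}
migrate({"name": "v-flash emulation", "progress": 0.4}, failed="node-a", loads=loads)
```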
In this embodiment, it may be ensured that when a node failure occurs, measures may be taken in time to migrate the relevant hardware simulation task to other nodes, so as to avoid interrupting the testing process of the FTL. This helps to ensure continuity and accuracy of the test.
In addition, the embodiment of the application also provides algorithm test equipment.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an algorithm testing device of a hardware running environment according to an embodiment of the present application.
As shown in fig. 5, the algorithm testing apparatus may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004 and a memory 1005. The communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (Random Access Memory, RAM) or a stable non-volatile memory (Non-Volatile Memory, NVM), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the structure shown in fig. 5 is not limiting of the algorithm testing apparatus and may include more or fewer components than shown, or may combine certain components, or may be a different arrangement of components.
As shown in fig. 5, an operating system, a data storage module, a network communication module, a user interface module, and an algorithm test program may be included in the memory 1005 as one type of readable storage medium.
In the algorithm testing device shown in fig. 5, the network interface 1004 is mainly used for data communication with other devices; the user interface 1003 is mainly used for data interaction with a user; and the algorithm testing device calls the algorithm test program stored in the memory 1005 through the processor 1001 and executes the algorithm testing method provided by the embodiments of the present application.
In addition, the embodiment of the application also provides a readable storage medium.
The readable storage medium of the present application stores an algorithm test program which, when executed by a processor, implements the steps of the algorithm test method described above.
The specific embodiment of the algorithm test program stored in the readable storage medium of the present application executed by the processor is substantially the same as the embodiments of the algorithm test method described above, and will not be described herein.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the better implementation. Based on such understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product stored in a readable storage medium (such as a ROM/RAM, magnetic disk or optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device or the like) to perform the method according to the embodiments of the present application.
The foregoing description covers only the preferred embodiments of the present application and does not limit its patent scope. Any equivalent structure or equivalent process transformation made using the content of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.
Claims (10)
1. The algorithm testing method is characterized by being applied to a distributed hard disk simulation platform, wherein the distributed hard disk simulation platform comprises a plurality of nodes, the nodes are used for independently running a simulation host, an FTL and a virtual flash memory, and the algorithm testing method comprises the following steps:
When the FTL receives a test instruction, converting the test instruction into a flash memory granule operation instruction;
Sending the flash memory granule operation instruction to the virtual flash memory so that the virtual flash memory executes the operation corresponding to the test instruction;
and collecting test data and logs generated in the test process, and storing the test data and logs into the simulation host.
2. The method of algorithm testing of claim 1, wherein the distributed hard disk emulation platform further comprises a node for independently running a virtual micro control unit, the step of converting the test instruction into a flash memory granule operation instruction when the FTL receives the test instruction further comprising:
when the virtual micro control unit receives a command stream, analyzing the command stream;
generating a test instruction according to the analysis result;
And sending the test instruction to the FTL.
3. The algorithm testing method of claim 1 or 2, wherein the step of converting the test instruction into a flash granule operation instruction upon receipt of the test instruction by the FTL further comprises:
acquiring preset parameters and test tasks of the virtual flash memory, and formulating test cases according to the preset parameters and the test tasks;
Writing a test script corresponding to the test case, wherein the test script is used for generating a command stream, and the command stream comprises read-write data, an erasing block and a simulation bad block;
and running the test script on the simulation host, and sending the command stream to a destination node, wherein the destination node comprises the virtual micro control unit or the FTL.
4. The algorithm testing method according to claim 3, wherein the step of obtaining the preset parameters and the test tasks of the virtual flash memory and preparing the test case according to the preset parameters and the test tasks further comprises:
Acquiring an application scene of the FTL, wherein the application scene comprises scene parameters and/or scene names, and the scene parameters comprise at least one of data transmission quantity, continuous writing duration, data updating frequency and continuous reading and writing duration;
And acquiring a test task mapping table, and determining a test task according to the test task mapping table and the application scene.
5. The algorithm testing method according to claim 4, wherein the step of sending the flash granule operation instruction to the virtual flash to cause the virtual flash to perform the operation corresponding to the test instruction comprises:
when the application scene comprises the scene name, acquiring a scene name-parameter mapping table;
And determining scene parameters corresponding to the scene names according to the scene names and the scene name-parameter mapping table.
6. The algorithm testing method according to claim 1, wherein the step of sending the flash granule operation instruction to the virtual flash so that the virtual flash performs the operation corresponding to the test instruction comprises:
After the operation is monitored to be executed, test data and/or logs generated in the test process are obtained;
Determining performance parameters of the FTL according to the test data and/or the log;
and judging whether the test result of the FTL passes or not by detecting whether the performance parameter accords with an expected result or not.
7. The algorithm testing method of claim 1, wherein the algorithm testing method further comprises:
Detecting a node fault except the FTL, and determining simulation hardware associated with the fault node;
Determining a target node through a load balancing mechanism, wherein the target node is the node with the lightest load;
The emulation hardware is run on the target node to avoid interrupting the test procedure of the FTL.
8. The algorithm testing method of claim 1, wherein the algorithm testing method further comprises:
monitoring operation indexes of the simulation host, wherein the operation indexes comprise at least one of flow increment, performance parameters and load states;
And when the operation index is monitored to exceed a preset threshold value, sending out a capacity expansion alarm.
9. An algorithm testing apparatus, the apparatus comprising: a memory, a processor and an algorithm test program stored on the memory and executable on the processor, the algorithm test program being configured to implement the steps of the algorithm test method of any one of claims 1 to 8.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon an algorithm test program, which when executed by a processor, implements the steps of the algorithm test method according to any of claims 1 to 8.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311844343.1A | 2023-12-28 | 2023-12-28 | Algorithm testing method, equipment and readable storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311844343.1A | 2023-12-28 | 2023-12-28 | Algorithm testing method, equipment and readable storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN117909226A (en) | 2024-04-19 |
Family
ID=90691525
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311844343.1A (Pending) | Algorithm testing method, equipment and readable storage medium | 2023-12-28 | 2023-12-28 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN117909226A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118173153A (en) * | 2024-05-16 | 2024-06-11 | Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co., Ltd. | Bad block management program verification method, product and storage medium |
2023
- 2023-12-28: Application CN202311844343.1A filed in China; published as CN117909226A (en); status active, Pending
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |