CN110968499A - Optimization method and device for parallel test in machine learning - Google Patents

Optimization method and device for parallel test in machine learning

Info

Publication number
CN110968499A
Authority
CN
China
Prior art keywords
thread
test
control service
sub-threads
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811159379.5A
Other languages
Chinese (zh)
Inventor
许文波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Gridsum Technology Co Ltd
Original Assignee
Beijing Gridsum Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Gridsum Technology Co Ltd filed Critical Beijing Gridsum Technology Co Ltd
Priority to CN201811159379.5A priority Critical patent/CN110968499A/en
Publication of CN110968499A publication Critical patent/CN110968499A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3692 Test management for test results analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses an optimization method and device for parallel testing in machine learning, in the technical field of machine learning. It aims to solve a problem in the prior art: when multiple threads are tested simultaneously, each must load its own training model, which occupies too much memory, limits the number of threads that can run in parallel, and therefore lowers the efficiency of parallel testing in machine learning. The method comprises: when a test thread is started, starting a control service thread, which loads the training model required by the test thread during testing; triggering the control service thread to generate a sub-thread corresponding to each test thread, the plurality of sub-threads sharing the training model loaded by the control service thread; and testing with the plurality of sub-threads. The invention is applied to parallel testing in machine learning.

Description

Optimization method and device for parallel test in machine learning
Technical Field
The invention relates to the technical field of machine learning, in particular to an optimization method and device for parallel testing in machine learning.
Background
Machine learning is applied in many scenarios and is widely used for data testing. To speed up data testing, a parallel prediction mode is often adopted. Parallel testing means executing multiple parts or subcomponents of a program in parallel on multiple threads. Compared with the traditional single-threaded mode of test execution, multithreading has a great advantage: it reduces test runtime.
At present, when multiple threads are tested simultaneously, each thread must load its own training model, and each loaded model occupies a large amount of memory. When many threads predict in parallel, many training models must therefore be loaded, consuming a large amount of memory; this limits the number of parallel tests, so parallel-test efficiency is low.
Disclosure of Invention
In view of this, the optimization method and apparatus for parallel testing in machine learning provided by the present invention mainly aim to overcome the problem that, when multiple threads are tested simultaneously, multiple training models must be loaded, so that memory consumption is large, the number of parallel test threads is small, and parallel-test efficiency is therefore low; the invention thereby improves the efficiency of parallel testing in machine learning.
In order to solve the above problems, the present invention mainly provides the following technical solutions:
In a first aspect, the present invention provides a method for optimizing parallel tests in machine learning, the method including:
when a test thread is started, starting a control service thread, wherein the control service thread loads the training model required by the test thread during testing;
triggering the control service thread to generate a sub-thread corresponding to each test thread, wherein the plurality of sub-threads share the training model loaded by the control service thread;
and testing with the plurality of sub-threads.
Optionally, each sub-thread carries identification information corresponding to its test thread, and after testing with the plurality of sub-threads, the method further includes:
acquiring the test result of each sub-thread and sending each result to the test thread identified by the information carried in that sub-thread.
Optionally, the test thread and the sub-thread communicate through a local socket.
Optionally, the method further includes:
detecting whether any sub-thread is still under test;
if not, triggering the control service thread to stop working.
Optionally, the test thread further carries identification information corresponding to a test unit, and starting the control service thread when the test thread is started includes:
extracting the control service thread corresponding to the test unit according to the identification information carried by the test thread;
and triggering the control service thread to start and load the training model corresponding to the test unit.
In a second aspect, the present invention provides an apparatus for optimizing parallel tests in machine learning, including:
a starting unit, configured to start a control service thread when a test thread is started, the control service thread loading the training model required by the test thread during testing;
a generating unit, configured to trigger the control service thread to generate a sub-thread corresponding to each test thread, the plurality of sub-threads sharing the training model loaded by the control service thread;
and a testing unit, configured to test with the plurality of sub-threads.
Optionally, each sub-thread carries identification information corresponding to its test thread, and the apparatus further includes:
an acquisition unit, configured to acquire the test result of each sub-thread;
and a sending unit, configured to send each test result to the test thread identified by the information carried in that sub-thread.
Optionally, the test thread and the sub-thread communicate through a local socket.
Optionally, the apparatus further comprises:
a detecting unit, configured to detect whether any sub-thread is still under test;
and a stopping unit, configured to trigger the control service thread to stop working if no sub-thread is still under test.
Optionally, the test thread further carries identification information corresponding to a test unit, and the starting unit includes:
an extraction module, configured to extract the control service thread corresponding to the test unit according to the identification information carried by the test thread;
and a starting module, configured to trigger the control service thread to start and load the training model corresponding to the test unit.
In order to achieve the above object, according to a third aspect of the present invention, a storage medium is provided that comprises a stored program, wherein, when the program runs, the device on which the storage medium resides is controlled to execute the above optimization method for parallel testing in machine learning.
In order to achieve the above object, according to a fourth aspect of the present invention, a processor is provided for running a program, wherein the program, when running, executes the above optimization method for parallel testing in machine learning.
By means of the above technical solutions, the technical solution provided by the present invention has at least the following advantages:
The invention provides a method and apparatus for optimizing parallel testing in machine learning. In the prior art, when multiple threads are tested in parallel, each thread must independently load the training model required for the test. By contrast, this method generates a sub-thread for each test thread from the control service thread, and all sub-threads share the training model loaded by the control service thread. This greatly reduces the number of loaded training models, lowers memory consumption, increases the number of threads that can test in parallel, and thus improves the efficiency of parallel testing in machine learning.
The foregoing is only an overview of the technical solutions of the present invention. To make its technical means clearer, and its above and other objects, features and advantages more understandable, embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flowchart illustrating a method for optimizing parallel tests in machine learning according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method for optimizing parallel tests in machine learning according to an embodiment of the present invention;
FIG. 3 is a thread communication flow diagram provided by an embodiment of the present invention;
FIG. 4 is a block diagram illustrating an apparatus for optimizing parallel tests in machine learning according to an embodiment of the present invention;
fig. 5 is a block diagram illustrating another apparatus for optimizing parallel tests in machine learning according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiment of the invention provides an optimization method for parallel testing in machine learning, as shown in fig. 1, the method comprises the following steps:
101. When a test thread is started, start the control service thread.
The control service thread loads the training model required by the test thread during testing, and corresponds to the test thread in a client/server (Client-Server) structure. The training model in this step may be created and stored in advance, so that during testing it only needs to be loaded and used.
In the embodiment of the present invention, the specific application scenario may be, but is not limited to, a Linux operating system. Note that, for one test unit, when any of its test threads is started, the control service thread corresponding to that test thread may be started, and the training model required for the test is loaded by the control service thread. The control service thread may be started in any existing manner, for example by calling the StartService function, but is not limited thereto.
102. Trigger the control service thread to generate a sub-thread corresponding to each test thread.
The plurality of sub-threads share the training model loaded by the control service thread. This step may be implemented by calling the fork function to generate a sub-thread corresponding to each test thread.
Note that when any test thread of the test unit is started, the control service thread may be triggered to generate a corresponding sub-thread; a new sub-thread is thus generated each time a test thread is started. Because of the copy-on-write mechanism of the Linux operating system, all generated sub-threads share the loaded training model with the control service thread. Therefore, when multiple threads are tested in parallel, only the control service thread needs to load the training model; every sub-thread shares it, which reduces memory consumption.
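The sharing mechanism described above can be sketched in Python. This is a minimal Unix-only illustration, not the patent's implementation: the "model" and function names are invented for the example, and `os.fork()` strictly creates child processes, which here stand in for the patent's "sub-threads". The point is that the large model object is loaded once before forking, and copy-on-write lets every child read it without duplicating it in physical memory.

```python
import os

# A hypothetical "training model": a large object loaded once by the
# control service before fork(). On Linux, fork() is copy-on-write,
# so children read these pages without duplicating them.
model = {"weights": list(range(100_000)), "bias": 0.5}

def predict(x):
    # Toy inference that only reads the shared model.
    return model["weights"][x] + model["bias"]

child_pids = []
for test_id in range(3):          # one sub-worker per test thread
    pid = os.fork()
    if pid == 0:                  # child: run the test, report via exit code
        ok = predict(test_id) == test_id + 0.5
        os._exit(0 if ok else 1)
    child_pids.append(pid)

# The control service reaps its sub-workers.
statuses = [os.waitpid(pid, 0)[1] for pid in child_pids]
all_passed = all(os.WEXITSTATUS(s) == 0 for s in statuses)
print(all_passed)
```

Because only the parent loads `model`, memory cost stays roughly constant as the number of workers grows, which is exactly the advantage the method claims.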
103. Test with the plurality of sub-threads.
Note that after the control service calls fork() to generate a sub-thread, the copy-on-write mechanism of the Linux operating system ensures that all values of the test thread are copied into the newly generated sub-thread, so the sub-thread can perform the same test operation as the test thread.
The embodiment of the invention provides an optimization method for parallel testing in machine learning: when a test thread is started, the control service thread corresponding to it is started; the started control service thread loads the training model the test thread requires; the control service thread then generates a sub-thread corresponding to the test thread; and the test runs in that sub-thread. In the prior art, when multiple threads are tested in parallel, each thread must independently load the training model required for the test. By contrast, this method generates sub-threads from the control service thread, and all sub-threads share the model it loaded, which greatly reduces the number of loaded models, lowers memory consumption, increases the number of parallel test threads, and improves the efficiency of parallel testing in machine learning.
Further, as a refinement and an extension of the embodiment shown in fig. 1, the embodiment of the present invention further provides another optimization method for parallel testing in machine learning, as shown in fig. 2.
201. When a test thread is started, start the control service thread.
The control service thread loads the training model required by the test thread during testing. For the concept of the control service thread, see the corresponding description of step 101; it is not repeated here.
When several machine-learning test units coexist on the same terminal device, each test thread of each test unit carries the identification information of its test unit. In this case, before step 201, the method may further comprise: extracting the control service thread corresponding to the test unit according to the identification information carried by the test thread; and triggering that control service thread to start and load the training model corresponding to the test unit. Each test unit is configured with its own control service thread, so when any test thread of a test unit starts, the corresponding control service thread must be extracted according to the test-unit identification carried by that thread, so that the training model the current test unit needs is loaded by that control service thread.
By carrying the test unit's identification information in the test thread, and extracting the control service thread of the current test unit from that information, each control service thread is assigned accurately to its test unit. This avoids call errors caused by several test units sharing one control service thread, guarantees correct dispatch of control service threads, and improves the accuracy of parallel testing in machine learning.
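The identification-based dispatch can be sketched as a registry keyed by test-unit ID. This is a hypothetical illustration: the class, registry and function names below are invented for the example, not taken from the patent; each test unit maps to exactly one control service, and starting it loads that unit's model.

```python
# Hypothetical registry mapping each test unit's identifier to its
# control service (names are illustrative, not from the patent).
class ControlService:
    def __init__(self, unit_id):
        self.unit_id = unit_id
        self.model = None

    def start(self):
        # Load the training model that this test unit needs.
        self.model = f"model-for-{self.unit_id}"
        return self.model

registry = {uid: ControlService(uid) for uid in ("unit-a", "unit-b")}

def start_for_test_thread(unit_id):
    # Extract the control service matching the identification
    # carried by the test thread, then trigger it to start and
    # load the model for that test unit.
    service = registry[unit_id]
    return service.start()

print(start_for_test_thread("unit-b"))
```

A lookup keyed by the carried ID guarantees that a test thread never triggers another unit's control service, which is the call-accuracy property the paragraph above describes.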
202. Trigger the control service thread to generate a sub-thread corresponding to each test thread.
The plurality of sub-threads share the training model loaded by the control service thread. For the concept of the sub-thread and the specific implementation of this step, see the corresponding description of step 102; it is not repeated here.
203. Test with the plurality of sub-threads.
The test may be data prediction, data judgment, and so on; the embodiment of the present invention does not specifically limit it.
Note that after the control service calls fork() to generate a sub-thread, the copy-on-write mechanism of the Linux operating system ensures that all values of the test thread are copied into the newly generated sub-thread, so the sub-thread can perform the same test operation as the test thread.
204. Acquire the test result of each sub-thread and send each result to the test thread identified by the information carried in that sub-thread.
The test thread and the sub-thread communicate through a local socket. The test result may be a null value, a numeric result such as 0 or 1, a text result such as "yes" or "no", and so on; it can be chosen per application scenario, and the embodiment of the present invention does not specifically limit it.
Note that socket communication is bidirectional: the test thread, as the parent, can send information to the sub-thread, and the sub-thread can return the test result to the parent. Because the result is sent back to the test thread when the sub-thread finishes, the parallel test still formally takes place in the test thread, so interaction between the test thread and other threads is unaffected, while the test actually runs in the sub-thread. Parallel testing is thus completed without occupying excessive memory.
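The bidirectional local-socket channel can be sketched with `socket.socketpair()`, which creates a connected pair of Unix-domain sockets. This is a minimal Unix-only sketch with invented payloads, not the patent's protocol: the parent (standing in for the test thread) sends input to the forked worker and receives the test result back over the same connection.

```python
import os
import socket

# Bidirectional AF_UNIX channel between the test thread (parent)
# and its forked sub-worker (child).
parent_sock, child_sock = socket.socketpair()

pid = os.fork()
if pid == 0:                          # child: receive input, send result back
    parent_sock.close()
    data = child_sock.recv(1024).decode()
    child_sock.sendall(f"result:{data.upper()}".encode())
    child_sock.close()
    os._exit(0)

child_sock.close()
parent_sock.sendall(b"sample")        # parent pushes test input
reply = parent_sock.recv(1024).decode()
parent_sock.close()
os.waitpid(pid, 0)                    # reap the finished worker
print(reply)                          # result:SAMPLE
```

Each side closes the endpoint it does not use, so a broken peer is detected as end-of-stream rather than a hang; this is the standard discipline for forked socketpair communication.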
205. Detect whether any sub-thread is still under test.
206. If no sub-thread is still under test, trigger the control service thread to stop working.
When no sub-thread is under test, the current test unit has completed all its test tasks, so the control service thread corresponding to that test unit is triggered to stop working.
Further, following steps 201-206 and the diagram of fig. 3, an embodiment of the present invention provides an implementation of parallel testing in machine learning for a specific application scenario. The process consists of five steps:
Step one: when any test thread of a test unit starts, trigger the control service thread to start, load the training model that the test thread requires under the control service thread, and have the control service thread generate a sub-thread corresponding to the test thread.
Step two: the sub-thread generated in the previous step starts working; all values of the test thread are copied into the sub-thread, and the test runs in the sub-thread. When the sub-thread finishes the test, it returns the result to the corresponding test thread.
Step three: while the test thread above is testing, one or more further test threads start. The control service thread is triggered to generate a sub-thread for each of them in turn; these sub-threads share the training model loaded by the control service thread and need not load it separately.
Step four: the generated sub-threads interact with their corresponding test threads as in step two.
Step five: the control service thread checks whether any sub-thread is still executing a test operation; once all sub-threads have finished, the control service thread is triggered to stop working, and the parallel test in machine learning is complete.
Note, however, that the implementation described in this application scenario is only an example, not the only implementation of the embodiment of the present invention; it is one optimized implementation of the method.
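The five-step flow can be condensed into one sketch: fork one worker per test, then loop until no worker remains under test, at which point the control service stops. Names and the toy "test" below are invented for illustration; exit codes stand in for real test results.

```python
import os

def run_parallel_tests(inputs):
    """Fork one sub-worker per test input, wait until none remain,
    then return the collected results (steps one to five, condensed)."""
    pids = []
    for value in inputs:
        pid = os.fork()
        if pid == 0:
            os._exit(value % 2)        # toy "test result" as exit code
        pids.append(pid)

    results = {}
    while pids:                        # step five: live sub-workers remain?
        pid, status = os.wait()        # reap as each test finishes
        pids.remove(pid)
        results[pid] = os.WEXITSTATUS(status)
    return results                     # empty pid list => service may stop

outcome = run_parallel_tests([2, 4, 7])
print(sorted(outcome.values()))
```

When the `pids` list drains, the detection of step five succeeds and the control service can shut down, mirroring the stop condition in step 206 above.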
Further, as an implementation of the method shown in fig. 1, an embodiment of the present invention provides an optimization apparatus for parallel testing in machine learning that implements that method. This apparatus embodiment corresponds to the method embodiment; for brevity, details of the method embodiment are not repeated here, but the apparatus can implement all of its contents. As shown in fig. 4, the apparatus includes a starting unit 31, a generating unit 32 and a testing unit 33, wherein:
The starting unit 31 may be configured to start a control service thread when the test thread is started, where the control service thread is used to load a training model required by the test thread during testing.
The generating unit 32 may be configured to trigger the control service thread started by the starting unit 31 to generate sub-threads corresponding to the test threads, where the plurality of sub-threads share the training model loaded by the control service thread.
The testing unit 33 may be configured to perform a test using the child thread generated by the generating unit 32.
Further, as an implementation of the method shown in fig. 2, an embodiment of the present invention provides another optimization apparatus for parallel testing in machine learning that implements that method. This apparatus embodiment corresponds to the method embodiment; for brevity, details of the method embodiment are not repeated here, but the apparatus can implement all of its contents. As shown in fig. 5, the apparatus includes a starting unit 41, a generating unit 42 and a testing unit 43, wherein:
The starting unit 41 may be configured to start a control service thread when the test thread is started, where the control service thread is configured to load a training model required by the test thread during testing.
The generating unit 42 may be configured to trigger the control service thread started by the starting unit 41 to generate sub-threads corresponding to the test threads, where the plurality of sub-threads share the training model loaded by the control service thread.
The testing unit 43 may be configured to perform a test using the child thread generated by the generating unit 42.
Further, the apparatus further comprises: an extraction unit 44.
The extracting unit 44 may be configured to extract the control service thread corresponding to the test unit according to the identification information, which is carried by the test thread and corresponds to the test unit.
The starting unit 41 may be specifically configured to trigger the control service thread to start and load the training model corresponding to the test unit.
Further, the apparatus further comprises:
an obtaining unit 45 may be configured to obtain a test result of each of the sub threads.
The sending unit 46 may be configured to send the test results to the test threads corresponding to the identification information in the sub-threads respectively.
Further, the apparatus further comprises:
a detection unit 47 may be used to detect whether there is the child thread being tested.
A stopping unit 48, which may be used to trigger the control service thread to stop working if there is no sub-thread under test.
The embodiment of the invention provides another optimization apparatus for parallel testing in machine learning. The apparatus comprises a starting unit, a generating unit and a testing unit. When a test thread is started, the control service thread is triggered to generate a sub-thread corresponding to that test thread; the training model is loaded in the control service thread and shared by the plurality of sub-threads, which greatly reduces the number of loaded training models, lowers memory consumption, increases the number of parallel test threads, and improves the efficiency of parallel testing in machine learning.
The optimization apparatus comprises a processor and a memory. The starting unit 31, the generating unit 32, the testing unit 33 and so on are stored in the memory as program units, and the processor executes these program units to realize the corresponding functions.
The processor contains a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be configured, and the efficiency of parallel testing in machine learning is improved by adjusting kernel parameters.
The memory may include volatile memory, random-access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM) in a computer-readable medium, and the memory includes at least one memory chip.
An embodiment of the present invention provides a storage medium, on which a program is stored, and when the program is executed by a processor, the method for optimizing parallel tests in machine learning is implemented.
The embodiment of the invention provides a processor, which is used for running a program, wherein the optimization method of parallel testing in machine learning is executed when the program runs.
The embodiment of the invention provides a device comprising a processor, a memory, and a program stored on the memory and runnable on the processor, wherein the processor, when executing the program, realizes the following steps: when a test thread is started, starting a control service thread, wherein the control service thread loads the training model required by the test thread during testing; triggering the control service thread to generate a sub-thread corresponding to each test thread, wherein the training model is shared by the sub-threads; and testing with the plurality of sub-threads.
Further, each sub thread carries identification information corresponding to the test thread, and after the test is performed by using the plurality of sub threads, the method further includes:
and each sub-thread respectively acquires a test result and sends the test result to the test thread corresponding to the sub-thread.
Furthermore, local socket communication is adopted between the test thread and the sub-thread.
Further, the method further comprises:
detecting whether any sub-thread is still under test;
if not, triggering the control service thread to stop working.
Further, the test thread further carries identification information corresponding to the test unit, and when the test thread is started, starting the control service thread includes:
extracting the control service thread corresponding to the test unit according to the identification information corresponding to the test unit;
and triggering the control service thread to start and load the training model corresponding to the test unit.
An embodiment of the present invention further provides a computer program product which, when executed on a data processing apparatus, is adapted to run a program that initializes the following method steps: when a test thread is started, starting a control service thread, wherein the control service thread loads the training model required by the test thread during testing; triggering the control service thread to generate a sub-thread corresponding to each test thread, wherein the training model is shared by the sub-threads; and testing with the plurality of sub-threads.
Further, each sub thread carries identification information corresponding to the test thread, and after the test is performed by using the plurality of sub threads, the method further includes:
and each sub-thread respectively acquires a test result and sends the test result to the test thread corresponding to the sub-thread.
Furthermore, local socket communication is adopted between the test thread and the sub-thread.
Further, the method further comprises:
detecting whether any sub-thread is still under test;
if not, triggering the control service thread to stop working.
Further, the test thread further carries identification information corresponding to the test unit, and when the test thread is started, starting the control service thread includes:
extracting the control service thread corresponding to the test unit according to the identification information corresponding to the test unit;
and triggering the control service thread to start and load the training model corresponding to the test unit.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method for optimizing parallel tests in machine learning is characterized by comprising the following steps:
when a testing thread is started, starting a control service thread, wherein the control service thread is used for loading a training model required by the testing thread during testing;
triggering the control service thread to generate sub-threads corresponding to the test threads respectively, wherein the plurality of sub-threads share the training model loaded by the control service thread;
and testing by using a plurality of the sub-threads.
2. The method according to claim 1, wherein each sub-thread carries identification information corresponding to the test thread, and after the testing is performed by using the plurality of sub-threads, the method further comprises:
and acquiring the test result of each sub-thread and respectively sending the test result to the test thread corresponding to the identification information in the sub-thread.
3. The method of claim 2, wherein the test thread and the child thread communicate using a local socket.
4. The method according to any one of claims 1-3, further comprising:
detecting whether any sub-thread under test still exists;
if not, triggering the control service thread to stop working.
5. The method according to any one of claims 1 to 3, wherein the test thread further carries identification information corresponding to the test unit, and when the test thread is started, starting the control service thread comprises:
extracting the control service thread corresponding to the test unit according to the identification information corresponding to the test unit carried by the test thread;
and triggering the control service thread to start and load the training model corresponding to the test unit.
6. An apparatus for optimizing parallel tests in machine learning, comprising:
the starting unit is used for starting a control service thread when a test thread is started, and the control service thread is used for loading a training model required by the test thread during testing;
the generating unit is used for triggering the control service thread to generate sub-threads corresponding to the test threads respectively, and the plurality of sub-threads share the training model loaded by the control service thread;
and the test unit is used for testing by utilizing a plurality of the sub-threads.
7. The apparatus of claim 6, further comprising:
a detecting unit for detecting whether the sub-thread under test exists;
and the stopping unit is used for triggering the control service thread to stop working if the sub-thread under test does not exist.
8. The apparatus of claim 6, wherein the test thread further carries identification information corresponding to a test unit, and the starting unit comprises:
the extraction module is used for extracting the control service thread corresponding to the test unit according to the identification information which is carried by the test thread and corresponds to the test unit;
and the starting module is used for triggering the control service thread to start and load the training model corresponding to the test unit.
9. A storage medium, characterized in that the storage medium comprises a stored program, wherein when the program runs, a device in which the storage medium is located is controlled to execute the optimization method for parallel testing in machine learning according to any one of claims 1 to 5.
10. A processor, configured to execute a program, wherein the program executes the method for optimizing parallel testing in machine learning according to any one of claims 1 to 5.
CN201811159379.5A 2018-09-30 2018-09-30 Optimization method and device for parallel test in machine learning Pending CN110968499A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811159379.5A CN110968499A (en) 2018-09-30 2018-09-30 Optimization method and device for parallel test in machine learning


Publications (1)

Publication Number Publication Date
CN110968499A true CN110968499A (en) 2020-04-07

Family

ID=70029009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811159379.5A Pending CN110968499A (en) 2018-09-30 2018-09-30 Optimization method and device for parallel test in machine learning

Country Status (1)

Country Link
CN (1) CN110968499A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110010712A1 (en) * 2009-06-18 2011-01-13 Thober Mark A Methods for Improving Atomicity of Runtime Inspections
CN102768637A (en) * 2011-05-05 2012-11-07 阿里巴巴集团控股有限公司 Method and device for controlling test execution
CN102955721A (en) * 2011-08-16 2013-03-06 阿里巴巴集团控股有限公司 Device and method for pressure generation for testing
CN108280027A (en) * 2018-02-08 2018-07-13 金蝶软件(中国)有限公司 A kind of concurrently debugging rendering intent and device of script


Similar Documents

Publication Publication Date Title
US11307849B2 (en) Method for creating hyperledger fabric network, controller and storage medium
US10789271B2 (en) System, method, and apparatus for synchronization among heterogeneous data sources
CN108089856B (en) Page element monitoring method and device
CN111131352B (en) Theme switching method and device
CN109583223B (en) Detection method and device for big data safety deployment
CN105302717A (en) Detection method and apparatus for big data platform
CN112306411B (en) Data storage method and device, nonvolatile storage medium and processor
CN111026080A (en) Hardware-in-loop test method and device for controller
CN111949513A (en) Configuration file loading method and device, electronic equipment and readable storage device
CN104298589A (en) Performance test method and performance test equipment
CN110968499A (en) Optimization method and device for parallel test in machine learning
CN107390982B (en) Screenshot method, screenshot equipment and terminal equipment
CN109709418B (en) Detection method and device for charging facility, storage medium and processor
CN112559313A (en) Test case setting method and device, storage medium and electronic equipment
CN118072806A (en) Memory detection method, integrated circuit device, storage medium, and laser radar
CN106202262B (en) Information processing method and electronic equipment
CN110968377A (en) Interface display processing method and device
CN104239199A (en) Virtual robot generation method, automatic test method and related device
CN110955813A (en) Data crawling method and device
CN110769017A (en) Data request processing method and device, storage medium and processor
CN110908876B (en) Method and device for acquiring hardware performance data
CN116302095A (en) Instruction jump judging method and device, electronic equipment and readable storage medium
CN113886342A (en) File format conversion method and device, storage medium and processor
CN108241573B (en) Integrated test code generation method and device
CN109992466B (en) Virtual machine fault detection method and device, computer readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200407