WO2013016429A2 - Conflict reconciliation, incremental representation, and lab change retesting procedures for autoverification operations - Google Patents
Conflict reconciliation, incremental representation, and lab change retesting procedures for autoverification operations
- Publication number
- WO2013016429A2 (PCT/US2012/048151)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- autoverification
- parameter
- rule
- output
- applying
- Prior art date
Classifications
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/40—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for data related to laboratory analysis, e.g. patient specimen analysis
Definitions
- Clinical laboratories, whether in government, educational, or private settings, include different types of laboratory equipment for performing various tests on samples.
- The different pieces of equipment that perform these tests include hematology, coagulation, immunoassay, and chemistry analyzers. Since certain samples require testing on more than a single piece of equipment, a computer system is often used to synchronize and prioritize testing so that samples are processed efficiently. Using a computer system for this function is far more efficient than relying on a human operator to manage the tests, thus reducing costs, speeding analysis, and helping ensure accurate results. In fact, clinical laboratories that limit or even eliminate human operators may produce the most accurate results. While computers help ensure accuracy in laboratory performance, a number of issues must be addressed before the test results of a particular laboratory may be relied upon for diagnostic purposes.
- An autoverification operation is the process of using one or more rules to automatically validate a test result by a computer. Each operation generates an autoverification outcome for the clinical test result.
- Example autoverification outcomes include: (i) validate the test result; (ii) hold the test result for manual review; (iii) rerun the test; (iv) dilute the sample (and rerun the test on the diluted sample); and (v) cancel the test.
- An autoverification operation may comprise a single, complex rule that is difficult to enter or understand because it is composed of compound logical expressions with multiple branches in the flow of logic through the rule.
- One way of making such an autoverification operation easier to enter and understand is to break this complex rule up into multiple simpler rules or subparts. Each rule performs a specific task (such as determining if a value is within a given range or comparing one value with a prior value).
- Each rule then generates a suggested action as output. If an autoverification operation contains only one rule, that suggested action becomes the outcome of the operation. However, multiple rules applied to a clinical test result may generate conflicting suggested actions, and a decision must be made as to which suggested action will be the outcome of the autoverification operation. It would be advantageous to enable the computer to make that decision, thereby reducing the operator's workload and making the validation process more efficient. Prior approaches to reconciling this conflict include holding the results for manual review: each possible action is presented to the user, and the system awaits the user's choice of which action to take.
- Another prior approach requires the user to choose the order in which the multiple rules are applied so that only a single suggested action is generated (the one associated with the rule applied first), regardless of the possibility of a preferred other action from a later-applied rule. This results in reduced efficiency, often with an excessive number of test results requiring operator review.
- Simulations are used to confirm that an autoverification operation functions as intended.
- Example test results, or parameters indicative of a test result, are provided by an operator, who then confirms that application of the operation to each example test result provides the expected autoverification outcome.
- Particularly complicated autoverification operations may involve multiple rules, or rules having multiple logic segments (i.e., subparts) and, therefore, multiple possible actions.
- Changes to the configuration of the laboratory require evaluation of an operation to ensure that the operation performs as expected.
- The operation can be validated by applying it to a set of example test results or parameters and verifying that it provides the expected outcome.
- A clinical laboratory may be configured with multiple laboratory instruments. During operation, the configuration of the lab may change, such as when an instrument goes off-line or when a particular reagent or test becomes unavailable. This changed configuration may affect the validity of an autoverification operation if any part of that operation depends on a particular laboratory instrument or test that is lost in the changed configuration.
- In one aspect, the technology relates to a method of processing a clinical test result, the method including: applying a first rule to the clinical test result to generate a first suggested action; applying a second rule to the clinical test result to generate a second suggested action, wherein the second suggested action is associated with a weighted factor; and automatically performing the second suggested action based at least in part on the weighted factor.
- In another aspect, the technology relates to a method of evaluating an autoverification operation, the method including: applying a first logic segment to a first parameter to generate a first logic decision; and applying a second logic segment to a second parameter to generate a second logic decision, wherein applying the second logic segment requires applying the first logic segment to the first parameter.
- In another aspect, the technology relates to a method of evaluating an autoverification operation, the method including: receiving a first parameter to evaluate a first logic segment of the autoverification operation; applying the autoverification operation to the first parameter to generate a first output; receiving a second parameter to evaluate a second logic segment of the autoverification operation; and applying the autoverification operation to the first parameter and the second parameter to generate a second output, wherein the step of receiving the second parameter occurs after the step of applying the autoverification operation to the first parameter to generate the first output.
- In another aspect, the technology relates to a method of evaluating an autoverification operation, the method including: applying, to a first parameter, the autoverification operation based on a first laboratory configuration to generate a first output; detecting a change in the first laboratory configuration, the change resulting in a second laboratory configuration; applying, to the first parameter, the autoverification operation based on the second laboratory configuration to generate a second output; and comparing the first output to the second output.
- In another aspect, the technology relates to a method of evaluating a rule of an autoverification operation, the method including: receiving a first parameter to evaluate a first logic segment of the rule; applying the autoverification operation to the first parameter to generate a first output; receiving a second parameter to evaluate a second logic segment of the rule; and applying the autoverification operation to the second parameter to generate a second output, wherein the step of applying the autoverification operation to the second parameter includes applying the first output as input to the second logic segment.
- FIG. 1 depicts an exemplary clinical laboratory testing system.
- FIG. 2 depicts a method for processing a clinical test result and reconciling conflicting suggested actions.
- FIGS. 3 and 4 depict methods for simulating autoverification operations in laboratory settings.
- FIG. 5 depicts a method of evaluating an autoverification operation when a laboratory configuration has changed.
- The technology disclosed herein addresses the problems with existing systems identified above.
- A schematic of a clinical laboratory 100 that would benefit from the disclosed technology is depicted in FIG. 1.
- The laboratory 100 includes a control computing device 102 and one or more pieces of laboratory equipment 104. Any number of pieces of equipment (e.g., 104A, 104B, 104C, . . . 104N, where "N" represents any integer value), having virtually any function, may be utilized.
- An autoverification operation for a clinical test result includes one or more rules.
- Each rule includes one or more logical expressions or segments to generate a suggested action as an output, using the clinical test result as an input.
- The suggested action is what the autoverification outcome would be if the autoverification operation were composed of only a single rule.
- As an example, an autoverification operation may include the following two rules:
- The first rule determines whether the test result is within an instrument's normal operating range. It uses the test result value as input and generates a suggested action of 'validate' if the test result is within the range, and a suggested action of 'hold' if it is out of that range.
- The second rule determines whether the test result is below a medically critical value. It also uses the test result value as input, generating a suggested action of 'validate' if that value is at or below a critical level, and a suggested action of 'rerun' if it is above that level.
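The two rules just described can be sketched in code. This is an illustrative sketch only: the threshold values, function names, and action strings below are assumptions, not values from the disclosure.

```python
# Illustrative sketch of the two example rules. The numeric thresholds
# are hypothetical; a real operation would read them from configuration.
INSTRUMENT_RANGE = (10.0, 500.0)  # instrument's normal operating range
CRITICAL_LEVEL = 400.0            # medically critical value

def rule_instrument_range(result: float) -> str:
    """Rule 1: 'validate' if the result is within the operating range, else 'hold'."""
    low, high = INSTRUMENT_RANGE
    return "validate" if low <= result <= high else "hold"

def rule_critical_level(result: float) -> str:
    """Rule 2: 'validate' if at or below the critical level, else 'rerun'."""
    return "validate" if result <= CRITICAL_LEVEL else "rerun"

print(rule_instrument_range(600.0), rule_critical_level(600.0))  # hold rerun
```

Note that a single value above the instrument range draws two conflicting suggestions, 'hold' and 'rerun', which is exactly the conflict the weighted factors are designed to resolve.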
- The suggested actions include, for example, (i) validating the clinical test result, (ii) holding the clinical test result, (iii) rerunning a clinical test, (iv) diluting a clinical test sample, and (v) cancelling the test.
- The hold suggestion has the highest weighted value, followed by dilution, rerun, and cancel, then validate.
- The weighted factors of these suggested actions may be changed by an operator.
- The weighted factors may be graphically displayed with a color-coded icon or other indicator to identify the importance of the suggested action. For example, the hold suggestion may be identified with a red icon, while the validate suggestion may be identified with a green icon.
- The weighted factor is used to establish a priority among the various suggested actions.
- One example of a method 120 for processing a clinical test result and reconciling conflicting suggested actions is depicted in FIG. 2.
- The method includes processing clinical test results by applying any number of rules of an autoverification operation to generate suggested actions, and then taking actions based on the results obtained.
- An operation may include a single rule or multiple rules.
- A validation rule may be a procedure that evaluates parameters of the test and sample and decides to perform one operation or another on the test, sample, or result.
- An operation may include a single, complex rule having a sequence of nodes or logic segments.
- A logic segment includes an input and either a logical decision or an action dependent on the input, wherein the input may be a test result, demographic information of the sample, or a decision from a previously executed logic segment.
- A first rule is applied to a clinical test result to generate a first suggested action in operation 124, which may be one of the actions identified above.
- A second rule is also applied to the clinical test result in operation 126, and a second suggested action is generated in operation 128.
- Either or both of these suggested actions may include a weighted factor, as described above.
- An operation 130 is performed to determine which weighted factor is more important.
- The action having the most important weighted factor is then automatically performed in operation 132 by the computing device as the autoverification outcome of that clinical test.
- Although the steps of this method are depicted in sequence, the application of the first and second rules may be performed in parallel, or in any other order required or desired by the operator. Suggested actions resulting from operations having multiple rules are reserved until all the rules associated with a clinical test have been performed.
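The weighted reconciliation described above can be sketched as selecting the suggested action with the highest pre-assigned weight. The numeric weights below are illustrative assumptions following the ordering given earlier (hold highest, then dilute/rerun/cancel, then validate); equal weights for distinct actions model a conflict that cannot be resolved automatically.

```python
# Hypothetical weights reflecting the ordering described above; an
# operator could reconfigure these values.
ACTION_WEIGHTS = {"hold": 3, "dilute": 2, "rerun": 2, "cancel": 2, "validate": 1}

def reconcile(suggested_actions):
    """Pick the highest-weight action among the rules' suggestions.
    Return None when two distinct actions tie, modeling the case in
    which autoverification is disabled and the user is notified."""
    best = max(suggested_actions, key=lambda a: ACTION_WEIGHTS[a])
    tied = [a for a in suggested_actions
            if a != best and ACTION_WEIGHTS[a] == ACTION_WEIGHTS[best]]
    return None if tied else best

print(reconcile(["hold", "rerun"]))    # hold
print(reconcile(["dilute", "rerun"]))  # None (unresolvable conflict)
```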
- The autoverification operation stops or is disabled upon the occurrence of certain events. For example, the autoverification operation is disabled when two suggested next actions conflict with each other (e.g., when the rules have the same weighted factor).
- The autoverification operation may also stop or be disabled if the user asynchronously takes an action on a test, such as ordering a rerun over a pending dilution.
- In that case, a hold signal can be generated, and the user notified so that the user can decide the appropriate action to take.
- If the autoverification operation includes the first and second rules above, applying the two rules to a test result value that was above the instrument's operational range (and therefore also above the medically critical level) would generate the suggested actions of 'hold for manual review' (to investigate why the value is beyond a reportable range) and 'rerun the test' (to confirm a critical result). Because the out-of-range value may indicate a problem with the instrument or the sample, simply rerunning the test is likely to reproduce the same problem, and holding for manual review to investigate the cause is therefore the more efficient result. By pre-assigning a higher weighted factor to the suggested action of hold, the system can automatically select this more efficient outcome.
- Another problem with existing systems is the inability of those systems to ensure that the autoverification operations are running properly when they are set up.
- One solution proposed herein contemplates performing a simulation of an autoverification operation by displaying the functioning of the various logic segments of the operation as a step-by-step progression on a timeline. An operator may be prompted to enter parameters to begin a simulation procedure.
- A parameter may include any information that may be necessary to perform a particular operation or a part thereof.
- Exemplary parameters include previous test results and data values (either based on real tests or an operator's expectations based on certain factors). Other parameters may include demographic information regarding the donor of the test sample, sample information such as the specimen type and draw time, and the tests ordered on the sample.
- The system displays as a timeline the steps taken by the system in applying the operation to the entered data, including the next action (or actions) that would be suggested by the system.
- The user may then enter example results (or a user action) for this next suggested action, and the timeline updates to display further steps taken by the system, including the new next suggested action. In this way, a complete path through the operation (or the logic segments thereof) is evaluated in a step-by-step process that is easy for a user or operator to follow and understand.
- Outputs obtained after performance of each logic segment may be used to verify that logic segments and simple rules are behaving as the user intended.
- Example outputs include, but are not limited to, validating a result, canceling a test, ordering a rerun, ordering a new test, and providing a test result.
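The timeline-style simulation can be sketched as a loop that applies one logic segment at a time, displays each intermediate decision, and prompts the operator for the next parameter. The segment functions, thresholds, and action names below are hypothetical.

```python
def simulate(segments, get_parameter, display=print):
    """Walk the operation's logic segments in order, feeding each one the
    operator-supplied parameter plus the previous segment's decision, and
    displaying each step as a timeline entry."""
    decision = None
    for i, segment in enumerate(segments, start=1):
        parameter = get_parameter(i)            # prompt operator for input
        decision = segment(parameter, decision)
        display(f"step {i}: parameter={parameter!r} -> {decision!r}")
    return decision                             # final suggested outcome

# Hypothetical two-segment rule: usable-range check, then critical check.
segments = [
    lambda value, _prev: "dilute" if value > 500 else "in range",
    lambda value, _prev: "hold" if value > 400 else "validate",
]
params = iter([600.0, 450.0])
print(simulate(segments, lambda _i: next(params)))  # hold
```

Replacing `print` with a graphical display routine would yield the step-by-step timeline the operator reviews.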
- FIG. 3 depicts a particular method 140 for simulating an autoverification operation in a laboratory setting.
- A first parameter is applied to a first logic segment to produce a first logic decision.
- A second parameter is applied to a second logic segment to produce a second logic decision.
- In some embodiments, the first logic segment must be reapplied in order to apply the second logic segment; in other embodiments, the first logic segment need not be reapplied. Generation of multiple logic decisions continues until the simulation of the entire rule or operation is complete.
- The system then prompts the operator for that second parameter in operation 148.
- The system then applies the second parameter to the second logic segment of a rule in operation 150, and a second logic decision is generated in operation 152.
- This result, as well as the second logic segment, first parameter, first logic decision, and any other relevant information are then displayed, in operation 154.
- So that the entire simulation, and therefore the entire operation, will be clearly understood by the operator, the system again shows the operator the first logic segment and other relevant information prior to displaying the second logic segment, second parameter, second logic decision, and other relevant information. This process repeats until the entire operation has been simulated, thus allowing the user to verify the proper functionality of the operation.
- The simulation may be operator-defined, such that only certain logic decisions, logic segments, etc., are displayed, depending on operator requirements.
- In the method 160 depicted in FIG. 4, the first parameter is received by the system in operation 162 and used to evaluate a first logic segment of a rule of the autoverification operation.
- The first parameter may be received directly upon prompting an operator, obtained from a data set entered previously by an operator, or obtained from a data set derived from a sample test result.
- The entire autoverification operation is then applied to the first parameter in operation 164 to generate a first output in operation 166. If additional parameters are required, they are subsequently obtained by the system, as described above (as in operations 168, 170, and 172). As the simulation continues, the entire autoverification operation is applied in operation 170 to the first parameter and the second parameter (as well as subsequent parameters, as the simulation proceeds further) to generate a second output in operation 172.
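The cumulative style of method 160, in which the whole operation is reapplied to all parameters received so far, can be sketched as follows; the example operation, its 'need more input' sentinel, and the parameter values are illustrative assumptions.

```python
def simulate_incremental(operation, prompt, display=print):
    """Sketch of method 160: gather a parameter, apply the entire
    operation to everything gathered so far, and repeat until the
    operation no longer asks for more input."""
    parameters = []
    while True:
        parameters.append(prompt(len(parameters) + 1))
        output = operation(parameters)
        display(f"after {len(parameters)} parameter(s): {output!r}")
        if output != "need more input":        # hypothetical sentinel output
            return output

# Hypothetical operation: needs two values, then compares the second
# (a rerun result) against the first.
def example_operation(params):
    if len(params) < 2:
        return "need more input"
    return "validate" if params[1] <= params[0] else "hold"

values = iter([500.0, 450.0])
print(simulate_incremental(example_operation, lambda _i: next(values)))  # validate
```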
- This rule, which evaluates only two ranges (an instrument range and a critical range) to determine the autoverification outcome, requires a complicated series of compound logical statements that are difficult to write and difficult to follow and understand.
- As an example, the method 160 depicted in FIG. 4 receives the first test result as a first parameter.
- The user enters a value that is above the usable range of the instrument.
- The system applies that value to a first logic segment of the rule (the segment that determines whether the value is within a usable range) to determine a first output (the action of diluting the sample for retest).
- The system displays this action, prompting the user to enter the test result for the diluted sample (the second parameter) to continue progression through the rule.
- The user then enters a value that is below the maximum usable range of the instrument but above a medically critical level. The system applies that value to a second logic segment of the rule (determining whether the diluted test result is within range but critically high) and displays the result (hold for manual review of a critically high result). In this way, the user is able to follow the rule through simple, progressive steps to ensure that the rule is performing each step as intended.
- If the test result of the diluted sample is greater than the maximum usable range of the instrument, then the result is held.
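The dilution walkthrough above can be condensed into a sketch of the rule's two logic segments; the numeric thresholds and action names are hypothetical.

```python
MAX_USABLE = 500.0  # hypothetical maximum usable range of the instrument
CRITICAL = 400.0    # hypothetical medically critical level

def dilution_rule(result, diluted_result=None):
    """First segment: usable-range check on the original result.
    Second segment: evaluate the diluted retest result."""
    if result <= MAX_USABLE:
        return "validate" if result <= CRITICAL else "hold"
    if diluted_result is None:
        return "dilute"              # prompt for the diluted retest value
    if diluted_result > MAX_USABLE:
        return "hold"                # still beyond the usable range: hold
    return "hold" if diluted_result > CRITICAL else "validate"

print(dilution_rule(600.0))         # dilute
print(dilution_rule(600.0, 450.0))  # hold (in range but critically high)
```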
- Parameters are used to evaluate logic segments (i.e., subparts) of a first rule.
- One or more outputs from application of one or more logic segments to one or more parameters are produced. These outputs may be an intermediate output of a rule, and in certain cases, the simulation is unable to proceed beyond a first output in the absence of a second parameter.
- A second or subsequent output may be the final outcome of an autoverification operation.
- Example test results are obtained when first validating an autoverification operation, or when simulating an autoverification operation.
- The system detects whenever the configuration of the lab changes and reevaluates the autoverification operation by reapplying each example test result to the operation (or a logic segment thereof) and comparing the output with the previously saved output from the original evaluation.
- The system may automatically continue testing and autoverifying clinical test results if the autoverification operation is still valid.
- That is, the system can automatically proceed, without operator intervention, if the changed configuration has no effect on the autoverification process, thereby minimizing delays due to manual intervention.
- One example of such a method 180 is depicted in FIG. 5.
- An autoverification operation based on a first laboratory configuration is applied to a first parameter in operation 184.
- The parameter may be data entered by an operator based on previously obtained test results, or the parameter may be the result of a test itself. Alternatively, the parameter may be entered by the operator based on other factors (operator experience or expectations, for example).
- A first output is generated in operation 186, and all relevant information related thereto is stored.
- The autoverification operation may continue, and the system may continue to process samples in operation 188 until it detects a change in the first laboratory configuration in operation 190, resulting in a second, updated laboratory configuration (set in operation 192).
- At that point, processing of samples would cease so that proper functioning of the autoverification operation based on the second, updated configuration may be verified.
- The autoverification operation based on this second, updated laboratory configuration is then applied to the first parameter in operation 194, and a second output is generated in operation 196. If a comparison between the first and second outputs shows no difference in operation 198, the autoverification operation is determined to be functioning properly (i.e., it is verified) in operation 200, and further processing of samples may proceed. If there is a difference, further processing may be prohibited and a notification may be sent to an operator in operation 202.
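The compare-and-notify loop of FIG. 5 can be sketched as replaying saved example parameters under the new configuration and collecting any outputs that changed; the data structures and the example operation below are assumptions, not the patented implementation.

```python
def revalidate(operation, saved_cases, new_config):
    """Reapply each saved (parameter, saved_output) pair under the new
    configuration; return the cases whose output changed. An empty list
    means the operation is still verified and processing may resume."""
    changed = []
    for parameter, saved_output in saved_cases:
        output = operation(parameter, new_config)
        if output != saved_output:
            changed.append((parameter, saved_output, output))
    return changed

# Hypothetical operation: validate iff the value is in the configured range.
def range_operation(value, config):
    low, high = config["validation_range"]
    return "validate" if low <= value <= high else "hold"

saved = [(101.0, "validate")]  # output recorded under the old range (100, 200)
print(revalidate(range_operation, saved, {"validation_range": (120, 200)}))
# [(101.0, 'validate', 'hold')] -> notify the operator
```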
- A change in lab configuration can be, e.g., the addition or removal of an instrument, a change in the assay menu of an instrument, a change in the model of an instrument, a change in the QC requirements of an instrument, a change in the performance of an assay, or a change in the acceptability criteria of a result (e.g., the validation range of a test, a delta check value, a critical limit value, or a sample type for a test).
- The system may detect the change by input from a user, or it may detect the change automatically by monitoring the status and operation of instruments attached to or in communication with the system.
- As an example, the autoverification operation for a test for the cation sodium (Na) includes the following three rules:
- This autoverification operation requires test results for Na, hemolysis, chloride (Cl), bicarbonate (CO2), and potassium (K). This autoverification operation can therefore be valid only in a lab where each of these tests is available. If, for example, the lab configuration changes so that hemolysis results are no longer available, the system automatically reevaluates the autoverification operation and can notify the user that the Na test cannot be autoverified (or even manually verified, since both require hemolysis results) under the current lab configuration. This helps prevent wasteful tests that cannot be verified. Similarly, in this example, the loss of the ability to test for calcium (Ca), a cation that is not used to calculate the anion gap (AGAP), would not affect the autoverification of Na, and Na testing could automatically continue without manual intervention.
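The dependency check in this example can be sketched as simple set arithmetic over the tests an operation requires versus the lab's current menu; the function name and data layout are illustrative assumptions.

```python
# Tests the hypothetical Na operation depends on, per the example above.
REQUIRED_TESTS = {"Na", "hemolysis", "Cl", "CO2", "K"}

def missing_dependencies(required, available):
    """Return the required tests absent from the lab's current menu;
    a non-empty result means the operation cannot run as configured."""
    return required - available

lab_menu = {"Na", "Cl", "CO2", "K", "Ca"}  # hemolysis has gone off-line
print(missing_dependencies(REQUIRED_TESTS, lab_menu))  # {'hemolysis'}
# Removing Ca instead, which the operation never uses, would return set().
```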
- The first rule of the Na autoverification operation determines whether the Na test result is within a set validation range.
- This validation range is part of the first laboratory configuration.
- For example, the validation range is set to be between 100 and 200 units.
- A first parameter to evaluate whether the autoverification operation performs as expected is a value of 101 units. Applying the autoverification operation to this parameter would give an expected outcome of 'validate Na'. If the set validation range were changed to a range of 120 to 200 units, the system would detect this change to a second configuration and automatically reevaluate all autoverification operations.
- the technology described herein can be realized in hardware, software, or a combination of hardware and software.
- a typical combination of hardware and software can be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
- The computing device 102 (shown in FIG. 1) is a computer system.
- Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
- A group of functional modules that control the operation of the CPU and effectuate the operations of the technology as described above can be located in system memory (on the server or on a separate machine, as desired).
- An operating system directs the execution of low-level, basic system functions such as memory allocation, file management, and operation of mass storage devices.
- A control block, implemented as a series of stored instructions, responds to client-originated access requests by retrieving the user-specific profile and applying the one or more rules as described above.
- The software may be configured to run on any computing device or workstation, such as a PC or PC-compatible machine, an Apple Macintosh, a Sun workstation, etc.
- Generally, any device can be used as long as it is able to perform all of the functions and capabilities described herein.
- The particular type of computing device, whether a workstation or other system, is not central to the technology, nor is the configuration, location, or design of a database, which may be flat-file, relational, or object-oriented, and may include one or more physical and/or logical components.
- Such a computing device may include a network interface continuously connected to the network, and thus support numerous geographically dispersed users and applications.
- The network interface and the other internal components of the servers intercommunicate over a main bi-directional bus.
- The main sequence of instructions effectuating the functions of the technology can reside on a mass-storage device (such as a hard disk or optical storage unit) as well as in main system memory during operation. Execution of these instructions and effectuation of the functions of the technology is accomplished by a processing device, such as a central processing unit ("CPU").
- the computing device typically includes at least some form of computer readable media, and such computer readable media can be used to store the computer program product (containing data instructions) thereon.
- Computer readable media includes any available media that can be accessed by the computing device.
- Computer readable media include computer readable storage media and computer readable communication media.
- Computer readable storage media includes volatile and nonvolatile, removable and non-removable media implemented in any device configured to store information such as computer readable instructions, data structures, program modules or other data.
- Computer readable storage media includes, but is not limited to, random access memory, read only memory, electrically erasable programmable read only memory, flash memory or other memory technology, compact disc read only memory, digital versatile disks or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the computing device.
- Computer readable storage media is a type of non-transitory storage media.
- Computer readable communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
- The term "modulated data signal" refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- Computer readable communication media includes wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, radio frequency, infrared, and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Epidemiology (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Investigating Or Analysing Biological Materials (AREA)
- Automatic Analysis And Handling Materials Therefor (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112014001822A BR112014001822A2 (en) | 2011-07-25 | 2012-07-25 | conflict reconciliation, incremental representation, and laboratory variation testing procedures for self-checking operations |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161511473P | 2011-07-25 | 2011-07-25 | |
US61/511,473 | 2011-07-25 |
Publications (3)
Publication Number | Publication Date |
---|---|
WO2013016429A2 true WO2013016429A2 (en) | 2013-01-31 |
WO2013016429A8 WO2013016429A8 (en) | 2013-04-04 |
WO2013016429A3 WO2013016429A3 (en) | 2014-01-30 |
Family
ID=46727573
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2012/048151 WO2013016429A2 (en) | 2011-07-25 | 2012-07-25 | Conflict reconciliation, incremental representation, and lab change retesting procedures for autoverification operations
Country Status (2)
Country | Link |
---|---|
BR (1) | BR112014001822A2 (en) |
WO (1) | WO2013016429A2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111406294A (en) * | 2017-10-26 | 2020-07-10 | 拜克门寇尔特公司 | Automatically generating rules for laboratory instruments |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0619715B2 (en) * | 1987-12-03 | 1994-03-16 | シャープ株式会社 | Question answering device |
US7158890B2 (en) * | 2003-03-19 | 2007-01-02 | Siemens Medical Solutions Health Services Corporation | System and method for processing information related to laboratory tests and results |
US8868353B2 (en) * | 2007-02-02 | 2014-10-21 | Beckman Coulter, Inc. | System and method for testing autoverification rules |
2012
- 2012-07-25: WO application PCT/US2012/048151 published as WO2013016429A2 (active, Application Filing)
- 2012-07-25: BR application BR112014001822A published as BR112014001822A2 (not active, IP Right Cessation)
Non-Patent Citations (1)
Title |
---|
None |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111406294A (en) * | 2017-10-26 | 2020-07-10 | 拜克门寇尔特公司 | Automatically generating rules for laboratory instruments |
CN111406294B (en) * | 2017-10-26 | 2024-02-13 | 拜克门寇尔特公司 | Automatically generating rules for laboratory instruments |
Also Published As
Publication number | Publication date |
---|---|
BR112014001822A2 (en) | 2017-02-21 |
WO2013016429A3 (en) | 2014-01-30 |
WO2013016429A8 (en) | 2013-04-04 |
Similar Documents
Publication | Title |
---|---|
US9703686B2 (en) | Software testing optimizer |
EP2641179B1 (en) | Method and apparatus for automatic diagnosis of software failures |
CN110674047B (en) | Software testing method and device and electronic equipment |
US11176019B2 (en) | Automated breakpoint creation |
CN110765596B (en) | Modeling method and device for auditing process simulation model and electronic equipment |
CN110941547B (en) | Automatic test case library management method, device, medium and electronic equipment |
CN107038120A (en) | Software testing method and equipment |
CN103885898B (en) | Analysis system for analyzing biological samples with multiple operating environments |
US10061679B2 (en) | Evaluating fairness in devices under test |
KR101337216B1 (en) | Computer system and signature verification server |
CN116670692A (en) | Reinforcement learning for test suite generation |
US8818783B2 (en) | Representing state transitions |
US20160217017A1 (en) | Determining workflow completion state |
US11639804B2 (en) | Automated testing of HVAC devices |
CN112181485A (en) | Script execution method and device, electronic equipment and storage medium |
WO2013016429A2 (en) | Conflict reconciliation, incremental representation, and lab chance retesting procedures for autoverification operations |
CN115964122A (en) | Method for operating an in-vitro diagnostic laboratory control software module |
CN113126881B (en) | System configuration method, device, equipment, readable storage medium and distributed storage system |
CN110008098B (en) | Method and device for evaluating the operation condition of nodes in a business process |
CN114330859A (en) | Optimization method, system and equipment for real-time quality control |
Jegourel et al. | On the sequential Massart algorithm for statistical model checking |
CN115812195A (en) | Calculating developer time in a development process |
EP1959382A1 (en) | Organisation representational system |
CN113282482A (en) | Compatibility test method and system for software packages |
CN110459276A (en) | Data processing method and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12750898; Country of ref document: EP; Kind code of ref document: A2 |
| WWE | Wipo information: entry into national phase | Ref document number: MX/A/2014/000792; Country of ref document: MX |
| ENP | Entry into the national phase | Ref document number: 2014522967; Country of ref document: JP; Kind code of ref document: A |
| REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112014001822 |
| NENP | Non-entry into the national phase | Ref country code: JP |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 12750898; Country of ref document: EP; Kind code of ref document: A2 |
| ENP | Entry into the national phase | Ref document number: 112014001822; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20140124 |