CN112817839B - Artificial intelligence engine testing method, platform, terminal, computing device and storage medium - Google Patents

Artificial intelligence engine testing method, platform, terminal, computing device and storage medium

Info

Publication number
CN112817839B
CN112817839B
Authority
CN
China
Prior art keywords
engine
artificial intelligence
sample
data
test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010933841.3A
Other languages
Chinese (zh)
Other versions
CN112817839A
Inventor
曾璇
王小叶
丁小俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202010933841.3A
Publication of CN112817839A
Application granted
Publication of CN112817839B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites

Abstract

The invention provides an artificial intelligence engine testing method and platform, a computing device and a storage medium. The method comprises the following steps: receiving a first user input comprising an artificial intelligence engine identification indicating at least one artificial intelligence engine and a sample data identification indicating at least one sample data; providing the at least one sample data to the at least one artificial intelligence engine; acquiring standard output data from at least one of the at least one artificial intelligence engine; displaying at least one of the standard output data and the corresponding sample data; receiving a user operation on the at least one standard output data; generating labeling information for the corresponding sample data based on the at least one standard output data and the operation; and forming at least one annotated sample package based on the generated labeling information and the corresponding sample data. This technical scheme improves the efficiency of sample labeling and, in turn, the testing efficiency of the artificial intelligence engine.

Description

Artificial intelligence engine testing method, platform, terminal, computing device and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an artificial intelligence engine testing method, an artificial intelligence engine testing platform, an artificial intelligence engine testing terminal, a computing device, and a computer readable storage medium.
Background
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, sense the environment, acquire knowledge, and use that knowledge to obtain optimal results. In general, an AI engine model with a certain function (e.g., image recognition, optical character recognition) can be obtained by training based on, for example, a deep learning framework; the model is then engineering-packaged and converted into a usable service, i.e., an AI engine. Such AI engines may be deployed on local computing devices or cloud servers, and users may access the services provided by the AI engines directly or via a network.
Because an AI engine has a strong dependency on data, the AI engine needs to be tested, for example before it is applied to a certain scene, or when the distribution or concurrency of data in the scene to which it is applied changes, to determine whether it meets the requirements of the current application scene. However, the conventional testing approach is to manually write offline scripts that pull samples from a local database and call the artificial intelligence engine service interface, which creates development demands and maintenance costs for redundant code, prevents improvement of testing efficiency, and fails to make good use of the data generated by online services.
Disclosure of Invention
In view of the above, the present disclosure provides an artificial intelligence engine testing method, platform, terminal, computing device and storage medium, which aim to alleviate, mitigate or even eliminate the above-mentioned problems and other problems that may exist.
In one aspect of the present invention, there is provided an artificial intelligence engine testing method comprising: receiving a first user input, the first user input comprising an artificial intelligence engine identification indicating at least one artificial intelligence engine and a sample data identification indicating at least one sample data for provision to the at least one artificial intelligence engine; providing the at least one sample data to the at least one artificial intelligence engine; obtaining standard output data from at least one of the at least one artificial intelligence engine, the standard output data comprising data generated by the artificial intelligence engine based on the respective sample data; displaying at least one of the standard output data and the corresponding sample data; receiving a user operation on the at least one standard output data; generating labeling information for the corresponding sample data based on the at least one standard output data and the operation; and forming at least one annotated sample package based on the generated labeling information and the corresponding sample data.
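For illustration only, the following Python sketch outlines how these steps could fit together; every name in it (engines, samples, ask_user, the record layout) is a hypothetical stand-in, since the claims do not prescribe a concrete API.

```python
# Illustrative sketch of the claimed flow; all names are hypothetical stand-ins.

def build_annotated_sample_package(engine_ids, sample_ids, engines, samples, ask_user):
    """Drive the selected engines over the selected samples and collect annotations."""
    annotated = []
    for sid in sample_ids:
        sample = samples[sid]                      # resolve the sample data identification
        for eid in engine_ids:
            output = engines[eid].infer(sample)    # provide sample, obtain standard output
            operation = ask_user(sample, output)   # display data, receive the user operation
            if operation["action"] == "confirm":   # output accepted as labeling information
                annotation = output
            elif operation["action"] == "correct": # user-corrected labeling information
                annotation = operation["corrected"]
            else:                                  # e.g. "reject": sample left unlabeled
                continue
            annotated.append({"sample_id": sid, "data": sample, "annotation": annotation})
    return annotated                               # the annotated sample package
```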
In some embodiments, the displaying at least one of the standard output data comprises: receiving a screening operation of the user on the standard output data, the screening operation specifying that the standard output data be screened based on at least one of the following criteria: a specific error code exists, the standard output data includes a specific parameter, or the standard output data includes a parameter within a specific range; screening out at least one standard output data meeting the at least one criterion; and displaying the at least one standard output data.
In some embodiments, the annotated sample package includes at least one of: a sample list comprising at least one of a sample data identifier, a sample data type identifier and a labeling information identifier; a sample data package comprising a plurality of sample data, each sample data having a sample data identifier; and a labeling information package comprising a plurality of labeling information items, each labeling information item having a labeling information identifier.
In some embodiments, the providing the at least one sample data to the at least one artificial intelligence engine comprises, for each artificial intelligence engine, performing the steps of: mapping the at least one sample data into an engine input usable by the artificial intelligence engine based on a predefined first mapping mechanism; and providing the engine input to the artificial intelligence engine.
In some embodiments, the obtaining standard output data from at least one of the at least one artificial intelligence engine comprises, for each artificial intelligence engine, performing the steps of: obtaining an engine output from the artificial intelligence engine, the engine output being generated by the artificial intelligence engine based on the engine input; and mapping the engine output to the standard output data based on a predefined second mapping mechanism.
In some embodiments, the artificial intelligence engine testing method further comprises: receiving a second user input, the second user input comprising an engine-under-test identification indicating at least one artificial intelligence engine to be tested and a sample package identification indicating at least one of the at least one annotated sample package; providing sample data in the at least one sample package to the at least one artificial intelligence engine to be tested; obtaining standard output data from the at least one artificial intelligence engine to be tested; and generating a test result based at least in part on the obtained standard output data.
In some embodiments, the generating a test result based at least in part on the obtained standard output data comprises: comparing the obtained standard output data with the labeling information of the corresponding sample data, and generating the test result based on the comparison result.
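As a minimal sketch of this comparison, assuming each standard output record carries a "sample_id" and a "result" field (an assumed layout, not a format prescribed here):

```python
# Minimal sketch: accuracy from standard outputs vs. labeling information.

def accuracy(standard_outputs, annotations):
    """Fraction of outputs that match the labeling information of their sample."""
    if not standard_outputs:
        return 0.0
    correct = sum(
        1 for out in standard_outputs
        if annotations.get(out["sample_id"]) == out["result"]
    )
    return correct / len(standard_outputs)
```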
In some embodiments, the generating test results based at least in part on the acquired standard output data comprises: generating the test result based on at least one of: the number of standard output data acquired, the number of sample data provided, the time at which the standard output data is acquired, the time at which the sample data is provided.
In some embodiments, the artificial intelligence engine testing method further comprises: initializing test parameters, the test parameters including at least one of: an initial test concurrency count, a single-round test duration, and a test concurrency increment, wherein said providing sample data in said at least one sample package to said at least one artificial intelligence engine to be tested comprises: providing the engine input to the artificial intelligence engine based on the test parameters.
In some embodiments, the artificial intelligence engine testing method further comprises: ending the test upon detection of a test ending condition, the test ending condition comprising at least one of: the number of test rounds reaches a round-count threshold; in the latest test round, the average delay of the artificial intelligence engine over a plurality of sample data is greater than or equal to a delay threshold, wherein the delay is the time difference between the time a sample data is provided and the time the corresponding standard output data is obtained; in the latest test round, the response rate of the artificial intelligence engine is less than a response-rate threshold, wherein the response rate is the ratio of the number of obtained standard output data to the number of provided sample data; over the most recent threshold number of test rounds, the variation amplitude of the artificial intelligence engine's query rate per second is below an amplitude threshold, wherein the query rate per second is the ratio of the number of obtained standard output data to the time difference between providing the first sample data and obtaining the last standard output data; or the query rate per second of the artificial intelligence engine decreases continuously over the most recent threshold number of test rounds.
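The end conditions above can be made concrete with a short sketch. The RoundStats layout and all threshold values below are assumptions for illustration; the claims leave them unspecified.

```python
# Illustrative per-round bookkeeping and end-condition check for a stress test.
from dataclasses import dataclass
from typing import List

@dataclass
class RoundStats:
    sent: int            # number of sample data provided in the round
    received: int        # number of standard output data obtained
    delays: List[float]  # per-sample delay in seconds
    qps: float           # received / (last output time - first sample time)

def should_stop(rounds: List[RoundStats], max_rounds=20, delay_limit=2.0,
                response_limit=0.99, qps_epsilon=0.05, window=3) -> bool:
    if len(rounds) >= max_rounds:                                  # round-count threshold
        return True
    last = rounds[-1]
    if last.delays and sum(last.delays) / len(last.delays) >= delay_limit:
        return True                                                # average delay too high
    if last.sent and last.received / last.sent < response_limit:
        return True                                                # response rate too low
    recent = [r.qps for r in rounds[-window:]]
    if len(recent) == window:
        if max(recent) > 0 and max(recent) - min(recent) < qps_epsilon * max(recent):
            return True                                            # QPS has plateaued
        if all(a > b for a, b in zip(recent, recent[1:])):
            return True                                            # QPS falls continuously
    return False
```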
In some embodiments, the engine-under-test identification indicates a plurality of artificial intelligence engines to be tested, and the generating a test result based at least in part on the obtained standard output data comprises: generating a summarized test result for the plurality of artificial intelligence engines to be tested.
In some embodiments, the artificial intelligence engine comprises a visual recognition type artificial intelligence engine, and the sample data comprises at least one of the following sample data: picture sample data or video sample data.
In some embodiments, the artificial intelligence engine testing method further comprises: determining the class of the artificial intelligence engine to be tested and selecting a test metric based on that class, wherein the generating a test result based at least in part on the engine output comprises: calculating the test metric based at least in part on the engine output.
According to another aspect of the present invention, there is provided an artificial intelligence engine test platform comprising: a first user interface configured to receive a first user input comprising an artificial intelligence engine identification indicating at least one artificial intelligence engine and a sample data identification indicating at least one sample data for provision to the at least one artificial intelligence engine; a providing module configured to provide the at least one sample data to the at least one artificial intelligence engine; an acquisition module configured to acquire standard output data from at least one of the at least one artificial intelligence engine, the standard output data comprising data generated by the artificial intelligence engine based on the respective sample data; a display module configured to display at least one of the standard output data and the corresponding sample data; a second user interface configured to receive a user operation on the at least one standard output data; a generation module configured to generate labeling information for the corresponding sample data based on the at least one standard output data and the operation; and a forming module configured to form at least one annotated sample package based at least in part on the generated labeling information and the corresponding sample data.
According to yet another aspect of the present invention, there is provided a terminal for artificial intelligence engine testing, comprising: an input interface for receiving input data, the input data comprising an engine-under-test identification and a sample package identification; a processor for providing sample data in at least one sample package to the at least one artificial intelligence engine to be tested and obtaining standard output data from the at least one artificial intelligence engine to be tested; and a display device for comparatively displaying a visualized summarized test result generated based on the standard output data for the at least one artificial intelligence engine to be tested, wherein, when the at least one artificial intelligence engine to be tested includes engines of different classes, different evaluation metric results are displayed in the summarized test result according to the engine class.
According to yet another aspect of the present invention, there is provided a computing device comprising a memory and a processor, the memory being configured to store thereon computer-executable instructions that, when executed on the processor, perform the artificial intelligence engine test method described in the above aspects.
According to yet another aspect of the present invention, there is provided a computer readable storage medium having stored thereon computer executable instructions which, when executed on a processor, perform the artificial intelligence engine test method described in the above aspects.
Embodiments of the present invention allow labeled sample packages to be obtained with the help of an artificial intelligence engine, and these packages can then be used to test the corresponding artificial intelligence engine. Specifically, a tester may provide sample data to an artificial intelligence engine, obtain output data from it, and view and/or modify some or all of that data to generate labeling information and obtain a labeled sample package. Alternatively, the tester may take data generated while the artificial intelligence engine provides online services, and view and/or modify some or all of that data to generate labeling information and obtain a labeled sample package. This improves the efficiency of sample labeling and reduces the labor and time cost of labeling. It also enables efficient use of the data generated during the artificial intelligence engine's online services, helping to quickly produce large amounts of labeled sample data and thereby further reducing labeling costs.
In addition, embodiments of the invention can normalize the data exchanged under the respective protocols of different artificial intelligence engines through a predefined mapping mechanism, so that a user such as a tester can test different artificial intelligence engines with labeled sample packages in a standard format, without having to consider each engine's specific data format. This avoids the time cost of learning the engines' data protocols and helps improve testing efficiency. Furthermore, the invention enables an automated test flow: once an artificial intelligence engine is connected to the test platform provided by embodiments of the invention, a user can run a test simply by entering the identification of the artificial intelligence engine to be tested and the identification of the sample package used for testing. This greatly reduces the code-writing effort required for testing, which is highly advantageous for reducing code maintenance overhead and improving test efficiency.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
Further details, features and advantages of the invention are disclosed in the following description of exemplary embodiments with reference to the following drawings, in which:
Fig. 1 schematically shows an example scenario in which the technical solution of the invention may be applied;
FIG. 2A schematically illustrates an example flow chart of a method according to some embodiments of the invention;
FIG. 2B schematically illustrates an example flow chart of a method according to further embodiments of the invention;
FIG. 3A schematically illustrates an example user interface according to some embodiments of the invention;
FIG. 3B schematically illustrates the example user interface of FIG. 3A including user input information;
FIG. 4 schematically illustrates an example architecture of an AI engine middlebox in accordance with some embodiments of the invention;
FIG. 5 schematically illustrates an example data protocol of an AI engine middlebox in accordance with some embodiments of the invention;
FIG. 6 schematically illustrates an example of a data presentation interface according to some embodiments of the invention;
FIG. 7 schematically illustrates an example of a data annotation interface according to some embodiments of the invention;
FIG. 8 schematically illustrates an example of another data annotation interface according to some embodiments of the invention;
FIGS. 9A-9C schematically illustrate examples of sample packages according to some embodiments of the invention;
FIG. 10 schematically illustrates an example internal flow diagram of an AI engine middlebox in accordance with some embodiments of the invention;
FIG. 11 schematically illustrates an example of a test report of an AI engine in accordance with some embodiments of the invention;
FIGS. 12A-12B schematically illustrate examples of aggregate test results for multiple AI engines in accordance with some embodiments of the invention;
FIG. 13 schematically illustrates an example flow chart of an accuracy test according to some embodiments of the invention;
FIG. 14 schematically illustrates an example flow chart of a stress test according to some embodiments of the invention;
FIG. 15 schematically illustrates example results of a stress test according to some embodiments of the invention;
FIG. 16 schematically illustrates an example block diagram of an AI engine test platform in accordance with some embodiments of the invention;
FIG. 17 schematically illustrates an example block diagram of a terminal for artificial intelligence engine testing in accordance with some embodiments of the invention;
FIG. 18 schematically illustrates an example block diagram of a computing device according to some embodiments of the invention.
Detailed Description
Before describing embodiments of the present invention in detail, some related concepts will be explained first:
1. Artificial intelligence engine (AI engine): An engine is a core component of a program or system developed on an electronic platform. With an engine, a developer can quickly build and deploy the functions required by a program, or assist in its operation. Generally, an engine is the supporting part of a program or a set of systems. The AI engine, as the supporting part of an AI system, may be invoked through an engine interface protocol; the caller need not be aware of the AI engine's internal architecture, but need only provide it with inputs conforming to its protocol and receive outputs from it, which may be used for different purposes depending on the caller's needs.
2. Sample: In statistics, a sample is the portion of individuals actually observed or investigated, while the population is the entire set of subjects under study. In the field of artificial intelligence, a sample may generally be a training sample, a validation sample or a test sample. Herein, unless otherwise specified, "sample" refers to a test sample for testing an AI engine, which generally includes sample data and labeling information.
3. Labeling information: In a test sample, each sample data may have one corresponding piece of labeling information, which is the correct output that the AI engine under test is expected to produce based on that sample data. For example, in a test sample for an OCR (optical character recognition) class AI engine, the labeling information may be the text contained in the picture; in a test sample for a face comparison AI engine, the labeling information may indicate whether the faces in two pictures belong to the same person; and so on. The labeling information may be manually pre-labeled.
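Expressed as data, test samples for the two engine classes just mentioned might look as follows; the field names are purely illustrative, not a format prescribed by this document.

```python
# Purely illustrative labeling records for two engine classes.
ocr_sample = {
    "sample_id": "id_0001",
    "data": "media/id_0001.jpg",                 # identity card picture
    "annotation": "name, id number, address",    # expected OCR text, abbreviated
}
face_compare_sample = {
    "sample_id": "pair_0001",
    "data": ["media/a.jpg", "media/b.jpg"],      # the two pictures to compare
    "annotation": {"same_person": True},         # expected comparison verdict
}
```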
Fig. 1 schematically shows an example scenario 100 in which the technical solution of the invention may be applied.
As shown, the scenario 100 includes an AI engine test platform server 110, as well as a sample database 112 and at least one AI engine server 114 connected to the AI engine test platform server 110 via communication links. The AI engine test platform server 110 may have deployed on it an AI engine test platform provided in accordance with some embodiments of the invention, which may perform the AI engine testing methods provided in accordance with some embodiments of the invention. In some embodiments, the AI engine test platform may include a user interface, an AI engine middlebox, and the like, as will be described in further detail below. The sample database 112 may store at least one sample package for testing AI engines. The AI engine server 114 may have an AI engine deployed on it, such as an image recognition engine, an OCR engine, and the like. The AI engine test platform deployed on the AI engine test platform server 110 may generate labeled sample packages based on outputs from at least one AI engine deployed on the AI engine server 114 and on user operations, and store the labeled sample packages in the sample database 112. It may also call and read sample packages in the sample database 112, test the AI engines deployed on the at least one AI engine server 114 with samples from the read sample packages, receive outputs from the AI engine servers 114 under test, and automatically generate test results based on the received outputs.
Optionally, the AI engine test platform server 110 may be connected to or include I/O devices 116. A user 118 may send instructions or input information to the AI engine test platform via the I/O devices 116, e.g., to modify or confirm output data from an AI engine, initiate test tasks, etc., and may view data returned by the AI engine test platform via the I/O devices 116, e.g., output data from an AI engine, test results of an AI engine, etc. Alternatively, a user 122 may access the services provided by the AI engine test platform server 110 through a terminal device 120 via the network 130, e.g., to send instructions, enter information, or view data as described above.
In the present application, a server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms. A terminal may be, but is not limited to, a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
For example, the AI engine server 114 and/or the AI engine test platform server 110 may be cloud servers capable of providing cloud computing services to users, e.g., deployed on a public cloud. A public cloud generally refers to a cloud that a third-party provider makes available to users, typically over the Internet, which may be free or inexpensive; the core attribute of a public cloud is shared resource services. Thus, users may access the AI services and/or AI engine test services provided by these servers over the network.
In the present application, the network may be a wired network connected via cable, optical fiber, or the like, or a wireless network such as 2G, 3G, 4G, 5G, Wi-Fi, Bluetooth, ZigBee, Li-Fi, or the like.
In addition, in the present application, a database can be regarded as an electronic filing cabinet, i.e., a place for storing electronic files, in which a user can add, query, update and delete data. A "database" is a collection of data that is stored together in a way that can be shared by multiple users, has as little redundancy as possible, and is independent of the application.
Unlike classical programming, where explicit logic is written by hand, a data-driven AI engine model is trained with data so that the model can determine rules automatically. The determined rules can then be applied to new data to derive answers. The performance of an AI engine is therefore greatly affected by both its training data and the data encountered in actual use. For example, when an AI engine trained on data from a certain class of scenes is applied to a particular instance of that class, when the data in the scene to which the AI engine is applied changes over time (such as changes in data distribution or data concurrency), or when a user intends to select the AI engine best suited to the current scene, it becomes necessary to test the performance of at least one AI engine to determine whether it can meet the requirements of the current application scene. The need to test AI engines using test samples can therefore be quite extensive.
Currently, test samples are typically generated by manually labeling sample data. This labeling process requires substantial labor and time, and it is difficult to obtain a large number of labeled samples in a short time. The technical scheme of the invention therefore generates labeling information with the help of the output of an artificial intelligence engine and forms labeled sample packages. This can greatly increase sample labeling efficiency and helps to quickly obtain large numbers of labeled samples for testing the corresponding AI engine. At the same time, the technical scheme of the invention allows samples to be obtained from the data generated during the artificial intelligence engine's online services, which helps to further reduce the cost of acquiring samples.
In addition, in practice, some parts of the test flows of different AI engines are the same or similar, such as reading sample data, providing the sample data to the AI engine under test according to preset rules, receiving the output of the AI engine under test, calculating corresponding test metrics based on the output, and generating a test report. The technical scheme of the invention therefore consolidates the similar or identical parts of the AI engine test flow, so that a user can conveniently complete tests of various AI engines without spending excessive time on learning AI engine data protocols, writing test code, and the like, thereby greatly improving test efficiency. Meanwhile, a standardized test flow avoids interference from human factors, which improves the reliability of test results to a certain extent.
The AI engine test methods provided in accordance with some embodiments of the invention are described in detail below in conjunction with fig. 2A-15.
Fig. 2A and 2B schematically illustrate flow diagrams of AI engine testing methods 200A and 200B, respectively, provided in accordance with some embodiments of the invention. The methods 200A and 200B may be performed by an AI engine test platform deployed on the AI engine test platform server 110.
As shown in fig. 2A, method 200A includes: receiving a first user input, the first user input comprising an artificial intelligence engine identification indicating at least one artificial intelligence engine and a sample data identification indicating at least one sample data for provision to the at least one artificial intelligence engine (step 210); providing the at least one sample data to the at least one artificial intelligence engine (step 220); obtaining standard output data from at least one of the at least one artificial intelligence engine, the standard output data including data generated by the artificial intelligence engine based on the respective sample data (step 230); displaying at least one of the standard output data and the corresponding sample data (step 240); receiving a user operation on the at least one standard output data (step 250); generating labeling information for the corresponding sample data based on the at least one standard output data and the operation (step 260); and forming at least one annotated sample package based on the generated labeling information and the corresponding sample data (step 270).
Specifically, at step 210, a first user input may be received via a first user interface, such as interface 300A shown in fig. 3A. As shown in fig. 3A, a user may select at least one AI engine by choosing an engine provider and/or an engine capability, etc. (i.e., an AI engine identification), may add or remove AI engines via the "+" or "-" controls on the right, and may select at least one sample data to be provided to the at least one AI engine by selecting or entering a sample package label. An engine provider may be, for example, an AI engine laboratory that trains AI engine models and packages them into usable services, such as Tencent YouTu Lab, Tencent AI Lab, and the like. An engine capability refers to an AI engine service with a specific function, such as face comparison, video liveness detection, identity card OCR, etc. The sample package label may be the number, name, etc. of the sample package. Alternatively, the corresponding AI engine may be selected by other AI engine identifications, such as the name or number of the AI engine, or the particular AI engine providing the service may be determined automatically within an AI engine cluster via the AI engine domain name shown in the figure or a load-balancing mechanism. Likewise, the corresponding sample data may be selected by other sample data identifications, for example by the sample data itself, the name of the sample data, or the address of the sample data. Other settings may also be made for a particular engine capability; for a face comparison engine, for example, the threshold shown may be set to specify the similarity above which two faces are judged to be the same person. In addition, the user may set a task name to facilitate later viewing of the task's results, and may select or fill in the execution date and time of the task, or the test platform may generate such information automatically or use default settings. After completing the input, the user can finish creating the task by clicking the "confirm" button below.
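The information collected through interface 300A could, for example, be assembled into a task-creation request like the following sketch; all field names and values are assumptions, since the interface does not expose a schema.

```python
# Illustrative task-creation request assembled from the first user input.
first_user_input = {
    "task_name": "annotate id-card samples",
    "engines": [
        {"provider": "engine lab 1", "capability": "identity card OCR"},
    ],
    "sample_package_label": "idcard-samples-01",
    "threshold": None,   # only meaningful for capabilities such as face comparison
    "execute_at": "2020-09-10 02:00",
}
```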
The interface 300A may be presented on the I/O device 116 in the scenario 100 shown in fig. 1 or may be presented on a display of the terminal device 120, such that the user 118 or 122 may input the AI engine identification, sample data identification, and optionally other information or settings described above, either locally or remotely, through the I/O device 116 or an input device of the terminal device 120. Further, interface 300A is merely exemplary, and the first user input may also be received via other user interfaces.
In some embodiments, the AI engine may be a visual recognition type artificial intelligence engine, such as face comparison, identity card OCR, action liveness detection, etc., and the sample data may include at least one of the following: picture sample data or video sample data.
In step 220, the corresponding sample data may be read according to the sample data identification included in the first user input and provided to at least one AI engine indicated by the AI engine identification in the first user input.
In some embodiments, providing at least one sample data to at least one artificial intelligence engine may include performing, for each artificial intelligence engine, the steps of: mapping the at least one sample data into an engine input usable by that artificial intelligence engine based on a predefined first mapping mechanism; and providing the engine input to that artificial intelligence engine. The first mapping mechanisms of the at least one artificial intelligence engine may be different or the same.
At step 230, standard output data of at least one of the at least one AI engine may be obtained. Illustratively, upon receiving the sample data, the AI engine can derive corresponding result data from its inherent logic functions, which can be returned and processed as standard output data. Taking an identity card OCR engine as an example, when it receives an identity card picture included in the sample data, it can identify the text in the identity card picture and obtain the identified text.
In some embodiments, obtaining standard output data from at least one of the at least one artificial intelligence engine may include, for each artificial intelligence engine, performing the steps of: obtaining an engine output from the artificial intelligence engine, the engine output being generated by the artificial intelligence engine based on the engine input; and mapping the engine output to standard output data based on a predefined second mapping mechanism. The second mapping mechanisms of the at least one artificial intelligence engine may be different or the same.
Alternatively, steps 220 and 230 described above may be implemented via an artificial intelligence engine middlebox (AI engine middlebox) included in the AI engine test platform. In this context, an AI engine middlebox refers to a service platform that uniformly exposes various AI engine capabilities to the outside. As shown in fig. 4, the AI engine middlebox can externally provide a variety of AI engine capabilities through standard access protocols, such as face comparison, action liveness, digital liveness, silent liveness, license plate OCR, general OCR, and identity card OCR as shown in the figure. These AI engine capabilities may be provided by AI engines from different engine providers that are connected to the middlebox, such as engine labs 1-7 in the figure.
Fig. 5 schematically illustrates the standard access protocol of the AI engine middlebox, taking the identity card OCR engine as an example. In general, an identity card OCR engine is used to read the text information in an identity card picture. For the input protocol of identity card OCR, an identity card picture may be provided, i.e., imageData in the input column of fig. 5, which may be a picture in one or more predefined formats. However, considering that some OCR engines can automatically recognize the front and back sides of an identity card while others cannot, it may also be necessary to input the type of the identity card picture, i.e., type in the input column of fig. 5, to indicate whether the input picture is the front side, the back side, or to be determined automatically by the engine. The output protocol of identity card OCR may include the text the engine can read, such as the name, gender, ethnicity, birth date, address, identity card number, issuing authority, and expiration date shown in the output column of fig. 5. In addition, since an identity card OCR engine may automatically recognize the front and back sides, an identification of the side (type) may also be included in the output. For other classes of engine capability, the standard access protocol of the AI engine middlebox can similarly be set based on the required input information and the output information the engine can provide.
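Rendered as code, the identity card OCR access protocol of fig. 5 might look like the sketch below; the field names are an illustrative transliteration of the figure, not the middlebox's literal schema.

```python
# Sketch of the identity card OCR standard access protocol of fig. 5.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdCardOcrRequest:            # "in" parameters
    imageData: str                 # the identity card picture (e.g. base64)
    type: Optional[str] = None     # "front", "back", or None for auto-detection

@dataclass
class IdCardOcrResponse:           # "out" parameters
    name: str = ""
    gender: str = ""
    ethnicity: str = ""
    birth_date: str = ""
    address: str = ""
    id_number: str = ""
    issuing_authority: str = ""
    expiration_date: str = ""
    type: str = ""                 # which side was actually recognized
```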
When an AI engine with a certain engine capability is connected to the AI engine middlebox, both its input and output can be automatically mapped to the standard access protocol of the middlebox. Such mapping may be achieved, for example, by a predefined mapping mechanism, namely the first and second mapping mechanisms described above. For example, when the AI engine is connected, a configuration script may be set up that includes two core methods: handleReq (request processing) and handleRsp (response processing). The former maps data received under the middlebox's standard access protocol into data under the data protocol of the corresponding AI engine, after which the middlebox can call the connected AI engine with the mapped data. The latter maps the data output by the connected AI engine into data under the middlebox's standard access protocol, after which the middlebox returns the data to the caller. Thus, for a particular AI engine capability, the caller is shielded from the differing protocol formats of different AI engines and only needs to access the services provided by the different AI engines via the standard access protocol.
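A configuration script of this kind might look like the following minimal sketch, assuming a JSON-style engine protocol; every engine-side field name ("img", "side", "text", "id") is hypothetical.

```python
# Minimal sketch of an access-configuration script with the two core methods.

def handleReq(standard_request: dict) -> dict:
    """First mapping mechanism: standard access protocol -> engine protocol."""
    return {
        "img": standard_request["imageData"],
        "side": standard_request.get("type") or "auto",
    }

def handleRsp(engine_response: dict) -> dict:
    """Second mapping mechanism: engine protocol -> standard access protocol."""
    text = engine_response.get("text", {})
    return {
        "name": text.get("name", ""),
        "id_number": text.get("id", ""),
        # remaining standard fields would be mapped analogously
    }
```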
On the AI engine middlebox, a given AI engine capability may be offered by at least one AI engine provider, and a given AI engine provider may offer one or more AI engine capabilities. Thus, when a caller attempts to access a service provided by an AI engine through the middlebox, it must specify the AI engine provider and/or the AI engine capability to pin down exactly which AI engine is to provide the service. Alternatively, the caller may specify only the AI engine capability and let the middlebox schedule autonomously which AI engine provides the service. Illustratively, an AI engine middlebox may be deployed on the AI engine test platform server 110 in the scenario 100 shown in FIG. 1 to connect the AI engines deployed on the AI engine servers 114 and provide their services to local and/or remote users.
At step 240, at least one of the standard output data may be displayed by a display means such as the I/O device 116 or the terminal device 120 in fig. 1, i.e. the user may view at least one of the standard output data by this step. Alternatively, each of the standard output data may be displayed, or a part of the standard output data may be displayed randomly or selectively according to a certain condition.
In some embodiments, displaying at least one of the standard output data may include: receiving a screening operation of the user on the standard output data, the screening operation specifying screening based on at least one of the following criteria: a specific error code exists, the standard output data includes a specific parameter, or the standard output data includes a parameter within a specific range; screening out at least one standard output data meeting the at least one criterion; and displaying the at least one standard output data. Such screening makes it possible to selectively display standard output data, and the corresponding sample data, that is abnormal or likely to contain an error.
Alternatively, standard output data may be screened by one or more particular error codes. The error codes may include general error codes, such as those representing an AI engine timeout or a platform-internal framework problem, or error codes specific to one or more classes of AI engine: for example, a face comparison engine may produce error codes representing that no face was found, the light is too strong, the light is too weak, or there is occlusion, and a liveness-detection class AI engine may produce error codes representing that no person is present, the image is not clear enough, and so on. Other AI engines may have different error codes depending on their characteristics.
Alternatively, standard output data comprising a specific parameter or a parameter in a specific range may also be screened. For example, face comparison may use a comparison threshold (e.g., 70): faces with a similarity greater than the threshold are judged to be the same person, and faces with a similarity at or below the threshold are judged to be different persons. Experience shows that judgments for faces with a similarity near the comparison threshold are the most likely to be wrong, so standard output data with similarities near the threshold (e.g., 60 to 80) can be screened out. Or, for example, when the sample data supplied to the face comparison engine comes from online services, the faces in the sample data are very likely to be the same person, so standard output data judging them to be different persons can be screened out in this case. Other AI engines may use other parameter screening schemes depending on their characteristics.
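The screening rules just described can be sketched as a simple filter; the record fields ("error_code", "similarity") are assumptions for this sketch.

```python
# Illustrative filter over standard output data.

def screen(outputs, error_codes=(), similarity_range=None):
    """Keep outputs carrying one of the error codes or a similarity in range."""
    kept = []
    for out in outputs:
        if out.get("error_code") in error_codes:
            kept.append(out)
        elif similarity_range is not None:
            low, high = similarity_range
            if low <= out.get("similarity", -1) <= high:
                kept.append(out)
    return kept

# e.g. outputs whose similarity lies near a comparison threshold of 70:
# suspicious = screen(outputs, similarity_range=(60, 80))
```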
In addition, other screening schemes may exist to meet different user needs.
Illustratively, a user may screen and/or view standard output data through the data detail query interface 600 shown in FIG. 6. For example, the user may screen the standard output data of one or more AI engines by engine provider, engine capability, engine version, etc., may screen the standard output data corresponding to the sample data to be viewed by the error codes that may have been generated, and may screen the standard output data to be viewed by a request identification (requestId) and user-related information (e.g., appid, uin, etc. as shown in the figure). Further, the user may view the screened data via the "search" button, or may export the screened data, for example as a compressed package for downloading to a local terminal device, via the corresponding button. This interface is merely exemplary, and standard output data may be screened and/or viewed through other interfaces as desired.
At step 250, a user operation on the displayed standard output data may be received through an input means such as the I/O device 116 or the terminal device 120 in fig. 1. Illustratively, the user's operations may include confirming that the standard output data is correct, correcting some or all of the standard output data, adding additional tags, and so forth. It should be appreciated that the device that displays the standard output data and receives the user's operation may be the same as or different from the device that receives the first user input, and the user who views the standard output data and performs the related operations may be the same as or different from the user who provides the first user input. For example, a user may provide the first user input through the I/O device 116 of FIG. 1 to supply sample data to be annotated to the AI engine, and may view the standard output data and perform the related operations through the I/O device 116 or through the remote terminal device 120; alternatively, the user may provide the first user input via the remote terminal device 120 and view the standard output data and perform the related operations via the remote terminal device 120 or the I/O device 116. Likewise, a first user may provide the first user input through the I/O device 116 while a second user views the standard output data and performs the related operations through the remote terminal device 120, or the first user may provide the first user input via the remote terminal device 120 while the second user operates via the remote terminal device 120 or the I/O device 116. In short, the technical scheme of the invention allows various combinations of local and/or remote access.
At step 260, annotation information may be generated based on the standard output data and the user's manipulation of the standard output data. It should be understood that "generating" herein may encompass both generating and regenerating (i.e., updating) meaning from scratch.
Fig. 7 and 8 illustrate two example interfaces 700 and 800, respectively, that expose standard output data to a user and allow the user to operate to generate annotation information.
Interface 700 shows presentation data for a license plate OCR engine, including the sample data, i.e., the input picture, the standard output data, i.e., the output text, and selectable tags and operation buttons. For example, after a user selects a particular tag, the tag may be stored, and during subsequent testing a classified statistical analysis may be performed on the test results of samples with different tags to analyze the AI engine's performance under different conditions. By viewing the data in the interface, the user may determine whether the text identified by the AI engine is correct, may select replay verification to replay the process by which the corresponding AI engine generated the output text from the input data, and may select character annotation to modify the characters in the output text. For example, when the user selects "recognition correct", the output text may be stored as labeling information; when the user selects "recognition wrong", the output text is not stored as labeling information; and when the user selects character annotation, the manually modified text is stored as labeling information.
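Step 260's handling of these three operations can be sketched as follows; the operation encoding mirrors the interface options but is otherwise assumed.

```python
# Sketch of step 260 for the interface just described.

def annotation_from_operation(output_text, operation):
    """Map a user operation on displayed output text to stored labeling info."""
    if operation["choice"] == "recognition_correct":
        return output_text                    # engine output accepted as-is
    if operation["choice"] == "recognition_wrong":
        return None                           # output discarded, sample unlabeled
    if operation["choice"] == "character_annotation":
        return operation["corrected_text"]    # manually corrected text stored
    raise ValueError("unknown operation: %s" % operation["choice"])
```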
Interface 800 shows presentation data of another OCR engine, including the sample data, i.e., the picture on the left, and the standard output data, i.e., the recognition result on the right. Illustratively, the user can check through the interface whether the text identified by the AI engine is correct and modify any incorrect portion. After finishing, the user can save the modifications with the save button, and the modified text may then be stored as labeling information.
By displaying the standard output data, receiving the operation of the user on the standard output data and generating labeling information based on the standard output data, the user can be allowed to label the sample data based on the standard output data. Unlike the general labeling process, the labeling is performed based on the returned results of the AI engine, so that the time and labor cost required for labeling can be greatly reduced.
At step 270, at least one labeled sample package may be generated based on the generated labeling information and the corresponding sample data, and the labeled sample package may conform to a particular format. The generated labeled sample packages may be used for subsequent testing tasks. In some embodiments, the annotated sample package includes at least one of: a sample list comprising at least one of a sample data identifier, a sample data type identifier and a labeling information identifier; a sample data package comprising a plurality of sample data, each sample data having a sample data identifier; and a labeling information package comprising a plurality of labeling information items, each labeling information item having a labeling information identifier.
The annotated sample packages may be stored in a local database, or in a remotely accessed database, for example. Each labeled sample package may include a plurality of sample data and the labeling information corresponding to each sample data, where the labeling information is the correct output that the AI engine under test is expected to produce during subsequent testing. For example, for an identity card OCR engine, the sample data may be an identity card picture and the labeling information may be the text in the picture. The samples in the sample package may be stored in the format required by the standard access protocol of the AI engine middlebox described above, so that it can read and process them.
Diagrams 900A-900C illustrate examples of a sample package and its sample data and labeling information, taking the identity card OCR engine as an example. Diagram 900A illustrates the format of one standard sample package, which may take the form of a compressed archive. In this sample package there is a list.xls file, and in the same directory there are also a picture folder named media and a text folder named info. Diagram 900B shows an example of the list file content, which includes three columns: photo path, front/back side, and labeling information. Each row in the file describes one test sample. Because the identity card OCR engine recognizes characters from identity card pictures, its input should include identity card pictures, each of which may form part of a sample. Because the list file, the media folder and the info folder reside in the same directory, for a given sample the name of the identity card picture can be filled directly into the photo path column, and the file name of the labeling information into the labeling information column. The corresponding identity card picture may be stored in the media folder, and the corresponding labeling information file in the info folder. Diagram 900C illustrates an example of the content of a labeling file, which lists in order the text information contained in the corresponding identity card picture. When the AI engine middlebox reads the sample package, it may read each row of the list file in turn and obtain the picture sample data and labeling information from the media and info folders according to the photo path and labeling information columns.
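A reader for this package layout might look like the sketch below. For simplicity it assumes the list has been exported to CSV; the actual package of diagram 900A uses list.xls, which would need a spreadsheet library instead.

```python
# Sketch of reading the sample package of diagrams 900A-900C.
import csv
from pathlib import Path

def read_sample_package(root):
    """Yield (picture bytes, side, annotation text) per row of the list file."""
    base = Path(root)
    with open(base / "list.csv", newline="", encoding="utf-8") as f:
        for photo_path, side, info_name in csv.reader(f):
            picture = (base / "media" / photo_path).read_bytes()
            annotation = (base / "info" / info_name).read_text(encoding="utf-8")
            yield picture, side, annotation
```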
As shown in fig. 2B, in addition to steps 210-270 of method 200A, method 200B includes: receiving a second user input comprising an engine-under-test identification indicating at least one artificial intelligence engine to be tested and a sample package identification indicating at least one of the at least one annotated sample package (step 281); providing sample data in the at least one sample package to the at least one artificial intelligence engine to be tested (step 282); obtaining standard output data from the at least one artificial intelligence engine under test (step 283); and generating test results based at least in part on the obtained standard output data (step 284).
In step 281, a second user input may be received via a second user interface, such as interface 300A shown in FIG. 3A or another interface, to create a test task. The second user interface may be the same as or different from the first user interface, and may be deployed on the same device or a different one. The second user input may specify at least one artificial intelligence engine to be tested and a sample package for testing it, which may be at least one of the annotated sample packages generated by steps 210-270. Interface 300A has been described in detail above and is not repeated here. Fig. 3B schematically illustrates interface 300B, which is interface 300A with user input information filled in. As shown in FIG. 3B, a test task named "evaluate identity card OCR engine effect" is created through the second user input, and the test results can later be conveniently looked up by the task name. As can be seen from the figure, the user fills in/selects the engine provider, engine capability, etc., and fills in/selects the sample package label. Alternatively, the user may choose to run the test with a default sample package, such as one provided by the AI engine test platform for the engine capability of the AI engine to be tested.
In addition, when creating the test task, the user may set the number of concurrent requests, choose whether to send a notification email, choose whether to perform a stress test, etc., and may select or fill in the execution date and time of the test task, or the test platform may generate such information automatically or use default settings.
Similar to steps 220 and 230, steps 282 and 283 may optionally be implemented via the AI engine middlebox included in the AI engine test platform. The relevant content is described in detail above and is not repeated here.
In step 282, the corresponding sample package may be looked up, for example in the sample database 112, based on the sample package identification included in the user input; the sample data in the sample package is read and provided to the at least one AI engine to be tested. In some embodiments, according to the AI engine identification included in the user input, the read sample data may be mapped into engine inputs conforming to the data protocol of the AI engine under test by invoking the corresponding predefined first mapping mechanism, such as handleReq in the configuration script described above, and the engine inputs provided to the at least one AI engine under test. When several AI engines providing the same AI engine capability are tested simultaneously, the read sample data may be mapped, for each of the AI engines, into engine inputs for that engine according to its own first mapping mechanism.
In step 283, standard output data from the at least one AI engine under test may be obtained. In some embodiments, an engine output is obtained from each AI engine under test, the engine output being derived by that AI engine from the corresponding engine input. Illustratively, after receiving the engine input, the tested AI engine may compute result data according to its internal logic and return it as an engine output to the AI engine middle platform included in the AI engine test platform. After the engine output is obtained, the AI engine middle platform may map it, according to a second mapping mechanism such as handleRsp in the configuration script described above, into standard output data conforming to the standard access protocol of the middle platform for subsequent processing. Taking an identity card OCR engine as an example, when it receives an identity card picture and related parameters (e.g., whether the front or back side is shown) from the AI engine middle platform, it can recognize the text in the picture and return the recognized text in a certain format.
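Correspondingly, the second mapping mechanism (handleRsp above) can be sketched as follows; the errcode/items fields of the engine response and the code/texts fields of the standard output are again illustrative assumptions:

    def handle_rsp(engine_rsp: dict) -> dict:
        """Map this engine's raw response back into the middle platform's
        standard output-parameter format, so downstream steps see one
        uniform schema no matter which engine was invoked."""
        return {
            "code": engine_rsp.get("errcode", 0),  # 0 is assumed to mean success
            "texts": [item["text"] for item in engine_rsp.get("items", [])],
        }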
In step 284, test results may be generated based at least in part on the engine output directly. For example, the stress test process described later may not be concerned with the specific content of the engine output, but only with whether an engine output is received at all. Alternatively, in some embodiments, the test results may be generated based at least in part on the standard output data produced by the mapping. For example, the accuracy test process described later compares standard output data with the labeling information of the corresponding sample data to generate a test result; in that case, the engine output of the AI engine must be mapped first.
To facilitate a better understanding of the role of the AI engine middle platform within the AI engine test platform, FIG. 10 separately shows the flow of an AI engine invocation through the middle platform, i.e., steps 220, 230 and optionally 282, 283. In this context, the caller may be the AI engine test platform itself. As shown in FIG. 10, the left part involves mapping data under the standard access protocol and providing it to the AI engine, and the right part involves mapping the AI engine output and returning it to the caller.
In some embodiments, the AI engine test platform may determine the class of the artificial intelligence engine to be tested and select test metrics based on that class. Then, when generating test results based at least in part on the engine output, those test metrics may be calculated. Optionally, the test results (e.g., test reports) may be mailed or otherwise sent to the user according to user settings, or presented directly in the user interface of the AI engine test platform. The test indexes of different classes of AI engines may be the same or different. For example, the stress test indexes of different classes of AI engines may be identical, and both identity card OCR and license plate OCR may share indexes such as recall and accuracy. Some classes, however, have unique test indexes, such as the pass rate and false pass rate of face comparison; in addition, the best threshold of such an AI engine may be calculated automatically from the test results of positive and negative samples, e.g., the face similarity above which two pictures are considered to show the same person.
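By way of illustration, the best threshold mentioned above could be derived along the following lines. The text only states that the threshold is computed from the test results of positive and negative samples, so the accuracy-maximizing criterion used in this sketch is an assumption:

    def best_threshold(pos_scores: list[float], neg_scores: list[float]) -> float:
        """Pick the similarity threshold that best separates positive pairs
        (same person) from negative pairs (different persons)."""
        candidates = sorted(set(pos_scores) | set(neg_scores))

        def accuracy(t: float) -> float:
            true_pos = sum(s >= t for s in pos_scores)  # same person, accepted
            true_neg = sum(s < t for s in neg_scores)   # different person, rejected
            return (true_pos + true_neg) / (len(pos_scores) + len(neg_scores))

        return max(candidates, key=accuracy)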
Illustratively, FIG. 11 shows an example test report 1100 of a face comparison engine. As shown, the test report may include information such as the name of the test task, the execution time, the name of the AI engine under test, the provider, the numbers of positive and negative samples, the name of the sample package, and the calculated values of the various test indexes. These test results may be presented in various forms such as tables, graphs, and the like.
Further, in some embodiments, multiple AI engines may be tested simultaneously and aggregate test results generated. That is, the engine identification under test in the second user input may indicate a plurality of artificial intelligence engines to be tested, and when test results are generated based at least in part on the obtained standard output data, aggregate test results for the plurality of engines may be produced. FIGS. 12A and 12B illustrate aggregate test results for a plurality of AI engines as a histogram 1200A and a line graph 1200B, respectively. In this manner, the performance of different AI engines providing the same engine capability can be compared, so that a user can more intuitively understand the performance levels of the plurality of AI engines and select the one that best meets his or her needs.
With reference to the example flowcharts of accuracy testing and stress testing in FIGS. 13-15, further details of how test results may be generated based at least in part on engine output or standard output data are described below.
Fig. 13 schematically illustrates an example flow 1300 of accuracy testing.
In step 1301, it may be determined to begin performing an accuracy test task based on the second user input received in step 281. For example, after the user selects an AI engine or AI engine capability to be tested, the AI engine test platform may automatically determine whether to begin performing an accuracy test task based on a preset program. Alternatively, the user may specify in the input interface to perform an accuracy test task.
In step 1302, the corresponding sample package may be downloaded based on the sample package identification, for example from a local or remote database or an online data source; in step 1303, the first and second mapping mechanisms of the respective AI engines, such as the configuration scripts described above, may be obtained based on the AI engine identifications. The operations of steps 1302 and 1303 have been described in detail above and are not repeated here.
In step 1304, sample data in the downloaded sample package may be read; for example, each sample data item may be read sequentially in the order given in the list file described above. In step 1305, the read sample data may be mapped into engine inputs conforming to the data protocol of the AI engine under test according to the first mapping mechanism. In step 1306, the mapped engine input may be provided to the AI engine under test. In step 1307, an engine output from the AI engine under test may be obtained. Steps 1304 to 1307 correspond to steps 282 and 283 described with respect to FIG. 2B; the specific details are described in the relevant paragraphs and are not repeated here. It should be appreciated that steps 1304 through 1307 may be performed for each sample data item in the sample package and may therefore be executed in a loop. Loop execution here does not exclude parallel or partially parallel execution, i.e., it does not exclude performing steps 1304 to 1307 for a plurality of sample data items in parallel or partially in parallel.
In step 1308, each obtained engine output may be mapped, based on the second mapping mechanism, into standard output data conforming to the standard access protocol of the AI engine middle platform. The mapping may alternatively be performed immediately after each engine output is obtained, so that standard output data, i.e., a corresponding result, is ultimately obtained for every sample data item.
Then, in some embodiments, the standard output data may be compared with the labeling information of the corresponding sample data, and a test result may be generated based on the comparison result.
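As a rough illustration, the loop of steps 1304-1309 might look like the sketch below, reusing the handle_req/handle_rsp sketches given earlier; call_engine stands in for the actual network call, and only a single, simplified index (recognition rate) is computed here, whereas the real platform computes the richer index set listed below:

    def run_accuracy_test(samples, labels, call_engine):
        """Steps 1304-1309 in miniature: map each sample in, call the engine,
        map the output back, then compare against the labeling information.
        `labels` is assumed to map a sample id to its labeled text lines."""
        outputs = []
        for sample in samples:
            engine_in = handle_req(sample)          # step 1305: first mapping
            engine_out = call_engine(engine_in)     # steps 1306-1307: invoke engine
            outputs.append(handle_rsp(engine_out))  # step 1308: second mapping
        correct = sum(out["texts"] == labels[s["id"]]
                      for s, out in zip(samples, outputs))
        return {"recognition_rate": correct / len(samples)}  # step 1309, simplified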
Specifically, in step 1309, test indexes can be calculated based on the obtained results (standard output data) for the sample data. The test indexes may be determined automatically by the AI engine test platform based on the class of the AI engine to be tested (e.g., its engine capability), or may be specified by the user. Different classes of AI engines may have their own unique test indexes or may share the same indexes; this can be set flexibly based on specific needs, and the invention is not limited in this respect.
Still taking an OCR-class AI engine as an example, example test indexes for its accuracy test are listed below (a sketch of the edit-distance-based indexes follows this list):
- Sample total: the total number of successful AI engine calls, excluding failed requests such as engine timeouts, 302 or 404 errors, etc.;
- Character accuracy: the ratio of the number of characters correctly recognized by the AI engine to the total number of characters recognized by the AI engine, where a character counts as correctly recognized when the character in the standard output data is identical to the character at the corresponding position in the labeling information;
- Character recall: the ratio of the number of characters correctly recognized by the AI engine to the total number of characters appearing on the picture;
- Field recall: the ratio of the number of fields correctly identified by the AI engine to the number of samples without anomalies;
- Field accuracy: the ratio of the number of fields correctly identified by the AI engine to the number of samples identified by the AI engine;
- Recognition rate: the ratio of the number of samples for which the AI engine correctly returns a recognition result to the total number of successful engine calls;
- F1 score: the harmonic mean of accuracy and recall, i.e., the weighted F-measure with balance factor 1;
- Edit distance: the number of character modifications required to make one character string identical to another, i.e., the number of edits needed to make a character string in the standard output data identical to the corresponding character string in the labeling information;
- Full-view edit distance: the number of characters that must be modified to make the standard output data as a whole consistent with the corresponding labeling information; for example, a picture may contain multiple lines of text, and with the order of lines in the standard output data kept consistent with the order in the labeling information, the AI engine test platform concatenates the lines in sequence and computes a single edit distance as the full-view edit distance;
- Minimum edit distance: taking the text in the labeling information as the reference, the edit distance between each reference line and every line in the standard output data is computed, the minimum is taken as that line's edit distance, and the per-line edit distances are accumulated; this is equivalent to ignoring the ordering of lines in both the standard output data and the labeling information;
- Edit distance rate: the minimum edit distance averaged over the number of characters, i.e., the per-character minimum edit distance.
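The sketch promised above: the three edit-distance variants in plain Python. The per-character normalization of the edit distance rate follows the reading of the text given here and should be taken as an interpretation:

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance via single-row dynamic programming."""
        dp = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1,          # delete ca
                                         dp[j - 1] + 1,      # insert cb
                                         prev + (ca != cb))  # substitute
        return dp[-1]

    def full_view_edit_distance(output_lines, label_lines):
        """Concatenate the lines in order and compare the whole picture at once."""
        return edit_distance("".join(output_lines), "".join(label_lines))

    def minimum_edit_distance(output_lines, label_lines):
        """Order-insensitive: match each labeled line to its closest output line."""
        return sum(min(edit_distance(out, ref) for out in output_lines)
                   for ref in label_lines)

    def edit_distance_rate(output_lines, label_lines):
        """Minimum edit distance averaged over the labeled character count."""
        total_chars = sum(len(ref) for ref in label_lines)
        return minimum_edit_distance(output_lines, label_lines) / total_chars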
Subsequently, in step 1310, a test report may be generated from the calculated test indexes for review by the user, for example in the forms shown in FIGS. 11, 12A and 12B.
FIG. 14 schematically illustrates an example flow 1400 of stress testing. During stress testing, generating test results based at least in part on the acquired standard output data may include generating a test result based on at least one of: the number of standard output data items acquired, the number of sample data items provided, the time at which the standard output data is acquired, and the time at which the sample data is provided. The stress test process is described in detail below.
In step 1401, it may be determined, based on the second user input received in step 281, to begin performing a stress test task. Illustratively, the user may specify via the user interface whether to perform the stress test task, for example by checking the "stress test" option in the interface 300A or 300B shown in FIG. 3A or 3B.
In some embodiments, when a stress test task is to be performed, test parameters may be initialized, including at least one of: the initial test concurrency number, the single-round test duration, and the incremental test concurrency number. In this case, providing the sample data in the at least one sample package to the at least one artificial intelligence engine to be tested may comprise providing engine input to the artificial intelligence engine based on the test parameters.
Specifically, in step 1402, the parameters may be initialized. Illustratively, the user may specify parameters such as the initial test concurrency number, the single-round test duration, and the incremental test concurrency number through the user interface, and the AI engine test platform may initialize the parameters accordingly; alternatively, the AI engine test platform may initialize at least one of them from default settings. A stress test task may comprise multiple test rounds, and starting from the first round the AI engine is tested with an increasing concurrency number. The concurrency number may refer to the number of requests sent to the AI engine simultaneously, for example the number of sample data items sent to the AI engine at the same time, where "simultaneously" may be understood as within a certain short period, e.g., 1 second or 100 milliseconds.
In this context, the initial test concurrency number may refer to the number of requests simultaneously sent to the AI engine under test in the first round of stress testing; the single-round test duration may refer to the duration of one test round; and the incremental test concurrency number may refer to the amount by which the concurrency of the next round exceeds that of the current round. In general, the initial test concurrency number should be less than the queries-per-second (qps) capacity of the AI engine to be tested, with the concurrency then increased gradually. The single-round test duration may be chosen empirically: the longer it is set, the more accurate the stress test result may be, but correspondingly the more time is spent. The incremental test concurrency number may likewise be chosen empirically: the finer the granularity, the more accurate the result, but again at the cost of a longer test.
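These three parameters can be captured in a small structure such as the following; the default values are purely illustrative:

    from dataclasses import dataclass

    @dataclass
    class StressTestParams:
        initial_concurrency: int = 10  # requests sent simultaneously in round 1
        round_seconds: float = 60.0    # duration of a single test round
        concurrency_step: int = 5      # added to the concurrency after each round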
In step 1403, requests may be continuously initiated to the AI engine based on the initialized test parameters. For example, after the stress test task starts, in the first test round requests may be initiated to the AI engine under test, i.e., sample data may be sent, at the initial test concurrency number. The sample data may be obtained in the same manner as described above with respect to FIG. 2A and is not described again here.
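One possible shape of a single test round is sketched below, holding the concurrency level with a thread pool. call_engine again stands in for the mapped engine invocation and is assumed to catch its own errors and return a result record; the batch-by-batch pacing is a simplification:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def run_round(samples, concurrency, round_seconds, call_engine):
        """Keep issuing batches of `concurrency` simultaneous requests until
        the single-round test duration elapses; collect every returned result."""
        results, start, i = [], time.monotonic(), 0
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            while time.monotonic() - start < round_seconds:
                batch = [pool.submit(call_engine, samples[(i + k) % len(samples)])
                         for k in range(concurrency)]
                i += concurrency
                results.extend(f.result() for f in batch)  # wait for the whole batch
        return results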
In step 1404, the results returned from the AI engine under test may be obtained; these may include output data generated by the AI engine from the engine inputs, as well as error conditions such as engine timeouts, 302s, 404s, etc. For example, the results returned from the AI engine under test may be collected and analyzed over one single-round test duration.
In step 1405, it is determined, based on the analysis result, whether to end the present stress test task. In some embodiments, the test may be ended upon detection of a test ending condition, which may include at least one of the following (a sketch of this check follows the list):
- the number of test rounds reaches a round-count threshold, which may be set to 50, for example, to prevent an extreme choice of parameters at task creation from leading to an excessively long test;
- in the most recent test round, the average delay of the artificial intelligence engine over the plurality of sample data items is greater than or equal to a delay threshold, the delay being the time difference between the time an engine input is provided and the time the corresponding engine output is obtained. The delay threshold may be set, for example, to 5000 ms; when the average delay exceeds it, requests are backing up heavily in the queue and the service capacity of the AI engine has in fact been exceeded;
- in the most recent test round, the response rate of the artificial intelligence engine is less than a response rate threshold, the response rate being the ratio of the number of engine outputs obtained to the number of engine inputs provided. The response rate threshold may be set, for example, to 70%; a response rate below it indicates timeouts or other anomalies;
- over the most recent threshold number of test rounds, the variation in the queries per second (qps) of the artificial intelligence engine stays below an amplitude threshold, the qps being the ratio of the number of engine outputs obtained to the time difference between providing the first engine input and obtaining the last engine output. When the qps of the AI engine changes little over the most recent threshold number of rounds (e.g., 5), the qps has stabilized;
- the queries per second of the artificial intelligence engine decreases continuously over the most recent threshold number of test rounds. When the qps shows a decreasing trend over the most recent threshold number of rounds (e.g., 3), the initiated request volume has exceeded the AI engine's service capacity, causing the quality of service to degrade.
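The sketch promised above: a per-round statistics record plus a check over the five ending conditions. The thresholds mirror the example values in the text (50 rounds, 5000 ms, 70%, windows of 5 and 3); the shape of RoundStats itself is an assumption:

    from dataclasses import dataclass

    @dataclass
    class RoundStats:
        avg_delay_ms: float   # mean request latency in this round
        response_rate: float  # engine outputs obtained / engine inputs provided
        qps: float            # outputs obtained / wall-clock span of the round

    def should_stop(rounds, max_rounds=50, delay_ms=5000.0, min_response=0.70,
                    stable_window=5, stable_eps=0.05, drop_window=3):
        if len(rounds) >= max_rounds:          # condition 1: round-count threshold
            return True
        last = rounds[-1]
        if last.avg_delay_ms >= delay_ms:      # condition 2: requests backing up
            return True
        if last.response_rate < min_response:  # condition 3: timeouts/anomalies
            return True
        qps = [r.qps for r in rounds]
        if len(qps) >= stable_window:          # condition 4: qps has stabilized
            window = qps[-stable_window:]
            if max(window) > 0 and (max(window) - min(window)) / max(window) < stable_eps:
                return True
        if len(qps) > drop_window:             # condition 5: qps keeps falling
            tail = qps[-(drop_window + 1):]
            if all(a > b for a, b in zip(tail, tail[1:])):
                return True
        return False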
If none of the above test ending conditions is detected in step 1405, flow may proceed to step 1406. In step 1406, the incremental test concurrency number may be added to the current test concurrency number to obtain the concurrency number for the next round.
If a test ending condition is detected in step 1405, flow may proceed to step 1407. In step 1407, the data from the overall stress test task may be analyzed to obtain the test result. Illustratively, the maximum qps of the AI engine may be derived from the per-round test data, e.g., as the maximum of the qps values of the individual rounds, and the average delay of the requests in the round corresponding to that maximum qps may also be calculated. Alternatively, a line graph may be generated from the per-round test data, such as the line graph 1500 depicted in FIG. 15, which shows the qps and average response time (i.e., average delay) for concurrency numbers of 30 to 70; for this AI engine under test, the maximum qps may be determined to be 60, at which point the average response time is 488 ms, with the average response time increasing dramatically as the request volume continues to grow.
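Given the per-round RoundStats records above, the analysis of step 1407 can then be as simple as a scan for the best round (a sketch; the report fields are illustrative):

    def summarize(rounds):
        """Report the maximum qps over all rounds together with the average
        delay of the round in which that maximum was reached."""
        best = max(rounds, key=lambda r: r.qps)
        return {"max_qps": best.qps, "avg_delay_ms_at_max_qps": best.avg_delay_ms}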
It should be understood that the above description of the accuracy and stress testing processes is merely exemplary; in particular, the various parameters mentioned are only illustrative, and the invention is not limited to the specifically illustrated examples. In addition, the AI engine test platform may perform other types of tests on the AI engine as needed, such as robustness tests, and is not limited to the accuracy test or stress test described above.
In addition, in some embodiments, after a test task has been performed, the user may view, via the user interface of the AI engine test platform, the standard output data obtained for the sample data in the sample package used for testing, and confirm, modify, or otherwise operate on that data; the AI engine test platform may then update the labeling information of the corresponding sample data based on the standard output data and the user's operations. That is, following steps 210-270, the annotation information of the corresponding sample data may be updated from the standard output data produced during the test task. This can further improve the accuracy of the labeling information in the labeled sample packages.
FIG. 16 schematically illustrates an example block diagram of an AI engine test platform 1600 in accordance with some embodiments of the invention. As shown, the AI engine test platform 1600 can include a first user interface 1610, a providing module 1620, an obtaining module 1630, a display module 1640, a second user interface 1650, a generating module 1660, and a forming module 1670.
In particular, the first user interface 1610 may be configured to receive a first user input comprising an artificial intelligence engine identification indicating at least one artificial intelligence engine and a sample data identification indicating at least one sample data item to be provided to the at least one artificial intelligence engine; the providing module 1620 may be configured to provide the at least one sample data item to the at least one artificial intelligence engine; the acquisition module 1630 may be configured to acquire standard output data from the at least one artificial intelligence engine, the standard output data comprising data generated by the artificial intelligence engine based on the respective sample data; the display module 1640 may be configured to display at least one item of the standard output data and the corresponding sample data; the second user interface 1650 may be configured to receive a user operation on the at least one item of standard output data; the generation module 1660 may be configured to generate annotation information for the corresponding sample data based on the at least one item of standard output data and the operation; and the forming module 1670 may be configured to form at least one annotated sample package based at least in part on the generated annotation information and the corresponding sample data.
The AI engine test platform 1600 can be deployed on the AI engine test platform server 110 shown in FIG. 1, or a combination of the AI engine test platform server 110 and the I/O devices 116 or the terminal devices 120. It should be appreciated that the AI engine test platform 1600 may be implemented in software, hardware, or a combination of software and hardware. The different modules may be implemented in the same software or hardware structure or one module may be implemented by different software or hardware structures.
In addition, the AI engine test platform 1600 can be used to implement the AI engine test methods described above, with the relevant details already described in detail above and not repeated here for brevity. The AI engine test platform 1600 may have the same features and advantages as described with respect to AI engine test methods.
FIG. 17 schematically illustrates an example block diagram of a terminal 1700 for artificial intelligence engine testing in accordance with some embodiments of the invention. As shown, the terminal 1700 includes an input interface 1710, a processor 1720, and a display device 1730.
In particular, the input interface 1710 may be configured to receive input data comprising an engine identification to be tested and a sample package identification; the processor 1720 may be configured to provide sample data in at least one sample package to at least one artificial intelligence engine to be tested and to obtain standard output data from the at least one artificial intelligence engine to be tested; and the display device 1730 may be configured to display, for comparison, visualized aggregate test results generated based on the standard output data for the at least one artificial intelligence engine to be tested, wherein, when the engines to be tested include different classes of engines, different evaluation index results are displayed in the aggregate test results according to engine class.
It will be appreciated that for each artificial intelligence engine to be tested, its testing process may be performed as described above with respect to steps 281-284 in FIG. 2B, which are described in detail above and not further described herein. The test results for each artificial intelligence engine to be tested may then be displayed in aggregate, for example, in various visual forms such as text, tables, graphs, histograms, pie charts, and the like. For example, the aggregate test results may be as shown in fig. 12A, 12B.
Fig. 18 illustrates a block diagram of a computing device 1800, according to some embodiments of the invention. Computing device 1800 can be a variety of different types of devices, such as a server computer, a device associated with a client (e.g., a client device), a system-on-a-chip, and/or any other suitable computing device or computing system. For example, it may represent the AI engine test platform server 110 of fig. 1, or a combination of the AI engine test platform server 110 and the I/O device 116 or the terminal device 120.
The computing device 1800 may include at least one processor 1802, a memory 1804, communication interface(s) 1806, a display device 1808, other input/output (I/O) devices 1810, and at least one mass storage device 1812, capable of communicating with each other, such as through a system bus 1814 or other suitable connection.
The processor 1802 may be a single processing unit or multiple processing units, all of which may include a single or multiple computing units or multiple cores. The processor 1802 may be implemented as at least one microprocessor, microcomputer, microcontroller, digital signal processor, central processing unit, state machine, logic circuitry, and/or any device that manipulates signals based on operational instructions. The processor 1802 can be configured to, among other capabilities, obtain and execute computer-readable instructions stored in the memory 1804, mass storage device 1812, or other computer-readable medium, such as program code of the operating system 1816, program code of the application programs 1818, program code of other programs 1820, etc., to implement AI engine testing methods provided by embodiments of the present invention.
The memory 1804 and mass storage device 1812 are examples of computer storage media for storing instructions that are executed by the processor 1802 to implement the various functions as previously described. For example, the memory 1804 may generally include both volatile memory and nonvolatile memory (e.g., RAM, ROM, etc.). In addition, the mass storage device 1812 may generally include hard disk drives, solid state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CDs, DVDs), storage arrays, network attached storage, storage area networks, and the like. The memory 1804 and the mass storage device 1812 may both be referred to herein as memory or computer storage media, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that may be executed by the processor 1802 as a particular machine configured to implement the operations and functions described in the examples herein.
A number of program modules may be stored on the mass storage device 1812. These programs include an operating system 1816, at least one application program 1818, other programs 1820, and program data 1822, and may be loaded into the memory 1804 for execution. Examples of such application programs or program modules may include, for example, computer program logic (e.g., computer program code or instructions) for implementing the modules shown in fig. 16.
Although illustrated in fig. 18 as being stored in the memory 1804 of the computing device 1800, the modules 1816, 1818, 1820, and 1822, or portions thereof, may be implemented using any form of computer readable media accessible by the computing device 1800. As used herein, "computer-readable medium" includes at least two types of computer-readable media, namely computer storage media and communication media.
Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information for access by a computing device.
In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism. Computer storage media as defined herein do not include communication media.
The computing device 1800 may also include one or more communication interfaces 1806 for exchanging data with other devices, such as via a network or a direct connection, as discussed above. The communication interface(s) 1806 may be used, for example, by the AI engine test platform server 110 to communicate with the AI engine server 114, the remote terminal device 120, and so forth. The communication interface 1806 may facilitate communication over a variety of networks and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the Internet, and so forth, and may also provide communication with external storage devices (not shown), such as in a storage array, network attached storage, or storage area network.
In some examples, a display device 1808, such as a monitor, may be included for displaying information and images. Other I/O devices 1810 may be devices that receive various inputs from a user and provide various outputs to the user, and may include touch input devices, gesture input devices, cameras, keyboards, remote controls, mice, printers, audio input/output devices, and so on.
Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed subject matter, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (13)

1. An artificial intelligence engine testing method comprising:
receiving a first user input, the first user input comprising an artificial intelligence engine identification and a sample data identification, the artificial intelligence engine identification comprising a first identification and a second identification, the first identification indicating an engine class of an artificial intelligence engine, the second identification indicating at least one artificial intelligence engine belonging to the engine class, the sample data identification indicating at least one sample data for provision to the at least one artificial intelligence engine, each sample data in the at least one sample data having a standard input parameter format corresponding to the engine class;
For each of the at least one artificial intelligence engine, mapping the at least one sample data into an engine input usable by the artificial intelligence engine based on a predefined first mapping mechanism corresponding to the artificial intelligence engine, and providing the engine input to the artificial intelligence engine;
for each of the at least one artificial intelligence engine, obtaining an engine output from the artificial intelligence engine and mapping the engine output to standard output data based on a predefined second mapping mechanism corresponding to the artificial intelligence engine, wherein the engine output is generated by the artificial intelligence engine based on the engine input, the standard output data having a standard output parameter format corresponding to the engine class;
displaying at least one standard output data and corresponding sample data in the standard output data;
receiving an operation of a user on the at least one standard output data;
generating labeling information of corresponding sample data based on the at least one standard output data and the operation;
forming at least one annotated sample packet for the engine class based on the generated annotation information and corresponding sample data;
In response to a second user input, testing at least one artificial intelligence engine to be tested using at least one of the at least one annotated sample packet and generating a test result, the at least one artificial intelligence engine to be tested belonging to the engine class, the test result comprising a test index corresponding to the engine class.
2. The method of claim 1, wherein the displaying at least one of the standard output data comprises:
receiving a screening operation of a user for the standard output data, wherein the screening operation designates that the standard output data be screened based on at least one of the following criteria: a specific error code exists, the standard output data comprises a specific parameter, the standard output data comprises a parameter within a specific range;
screening out at least one standard output data meeting the at least one standard;
displaying the at least one standard output data.
3. The method of claim 1, wherein the annotated sample packet comprises at least one of:
the sample list comprises at least one of a sample data identifier, a sample data type identifier and a labeling information identifier;
A sample data packet comprising a plurality of sample data, each sample data having a sample data identification;
the label information package comprises a plurality of label information, and each label information is provided with a label information identifier.
4. The method of claim 1, wherein the testing at least one artificial intelligence engine to be tested using at least one of the at least one labeled sample package and generating test results in response to a second user input comprises:
receiving the second user input, the second user input comprising an engine identification to be tested indicating the at least one artificial intelligence engine to be tested and a sample package identification indicating the at least one sample package of the at least one labeled sample package;
providing sample data in the at least one sample packet to the at least one artificial intelligence engine to be tested;
obtaining standard output data from the at least one artificial intelligence engine to be tested;
the test results are generated based at least in part on the acquired standard output data.
5. The method of claim 4, wherein the generating the test result based at least in part on the obtained standard output data comprises:
comparing the obtained standard output data with the labeling information of the corresponding sample data, and generating the test result based on the comparison result.
6. The method of claim 4, wherein the generating the test result based at least in part on the obtained standard output data comprises:
generating the test result based on at least one of: the number of standard output data acquired, the number of sample data provided, the time at which the standard output data is acquired, the time at which the sample data is provided.
7. The method of claim 6, further comprising:
initializing test parameters, the test parameters including at least one of: initial test concurrency number, single round test duration, incremental test concurrency number,
wherein said providing sample data in said at least one sample packet to said at least one artificial intelligence engine to be tested comprises: the engine input is provided to the artificial intelligence engine based on the test parameters.
8. The method of claim 7, further comprising:
ending the test upon detection of a test ending condition, the test ending condition comprising at least one of:
The number of the test rounds reaches a threshold value of the number of the rounds;
in the latest test round, the average value of time delay of the artificial intelligence engine for a plurality of sample data is larger than or equal to a time delay threshold value, wherein the time delay is a time difference between the time of providing the sample data and the time of acquiring corresponding standard output data;
in the latest test round, the response rate of the artificial intelligence engine is smaller than a response rate threshold value, wherein the response rate is the ratio of the number of acquired standard output data to the number of provided sample data;
in the test rounds of the latest threshold times, the change amplitude of the query rate per second of the artificial intelligence engine is lower than an amplitude threshold, wherein the query rate per second is the ratio of the number of acquired standard output data to the time difference between providing first sample data and acquiring last standard output data;
the query rate per second of the artificial intelligence engine decreases continuously during the most recent threshold number of test rounds.
9. The method of claim 4, wherein the engine identification to be tested indicates a plurality of artificial intelligence engines to be tested, and
wherein the generating test results based at least in part on the acquired standard output data comprises: and generating summarized test results for the plurality of artificial intelligence engines to be tested.
10. An artificial intelligence engine test platform comprising:
a first user interface configured to receive a first user input comprising an artificial intelligence engine identification and a sample data identification, the artificial intelligence engine identification comprising a first identification indicating an engine class of an artificial intelligence engine and a second identification indicating at least one artificial intelligence engine belonging to the engine class, the sample data identification indicating at least one sample data for provision to the at least one artificial intelligence engine, each of the at least one sample data having a standard input parameter format corresponding to the engine class;
a providing module configured to map, for each artificial intelligence engine, the at least one sample data into an engine input usable by the artificial intelligence engine based on a predefined first mapping mechanism corresponding to the artificial intelligence engine, and to provide the engine input to the artificial intelligence engine;
an acquisition module configured to acquire, for each of the at least one artificial intelligence engine, an engine output from the artificial intelligence engine and map the engine output to standard output data based on a predefined second mapping mechanism corresponding to the artificial intelligence engine, wherein the engine output is generated by the artificial intelligence engine based on the engine input, the standard output data having a standard output parameter format corresponding to the engine class;
A display module configured to display at least one of the standard output data and corresponding sample data;
a second user interface configured to receive a user operation on the at least one standard output data;
a generation module configured to generate annotation information for the corresponding sample data based on the at least one standard output data and the operation;
a forming module configured to form at least one annotated sample packet for the engine class based at least in part on the generated annotation information and corresponding sample data;
a test module configured to: in response to a second user input, testing at least one artificial intelligence engine to be tested using at least one of the at least one annotated sample packet and generating a test result, the at least one artificial intelligence engine to be tested belonging to the engine class, the test result comprising a test index corresponding to the engine class.
11. A terminal for artificial intelligence engine testing, comprising:
an input interface for receiving input data, the input data comprising an engine identity to be tested and a sample packet identity, the engine identity to be tested comprising a first identity and a second identity, the first identity indicating at least one engine class of an engine to be tested, the second identity indicating at least one artificial intelligence engine belonging to the at least one engine class, the sample packet identity indicating at least one sample packet, each sample packet of the at least one sample packet being for one of the at least one engine class, and wherein each sample data has a standard input parameter format corresponding to the engine class;
A processor for providing sample data in at least one sample packet to the at least one artificial intelligence engine to be tested to obtain standard output data from the at least one artificial intelligence engine to be tested;
the display device is used for comparing and displaying visual summarized test results which are generated based on the standard output data and are specific to the at least one artificial intelligence engine to be tested, when the at least one artificial intelligence engine to be tested contains artificial intelligence engines belonging to different engine types, different evaluation index results are displayed in the summarized test results according to the different engine types, and the processor is further configured to: for each artificial intelligence engine to be tested, mapping the sample data into engine inputs usable by the artificial intelligence engine to be tested based on a first mapping mechanism defined in advance, providing the engine inputs to the artificial intelligence engine to be tested, and obtaining engine outputs from the artificial intelligence engine and mapping the engine outputs into standard output data based on a second mapping mechanism defined in advance, wherein the engine outputs are generated by the artificial intelligence engine based on the engine inputs, the standard output data having a standard out-of-reference format corresponding to an engine class of the artificial intelligence engine.
12. A computing device comprising a memory and a processor, the memory configured to store thereon computer-executable instructions that, when executed on the processor, perform the method of any of claims 1-9.
13. A computer readable storage medium having stored thereon computer executable instructions which, when executed on a processor, perform the method of any of claims 1-9.
CN202010933841.3A 2020-09-08 2020-09-08 Artificial intelligence engine testing method, platform, terminal, computing device and storage medium Active CN112817839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010933841.3A CN112817839B (en) 2020-09-08 2020-09-08 Artificial intelligence engine testing method, platform, terminal, computing device and storage medium

Publications (2)

Publication Number Publication Date
CN112817839A CN112817839A (en) 2021-05-18
CN112817839B true CN112817839B (en) 2024-03-12

Family

ID=75853129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010933841.3A Active CN112817839B (en) 2020-09-08 2020-09-08 Artificial intelligence engine testing method, platform, terminal, computing device and storage medium

Country Status (1)

Country Link
CN (1) CN112817839B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997762A (en) * 2017-03-08 2017-08-01 广东美的制冷设备有限公司 The sound control method and device of household electrical appliance
CN107292154A (en) * 2017-06-09 2017-10-24 北京奇安信科技有限公司 A kind of terminal feature recognition methods and system
CN107368410A (en) * 2017-06-14 2017-11-21 腾讯科技(深圳)有限公司 The performance test methods and device of game engine, storage medium and electronic installation
CN109635838A (en) * 2018-11-12 2019-04-16 平安科技(深圳)有限公司 Face samples pictures mask method, device, computer equipment and storage medium
CN109947651A (en) * 2019-03-21 2019-06-28 上海智臻智能网络科技股份有限公司 Artificial intelligence engine optimization method and device
CN110210294A (en) * 2019-04-23 2019-09-06 平安科技(深圳)有限公司 Evaluation method, device, storage medium and the computer equipment of Optimized model
CN110598115A (en) * 2019-09-18 2019-12-20 北京市博汇科技股份有限公司 Sensitive webpage identification method and system based on artificial intelligence multi-engine
CN110782998A (en) * 2019-10-12 2020-02-11 平安医疗健康管理股份有限公司 Data auditing method and device, computer equipment and storage medium
CN110798357A (en) * 2019-11-05 2020-02-14 上海景域文化传播股份有限公司 API communication device and method based on ticket S-GDS data mapping protocol
CN110796270A (en) * 2019-10-25 2020-02-14 深圳市超算科技开发有限公司 Machine learning model selection method
CN110826908A (en) * 2019-11-05 2020-02-21 北京推想科技有限公司 Evaluation method and device for artificial intelligent prediction, storage medium and electronic equipment
CN111598099A (en) * 2020-05-18 2020-08-28 腾讯科技(深圳)有限公司 Method and device for testing image text recognition performance, testing equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11188838B2 (en) * 2018-01-30 2021-11-30 Salesforce.Com, Inc. Dynamic access of artificial intelligence engine in a cloud computing architecture
US11301951B2 (en) * 2018-03-15 2022-04-12 The Calany Holding S. À R.L. Game engine and artificial intelligence engine on a chip

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Making Sense of Pharmacovigilance and Drug Adverse Event Reporting: Comparative Similarity Association Analysis Using AI Machine Learning Algorithms in Dogs and Cats; Xuan Xu, PhD et al.; ScienceDirect; 2019-12-31; Vol. 37; full text *
He Jun. Case Analysis of E-Government Projects: A Path to Government Informatization Practice Driven by Business Requirements. Yunnan University Press, 2018, pp. 244-247. *
Chen Shuang. Research and Implementation of Recognition Technology for Face Images with Multiple Variations. China Masters' Theses Full-text Database, Information Science and Technology. 2015, (No. 07), full text. *

Also Published As

Publication number Publication date
CN112817839A (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN109684047A (en) Event-handling method, device, equipment and computer storage medium
US20170099560A1 (en) System, a method and a computer program product for automated remote control
CN109446071A (en) Interface test method, interface test device, electronic equipment and storage medium
CN109308490B (en) Method and apparatus for generating information
CN109145828B (en) Method and apparatus for generating video category detection model
CN110321273A (en) A kind of business statistical method and device
CN111401722B (en) Intelligent decision method and intelligent decision system
CN110276074B (en) Distributed training method, device, equipment and storage medium for natural language processing
CN113515453B (en) Webpage testing system
CN109408375A (en) The generation method and device of interface document
WO2019100635A1 (en) Editing method and apparatus for automated test script, terminal device and storage medium
CN111815169A (en) Business approval parameter configuration method and device
CN111814759B (en) Method and device for acquiring face quality label value, server and storage medium
CN111352836A (en) Pressure testing method and related device
CN111930614B (en) Automatic testing method, device, equipment and medium
US20230036072A1 (en) AI-Based Method and System for Testing Chatbots
CN113821254A (en) Interface data processing method, device, storage medium and equipment
CN112817839B (en) Artificial intelligence engine testing method, platform, terminal, computing device and storage medium
US20230072123A1 (en) Method and system for automating analysis of log data files
CN112182413B (en) Intelligent recommendation method and server based on big teaching data
US10885343B1 (en) Repairing missing frames in recorded video with machine learning
CN113886221A (en) Test script generation method and device, storage medium and electronic equipment
CN110750727A (en) Data processing method, device, system and computer readable storage medium
CN113204654B (en) Data recommendation method, device, server and storage medium
CN112817635B (en) Model processing method and data processing system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40048351

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant