CN116974898A - Data processing method, device, equipment and computer readable storage medium - Google Patents



Publication number
CN116974898A
CN116974898A (application number CN202310096599.2A)
Authority
CN
China
Prior art keywords
index
model
result
reasoning
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310096599.2A
Other languages
Chinese (zh)
Inventor
赖文星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310096599.2A
Publication of CN116974898A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3604Software analysis for verifying properties of programs
    • G06F11/3612Software analysis for verifying properties of programs by runtime analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a data processing method, apparatus, device, and computer readable storage medium. The method comprises: acquiring a first model and an evaluation data set according to a first model evaluation request; running inference code to generate a reasoning result of the first model for the evaluation data set, and generating a parameter list comprising a first value of a dynamic parameter; running first index generation code to generate a first index associated with the reasoning result and the parameter list, the inference code and the first index generation code being in a decoupling relation; acquiring a second value according to a parameter update request for the first value; and running the first index generation code with the reasoning result, based on the decoupling relation, to generate a second index associated with the reasoning result and the second value. With this method and apparatus, the waste of computing resources can be reduced and the generation efficiency of the second index improved. The embodiments of the application can be applied to various scenarios such as cloud technology, artificial intelligence, intelligent transportation, and assisted driving.

Description

Data processing method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a data processing method, apparatus, device, and computer readable storage medium.
Background
Model evaluation is an indispensable link in the machine learning model lifecycle; it measures every aspect of a model's effectiveness and plays a decisive role in whether the model can ultimately be deployed for inference. Model evaluation mainly comprises three steps: 1. run inference on the model with the evaluation data set and obtain the results; 2. compare the reasoning results with the labels (i.e., the correct results) of the evaluation data set and calculate the indexes; 3. read and display the indexes.
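The three steps above can be sketched as a minimal pipeline. This is a toy illustration with an invented parity "model"; none of the function or variable names come from the application:

```python
def run_inference(model, eval_set):
    # Step 1: run the model over every sample in the evaluation data set
    return [model(sample) for sample, _label in eval_set]

def compute_index(predictions, eval_set):
    # Step 2: compare predictions with the labels (correct results)
    hits = sum(p == label for p, (_s, label) in zip(predictions, eval_set))
    return hits / len(eval_set)

def display_index(name, value):
    # Step 3: read and display the index
    print(f"{name}: {value:.4f}")

# toy "model" predicting the parity of an integer; one label is wrong on
# purpose so the accuracy index lands below 1.0
eval_set = [(1, "odd"), (2, "even"), (3, "odd"), (4, "odd")]
model = lambda x: "odd" if x % 2 else "even"
preds = run_inference(model, eval_set)
accuracy = compute_index(preds, eval_set)
display_index("accuracy", accuracy)
```

The point of the patent is that step 1 is far more expensive than steps 2 and 3, which motivates running it only once.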
In the prior art, each inference scenario has its own set of coupled evaluation code that defines the model inference, index calculation, and index display processes together. When evaluating a model, the evaluator must set the corresponding parameters in advance; for example, a classification scenario includes a confidence threshold parameter and an intersection-over-union (IoU) threshold parameter. Suppose that for the first evaluation the IoU threshold is set to 0.7 and the confidence threshold to 0.8; the whole set of evaluation code is then run to obtain the reasoning result and a first index. Because the evaluation parameters are set in advance, it is unavoidable that the existing parameters sometimes fail to meet expectations. If the confidence threshold is changed from 0.8 to 0.75, a second evaluation task of the model is started, and at this point the whole set of evaluation code is rerun to obtain the reasoning result and a second index. Clearly, in the prior art every evaluation of the model requires running the whole set of evaluation code to regenerate the model's reasoning result before combining it with the preset parameters to obtain the index; but the model inference process consumes a large amount of computing resources, which reduces the efficiency of index generation.
Disclosure of Invention
The embodiment of the application provides a data processing method, a device, equipment and a computer readable storage medium, which can not only reduce the waste of computing resources, but also improve the generation efficiency of a second index.
In one aspect, an embodiment of the present application provides a data processing method, including:
acquiring a first model evaluation request for requesting evaluation of a first model, and acquiring, according to the first model evaluation request, the first model and an evaluation data set for evaluating the first model;
running an inference code associated with the model type of the first model, generating an inference result of the first model for the evaluation dataset, and generating a parameter list associated with the inference result; the parameter list comprises a first value of the dynamic parameter;
operating a first index generation code associated with the first model by utilizing the reasoning result to generate a first index associated with the reasoning result and the parameter list; the reasoning codes and the first index generation codes are in decoupling relation;
acquiring a second value for updating the first value according to the parameter updating request aiming at the first value;
operating a first index generation code based on the decoupling relation by utilizing the reasoning result to generate a second index related to the reasoning result and the second value; the first index and the second index are used for indicating the model reasoning capacity of the first model.
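The decoupled flow in the steps above can be sketched as follows. This is a hypothetical Python illustration; the class, method, and parameter names are invented for the sketch and the "index" shown is a simple threshold-dependent precision, not a metric prescribed by the application:

```python
class Evaluator:
    """Sketch of decoupled inference and index generation."""

    def __init__(self, model, eval_set):
        self.model = model          # scoring function: sample -> confidence
        self.eval_set = eval_set    # list of (sample, label) pairs
        self._inference_result = None

    def infer(self):
        # Inference code: run once, then reuse the cached reasoning result.
        if self._inference_result is None:
            self._inference_result = [self.model(s) for s, _ in self.eval_set]
        return self._inference_result

    def index(self, confidence_threshold):
        # Index generation code: depends only on the cached reasoning result
        # and the dynamic parameter, so updating the parameter does not
        # trigger a rerun of the inference code.
        scores = self.infer()
        kept = [(sc, lab) for sc, (_, lab) in zip(scores, self.eval_set)
                if sc >= confidence_threshold]
        if not kept:
            return 0.0
        # precision: fraction of retained predictions whose label is positive
        return sum(lab for _, lab in kept) / len(kept)

# usage: a toy scorer that records how often inference actually runs;
# the index is computed twice with different parameter values, yet every
# sample is scored exactly once
calls = []
scorer = lambda x: (calls.append(x), x / 10)[1]
ev = Evaluator(scorer, [(9, 1), (8, 1), (7, 0), (2, 0)])
first_index = ev.index(0.8)    # first value of the dynamic parameter
second_index = ev.index(0.7)   # updated second value; no re-inference
```

The caching in `infer` stands in for the decoupling relation: the second index is produced purely from the stored reasoning result and the new parameter value.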
In one aspect, an embodiment of the present application provides a data processing apparatus, including:
the first acquisition module is used for acquiring a first model evaluation request for requesting evaluation of the first model, and for acquiring, according to the first model evaluation request, the first model and an evaluation data set for evaluating the first model;
the first generation module is used for running an inference code associated with the model type of the first model, generating an inference result of the first model aiming at the evaluation data set, and generating a parameter list associated with the inference result; the parameter list comprises a first value of the dynamic parameter;
the second generation module is used for running a first index generation code associated with the first model by utilizing the reasoning result to generate a first index associated with the reasoning result and the parameter list; the reasoning codes and the first index generation codes are in decoupling relation;
the second acquisition module is used for acquiring a second value for updating the first value according to the parameter updating request aiming at the first value;
the third generation module is used for utilizing the reasoning result based on the decoupling relation, running the first index generation code and generating a second index related to the reasoning result and the second value; the first index and the second index are used for indicating the model reasoning capacity of the first model.
Wherein, the data processing device still includes:
the third acquisition module is used for acquiring a second model evaluation request for requesting to evaluate a second model and acquiring the second model according to the second model evaluation request;
the third acquisition module is further used for acquiring an inference code from the inference code set if the model type of the second model is the same as the model type of the first model;
the fourth acquisition module is used for acquiring a second index generation code associated with the reasoning scene of the second model in the index generation code set if the reasoning scene of the second model is different from the reasoning scene of the first model; the reasoning scene of the second model belongs to the reasoning scene under the model type of the first model; the first index generating code and the second index generating code are the same, and the first index generating code refers to a code associated with an inference scene of the first model in the index generating code set;
the fourth generation module is used for generating codes according to the reasoning codes and the second indexes and generating third indexes aiming at the second model; the third index is used to indicate the model reasoning capabilities of the second model.
Wherein, the first generation module includes:
A first acquisition unit configured to acquire a first parameter generation code associated with an inference scene of a first model;
the first operation unit is used for taking the reasoning result and the evaluation data set as first input parameters of the first parameter generation code if the first model evaluation request does not carry the newly added parameter name, and operating the first parameter generation code containing the first input parameters to generate a parameter list;
the parameter adding unit is used for adding the newly added parameter name into the first parameter generating code if the first model evaluating request carries the newly added parameter name, so as to obtain a second parameter generating code aiming at the first model;
and the second operation unit is used for taking the reasoning result and the evaluation data set as second input parameters of the second parameter generation code, operating the second parameter generation code containing the second input parameters and generating a parameter list.
Wherein, the second generation module includes:
a first generation unit configured to generate a first index generation code associated with the first model;
the third operation unit is used for taking the reasoning result and the parameter list as third input parameters of the first index generation code, operating the first index generation code containing the third input parameters and generating a prediction result;
And the first determining unit is used for determining a result error between the predicted result and a correct result in the evaluation data set, and determining a first index according to the result error.
Wherein the first generation unit includes:
a first obtaining subunit, configured to obtain, in the index generating code set, an initial index generating code associated with an inference scene of the first model;
the first determining subunit is configured to determine the initial index generating code as a first index generating code if the first model evaluating request does not carry the new index name;
and the second determining subunit is used for generating a new index generating code aiming at the new index name if the first model evaluating request carries the new index name, and determining the new index generating code and the initial index generating code as the first index generating code.
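The selection of index generation codes described above can be pictured as a small registry keyed by index name. This is a hypothetical sketch; the names `INITIAL_INDEX_CODES`, `accuracy`, and `error_rate` are invented and the application does not prescribe an implementation:

```python
# Initial index generation codes associated with the inference scenario.
INITIAL_INDEX_CODES = {
    "accuracy": lambda preds, labels:
        sum(p == l for p, l in zip(preds, labels)) / len(labels),
}

def select_index_codes(new_index_name=None, new_index_code=None):
    codes = dict(INITIAL_INDEX_CODES)
    if new_index_name is not None:
        # The evaluation request carries a newly added index name: register
        # its code so it runs alongside the initial index generation codes.
        codes[new_index_name] = new_index_code
    return codes

# usage: a request that carries the new index name "error_rate"
codes = select_index_codes(
    "error_rate",
    lambda preds, labels:
        sum(p != l for p, l in zip(preds, labels)) / len(labels))
```

When the request carries no new index name, `select_index_codes()` simply returns the initial codes, matching the first branch above.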
Wherein, the second generation module further includes:
the second generation unit is used for acquiring the index integral display type and generating integral indexes corresponding to the first indexes through the index integral display type; the overall index comprises an index name corresponding to the first index and an index value corresponding to the first index;
the third generation unit is used for acquiring the index advanced display type and generating an advanced index corresponding to the first index through the index advanced display type; the advanced index is used for indicating the index displayed in a chart mode, and comprises index names corresponding to each evaluation type in the evaluation data set and index values corresponding to each evaluation type; the index name corresponding to each evaluation type and the index value corresponding to each evaluation type belong to the first index;
And the association storage unit is used for carrying out association storage on the integral index and the advanced index.
Wherein, the data processing device still includes:
a fifth acquisition module, configured to acquire an inference result processing code associated with the inference scenario of the first model, and take the inference result and the parameter list as a fourth input parameter of the inference result processing code;
the fifth generation module is used for running an reasoning result processing code containing the fourth input parameter and generating attribute information corresponding to the reasoning result and a reasoning conclusion corresponding to the reasoning result;
the result determining module is used for determining the reasoning conclusion, the first index and the prediction result as evaluation results; the prediction result is generated based on the reasoning result and the parameter list;
the index establishing module is used for establishing an index relation between the attribute information and the evaluation result, and storing the attribute information with the index relation and the evaluation result in an evaluation task file corresponding to the first model; the evaluation task file is used for returning a target evaluation result associated with key query information carried by the evaluation result query request in the evaluation result when the evaluation result query request aiming at the first model is obtained.
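The index relation between attribute information and evaluation results can be sketched as a plain key-value mapping. This is a hypothetical minimal model; the field names in the stored result are invented:

```python
# Hypothetical evaluation-task file: attribute information is the index key,
# so a later query carrying key query information retrieves the matching
# evaluation result directly instead of scanning all results.
evaluation_task_file = {}

def store_result(attribute_info, evaluation_result):
    # establish the index relation between attribute info and the result
    evaluation_task_file[attribute_info] = evaluation_result

def query_result(key_query_info):
    # return the target evaluation result, or None if nothing matches
    return evaluation_task_file.get(key_query_info)

store_result("sample_0001", {"conclusion": "correct", "first_index": 0.92})
```

A real implementation would persist this mapping to a file or database, but the lookup-by-attribute behavior is the same.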
Wherein, the data processing device still includes:
the sixth acquisition module is used for acquiring an evaluation result query request aiming at the first model; the evaluating result query request carries key query information;
the information matching module is used for matching the key query information with the attribute information according to the query request of the evaluation result;
and the information matching module is also used for acquiring a target evaluation result with an index relation with the target attribute information from the evaluation results if the target attribute information in the attribute information is matched with the key query information.
The number of the first index generation codes is A, where A is a positive integer; the A first index generation codes include a first index generation code B_c, where c is a positive integer and c is less than or equal to A. The number of the first indexes is also A; the A first indexes include a first index E_c generated by running the first index generation code B_c.
A third generation module, comprising:
a second acquisition unit, used for acquiring an influence parameter name list D_c of the first index generation code B_c; the influence parameter name list D_c includes the parameter names used for generating the first index E_c;
a first matching unit, used for matching the parameter names included in the influence parameter name list D_c against the parameter name of the second value;
a fourth generation unit, used for taking the reasoning result and the second value, through the decoupling relation, as a fifth input parameter of the first index generation code B_c, if a parameter name matching the second value exists among the parameter names included in the influence parameter name list D_c;
the fourth generation unit is further used for running the first index generation code B_c containing the fifth input parameter to generate a prediction result F_c, and comparing the prediction result F_c with the correct results in the evaluation data set to obtain the second index.
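The influence-list mechanism can be sketched as follows: each index generation code B_c declares the parameter names D_c that influence it, and on a parameter update only the codes whose list contains the updated name are rerun. The index names `precision` and `latency` are invented for this sketch:

```python
# Each index generation code declares its influence parameter name list D_c.
# A metric whose list does not contain the updated parameter is skipped.
index_codes = {
    "precision": {"influences": ["confidence_threshold", "iou_threshold"]},
    "latency":   {"influences": []},  # thresholds do not affect latency
}

def codes_to_rerun(updated_param_name):
    return [name for name, code in index_codes.items()
            if updated_param_name in code["influences"]]
```

So updating `confidence_threshold` reruns only the precision code, and an update to a parameter no code depends on reruns nothing.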
Wherein, the third generation module includes:
the second matching unit is used for matching the second valued parameter name and the reasoning result name of the reasoning result through the decoupling relation;
the third acquisition unit is used for acquiring a local reasoning result name matched with the second valued parameter name in the reasoning result names, and acquiring a local reasoning result indicated by the local reasoning result name in the reasoning result;
and the fifth generation unit is used for taking the local reasoning result and the second value as a sixth input parameter of the first index generation code, running the first index generation code containing the sixth input parameter and generating a second index.
Wherein, the third generation module includes:
The second determining unit is used for responding to the parameter updating request in the main service and determining a first index generating code and an reasoning result through a decoupling relation;
a sixth generation unit, configured to generate an index regeneration request according to the second value, the first index generation code, and the inference result, and send the index regeneration request to the affiliated service;
and a seventh generating unit, configured to execute the first index generating code according to the index regeneration request in the affiliated service, and generate a second index associated with the reasoning result and the second value.
Wherein the affiliated service includes at least two computing components;
a sixth generation unit including:
the second obtaining subunit is used for determining component states corresponding to at least two computing components respectively, obtaining the computing components with the component states being component starting states in the at least two computing components, and generating a starting computing component set from the obtained computing components;
a third determining subunit, configured to determine G wait queue lengths of computing components in the set of starting computing components, and obtain a minimum wait queue length from the G wait queue lengths; starting a computing component in the computing component set to correspond to a waiting queue length; g is a positive integer, and G is less than or equal to the total number of at least two computing components;
A first sending subunit, configured to send the index regeneration request to a first computing component corresponding to the minimum waiting queue length if the minimum waiting queue length is less than the waiting queue length threshold; the first computing component belongs to the starting computing component set;
the seventh generating unit is specifically configured to execute, in the first computing component, the first index generating code according to the index regeneration request, and generate a second index associated with the inference result and the second value.
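The dispatch rule above can be sketched with a hypothetical data model (the `state` and `queue_len` field names are invented): pick the started component with the shortest waiting queue, provided that queue is under the threshold.

```python
def pick_component(components, queue_len_threshold):
    """Among started components, return the one with the minimum waiting
    queue length if it is below the threshold; otherwise return None."""
    started = [c for c in components if c["state"] == "started"]
    if not started:
        return None
    best = min(started, key=lambda c: c["queue_len"])
    return best if best["queue_len"] < queue_len_threshold else None

# usage: component 2 has the shortest queue among the started components
comps = [{"id": 1, "state": "started", "queue_len": 5},
         {"id": 2, "state": "started", "queue_len": 2},
         {"id": 3, "state": "idle", "queue_len": 0}]
chosen = pick_component(comps, 4)
```

Returning `None` when even the best queue exceeds the threshold leaves room for the scale-up path the description covers next.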
Wherein the affiliated service includes at least two computing components;
a sixth generation unit including:
the third obtaining subunit is used for determining component states corresponding to at least two computing components respectively, obtaining the computing components with the component states being component starting states in the at least two computing components, and generating a starting computing component set from the obtained computing components;
a fourth determining subunit, configured to determine G wait queue lengths of computing elements in the set of starting computing elements, and if an average wait queue length corresponding to the G wait queue lengths exceeds an average wait queue length threshold, start a computing element in which an element state is an element idle state in at least two computing elements; starting a computing component in the computing component set to correspond to a waiting queue length; g is a positive integer, and G is less than or equal to the total number of at least two computing components;
The second sending subunit is used for determining the successfully started computing component as a second computing component, and sending the index regeneration request to the second computing component; the component state of the second computing component is updated to the component start state;
the seventh generating unit is specifically configured to execute, in the second computing component, the first index generating code according to the index regeneration request, and generate a second index associated with the inference result and the second value.
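The scale-up variant above can be sketched in the same hypothetical data model: if the started components' average waiting queue exceeds a threshold, an idle component is started and the request is dispatched to it.

```python
def scale_up_if_busy(components, avg_queue_threshold):
    """Start an idle component and dispatch to it when the average waiting
    queue of the started components exceeds the threshold."""
    started = [c for c in components if c["state"] == "started"]
    avg = sum(c["queue_len"] for c in started) / len(started)
    if avg <= avg_queue_threshold:
        return None  # existing components can absorb the request
    for c in components:
        if c["state"] == "idle":
            c["state"] = "started"   # update the component state
            return c                 # the second computing component
    return None  # nothing idle left to start

# usage: average queue (6 + 8) / 2 = 7 exceeds the threshold of 5,
# so the idle component is started and returned
fleet = [{"state": "started", "queue_len": 6},
         {"state": "started", "queue_len": 8},
         {"state": "idle", "queue_len": 0}]
second = scale_up_if_busy(fleet, 5)
```

Together with the minimum-queue rule, this forms a simple two-branch scheduler: route to the least loaded started component when possible, otherwise start a new one.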
Wherein, the data processing device still includes:
a seventh obtaining module, configured to obtain, if the first value includes the second value, a second index corresponding to the second value from the first indexes;
and the step execution module is used for executing the step of utilizing the reasoning result based on the decoupling relation to run the first index generation code and generating the second index related to the reasoning result and the second value if the first value does not comprise the second value.
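The branch above amounts to a cache check keyed by parameter value. A minimal sketch (the cache contents and the `regenerate_index` callback are invented for illustration):

```python
# first values of the dynamic parameter -> their already-computed first indexes
index_cache = {0.8: 0.88, 0.7: 0.91}

def get_index(param_value, regenerate_index):
    # If the second value already appears among the first values, reuse the
    # stored index; otherwise run the (decoupled) index generation code.
    if param_value in index_cache:
        return index_cache[param_value]
    index_cache[param_value] = regenerate_index(param_value)
    return index_cache[param_value]

hit = get_index(0.8, lambda v: 0.0)     # cache hit: code is not rerun
miss = get_index(0.75, lambda v: 0.93)  # new value: index regenerated
```

On a hit the callback is never invoked, which is exactly why the hit branch saves computation.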
In one aspect, the application provides a computer device comprising: a processor, a memory, a network interface;
the processor is connected to the memory and the network interface, where the network interface is used to provide a data communication function, the memory is used to store a computer program, and the processor is used to call the computer program to make the computer device execute the method in the embodiment of the present application.
In one aspect, embodiments of the present application provide a computer readable storage medium having a computer program stored therein, the computer program being adapted to be loaded by a processor and to perform a method according to embodiments of the present application.
In one aspect, embodiments of the present application provide a computer program product comprising a computer program stored on a computer readable storage medium; the processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the computer device performs the method in the embodiment of the present application.
In the embodiment of the application, a decoupling relation is formed between the inference code used for generating the reasoning result of the first model for the evaluation data set and the first index generation code. Therefore, when the second value used for updating the first value is acquired, the inference code does not need to be rerun: the first index generation code can be run directly through the decoupling relation to generate a second index associated with the reasoning result and the second value. As can be seen from the above, by decoupling the inference code from the first index generation code, repeated generation of the reasoning result can be avoided when a new value of the dynamic parameter is obtained, reducing the waste of computing resources; in addition, the index can be rapidly recalculated from the already-generated reasoning result, improving the generation efficiency of the second index.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a system architecture according to an embodiment of the present application;
FIG. 2a is a schematic diagram of a scenario for data processing according to an embodiment of the present application;
FIG. 2b is a second schematic diagram of a scenario of data processing according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a data processing method according to an embodiment of the present application;
FIG. 4a is a schematic diagram showing an overall index according to an embodiment of the present application;
FIG. 4b is a diagram illustrating a line graph of an advanced index according to an embodiment of the present application;
FIG. 5 is a second flow chart of a data processing method according to an embodiment of the present application;
FIG. 6 is a third schematic view of a scenario of data processing according to an embodiment of the present application;
FIG. 7a is a flowchart illustrating a data processing method according to an embodiment of the present application;
Fig. 7b is a flowchart of a data processing method according to an embodiment of the present application;
FIG. 8 is a flowchart of a data processing method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a scenario four of data processing according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
For ease of understanding, the related concepts will first be explained.
Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, enabling the machines to perceive, reason, and make decisions.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly comprises computer vision, speech processing, natural language processing, machine learning/deep learning, autonomous driving, intelligent transportation, and other directions.
Computer Vision (CV) is the science of how to make machines "see": using cameras and computers in place of human eyes to identify and measure targets, and further performing graphics processing so that the result becomes an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, optical character recognition (OCR), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, autonomous driving, intelligent transportation, etc., as well as common biometric recognition techniques such as face recognition and fingerprint recognition. In embodiments of the present application, computer vision techniques may be applied to image processing models, including but not limited to image classification models, image recognition models, and image segmentation models.
Natural Language Processing (NLP) is an important direction in the fields of computer science and artificial intelligence. It studies various theories and methods that enable effective communication between humans and computers in natural language. Natural language processing is a science integrating linguistics, computer science, and mathematics. Research in this field therefore involves natural language, i.e., the language people use daily, so it is closely related to the study of linguistics. Natural language processing techniques typically include text processing, semantic understanding, machine translation, question answering, knowledge graph techniques, and the like. In embodiments of the present application, natural language processing may be applied to language models, including but not limited to language translation models and speech-to-text models.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It studies how computers simulate or implement human learning behaviors to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to give computers intelligence; it is applied throughout all areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction. In an embodiment of the application, the first model and the second model are AI models based on machine learning techniques.
The scheme provided by the embodiment of the application relates to artificial intelligence technologies such as natural language processing and computer vision, and is specifically described by the following embodiments.
Referring to fig. 1, fig. 1 is a schematic diagram of a system architecture according to an embodiment of the application. As shown in fig. 1, the system may include a service server 100 and a cluster of terminal devices. The terminal device cluster may include terminal devices 200a, 200b, 200c, …, and 200n. It will be appreciated that the system may include one or more terminal devices, and the present application does not limit the number of terminal devices.
Communication connections may exist within the terminal device cluster; for example, a communication connection exists between terminal device 200a and terminal device 200b, and between terminal device 200a and terminal device 200c. Meanwhile, any terminal device in the cluster may have a communication connection with the service server 100; for example, a communication connection exists between terminal device 200a and the service server 100. The connection manner is not limited: it may be a direct or indirect wired connection, a direct or indirect wireless connection, or any other manner, and the application is not limited herein.
It should be understood that each terminal device in the terminal device cluster shown in fig. 1 may be provided with an application client which, when running on its terminal device, may interact with the service server 100 shown in fig. 1 through the communication connection described above. The application client may be any client with a model evaluation function, such as a video application, a social application, an instant messaging application, a navigation application, a music application, a shopping application, an electronic map application, or a browser. The application client may be an independent client, or may be an embedded sub-client integrated in another client (for example, a social client or a travel client), which is not limited herein. Taking an electronic map application as an example, the service server 100 may be a collection of multiple servers, such as a background server and a data processing server corresponding to the electronic map application. Each terminal device may therefore exchange data with the service server 100 through the application client of the electronic map application; for example, a terminal device may send a first model evaluation request for requesting evaluation of a first model to the service server 100, whereupon the service server 100 evaluates the first model based on the request and returns the evaluation result to the terminal device.
It will be appreciated that specific embodiments of the present application involve related data such as user information (e.g., the first model and the second value). When the embodiments of the present application are applied to specific products or technologies, user permission or consent is required, and the collection, use, and processing of related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
For convenience of subsequent understanding and description, the embodiment of the present application may select one terminal device from the terminal device cluster shown in fig. 1 as the target terminal device, for example, terminal device 200a. When receiving a first model evaluation request for requesting evaluation of a first model, the terminal device 200a may transmit the first model evaluation request to the service server 100. The embodiment of the application does not limit the model type of the first model, which can be set according to the actual application scenario, including but not limited to image processing, speech processing, and text processing types. The embodiment of the application likewise does not limit the inference scenario of the first model, which can be set according to the actual application scenario, including but not limited to the image classification, image detection, and image segmentation scenarios under the image processing type.
Further, after receiving the first model evaluation request sent by the terminal device 200a, the service server 100 may acquire the first model and an evaluation dataset for evaluating the first model according to the request. The embodiment of the application does not limit how the first model or the evaluation dataset is acquired; both can be set according to the actual application scenario. The service server 100 then obtains the inference code associated with the model type of the first model, inputs both the first model and the evaluation dataset into the inference code, runs the inference code, and generates the inference result of the first model for the evaluation dataset. The service server 100 further generates a parameter list associated with the inference result. In the embodiment of the present application, the parameters for evaluating the first model need not be set manually; they are generated by the parameter generation code mentioned below. For the specific generation process of the parameter list, please refer to the description in the embodiment corresponding to fig. 3 below. The parameter list includes parameters with dynamic properties, such as the confidence threshold in a classification scenario.
Further, the service server 100 runs the first index generation code associated with the first model on the inference result, generating the first index associated with the inference result and the parameter list. In the embodiment of the application, the inference code and the first index generation code are in a decoupling relationship; in fact, any two of the inference code, the parameter generation code, the inference result processing code, and the first index generation code in the evaluation code of the embodiment of the application are in a decoupling relationship. It will be appreciated that the inference scenario of the first model belongs to an inference scenario under the model type of the first model; for example, if the model type of the first model is an image processing type, the inference scenarios under that type may include an image classification scenario, an image detection scenario, and an image segmentation scenario, and the inference scenario of the first model may be any of these.
Subsequently, the service server 100 sends the evaluation result including the first index to the terminal device 200a, and after the terminal device 200a receives the evaluation result sent by the service server 100, the evaluation result may be displayed on a screen corresponding to the terminal device.
Optionally, if the evaluation code for evaluating the first model (including the inference code and the first index generation code in a decoupling relationship) is stored locally on the terminal device 200a, the terminal device 200a may acquire the first model and the evaluation dataset when receiving an evaluation instruction for the first model, run the inference code to generate the inference result of the first model for the evaluation dataset, and then generate the parameter list associated with the inference result; further, the terminal device 200a may run the first index generation code to generate the first index associated with the inference result and the parameter list. The evaluation code held locally by the terminal device 200a may be generated, or updated, by the service server 100 and then sent to the terminal device 200a.
Further, according to a parameter update request for the first value, the service server 100 obtains a second value for updating the first value. A parameter update request is, in effect, a model evaluation request under the condition that an inference result has already been generated. Since the service server 100 has already generated the inference result according to the first model evaluation request, and the inference code and the first index generation code are in a decoupling relationship, the service server 100 may skip the inference code and directly run the first index generation code to generate the second index associated with the inference result and the second value. The first index and the second index are both used to indicate the model inference capability of the first model.
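The skip-inference behavior described above can be sketched in Python as follows. All names here (EvaluationService, recall_index, the dataset shape) are hypothetical illustrations, and a simple recall metric stands in for any index generation code; the point is only that, because the two code pieces are decoupled, a parameter update reuses the cached inference result instead of re-running inference.

```python
def run_inference(model, dataset):
    """Expensive step: run the model once over the whole evaluation dataset."""
    return [{"truth": item["truth"], "score": model(item["input"])} for item in dataset]

def recall_index(inference_result, params):
    """Index-generation code: depends only on the cached inference result
    and the (dynamic) parameters, never on the model itself."""
    threshold = params["confidence_threshold"]
    positives = [r for r in inference_result if r["truth"] == 1]
    hits = sum(1 for r in positives if r["score"] >= threshold)
    return hits / len(positives)

class EvaluationService:
    """Caches the inference result so that a parameter update skips inference."""
    def __init__(self, model, dataset):
        self._model, self._dataset = model, dataset
        self._cached_result = None

    def evaluate(self, params):
        if self._cached_result is None:                   # first evaluation request
            self._cached_result = run_inference(self._model, self._dataset)
        return recall_index(self._cached_result, params)  # decoupled index code

# A parameter-update request only re-runs the index-generation code:
dataset = [{"input": 0.9, "truth": 1},
           {"input": 0.6, "truth": 1},
           {"input": 0.2, "truth": 0}]
service = EvaluationService(lambda x: x, dataset)               # identity "model"
first_index = service.evaluate({"confidence_threshold": 0.8})   # inference runs once
second_index = service.evaluate({"confidence_threshold": 0.5})  # inference skipped
```

With the threshold at 0.8 only one of the two positive samples is recalled; lowering it to 0.5 recalls both, and the second index is computed without touching the model again.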
Subsequently, the service server 100 transmits the second index to the terminal device 200a, and the terminal device 200a may display the second index on its corresponding screen after receiving the second index transmitted by the service server 100. Similarly, if the evaluating code (including the reasoning code and the first index generating code in the decoupling relationship) for evaluating the first model is stored locally in the terminal device 200a, the terminal device 200a may obtain the second value when receiving the parameter updating instruction for the first value, and then may run the first index generating code to generate the second index associated with the reasoning result and the second value.
It should be noted that the service server 100 and the terminal devices 200a, 200b, and 200c may each be a blockchain node in a blockchain network, and the data described throughout (for example, the first model and the second value) may be stored by having a blockchain node generate a block from the data and add the block to the blockchain.
A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. It is mainly used for organizing data in chronological order and encrypting it into a ledger, so that the ledger cannot be tampered with or forged, while the data can still be verified, stored, and updated. A blockchain is essentially a decentralized database in which each node stores an identical copy of the chain; a blockchain network may distinguish nodes into core nodes, data nodes, and light nodes, which together constitute the blockchain nodes. The core nodes are responsible for the consensus of the whole blockchain network, that is, the core nodes are the consensus nodes in the network. Transaction data is written into the ledger as follows: a data node or light node in the blockchain network acquires the transaction data and relays it through the network (i.e., the nodes pass it along in a relay manner) until a consensus node receives it; the consensus node packages the transaction data into a block, performs consensus on the block, and writes the transaction data into the ledger once consensus is reached. Taking the first model and the second value as example transaction data, after consensus on the transaction data is reached, the service server 100 (a blockchain node) generates a block from the transaction data and stores the block in the blockchain network. To read the transaction data (i.e., the first model and the second value), a blockchain node may acquire the block containing the transaction data from the blockchain network and then retrieve the transaction data from the block.
It will be appreciated that the method provided by the embodiments of the present application may be performed by a computer device, including but not limited to a terminal device or a service server. The service server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud databases, cloud services, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms. Terminal devices include, but are not limited to, mobile phones, computers, intelligent voice interaction devices, smart home appliances, vehicle-mounted terminals, aircraft, and the like. The terminal device and the service server may be directly or indirectly connected in a wired or wireless manner, which is not limited in the embodiment of the present application.
Further, referring to fig. 2a, fig. 2a is a schematic view of a data processing scenario provided in an embodiment of the present application. The embodiment of the application can be applied to service scenarios concerning a model, such as model evaluation, model index viewing, and model parameter changing; specific service scenarios are not listed one by one. The data processing scenario may be carried out on a service server, on a terminal device, or interactively between the terminal device and the service server, which is not limited herein. For convenience of description and understanding, the embodiments of the present application are described using the interaction between a terminal device and a service server as an example, where the terminal device may be any one of the terminal device cluster in the embodiment corresponding to fig. 1, and the service server may be the service server 100 in the embodiment corresponding to fig. 1.
As shown in fig. 2a, the terminal device 20a sends a first model evaluation request to the service server 100, and the service server 100 may acquire the first model 20b according to the first model evaluation request and determine a model type and an inference scenario of the first model 20b. It should be emphasized that the embodiment of the present application does not limit the model type and the inference scenario of the first model 20b, and the model that needs to be evaluated in practical application may be used as the first model 20b. For ease of description and understanding, fig. 2a illustrates the model type of the first model 20b as an image processing type, and the inference scenario of the first model 20b as a general detection scenario. Further, the service server 100 obtains an evaluation dataset 20c for evaluating the first model 20b, wherein the evaluation dataset 20c may be specified by the terminal device 20a, or may be obtained according to the model type and the inference scenario of the first model 20b.
Further, the service server 100 obtains the inference code 201d associated with the model type of the first model 20b (the image processing type, as exemplified in fig. 2a); in fig. 2a, the inference code 201d may be a general inference code for the image processing type. The service server 100 runs the inference code 201d and generates the inference result of the first model 20b for the evaluation dataset 20c. As illustrated in fig. 2a, the inference result 201e may include two class labels, apple and orange, with an apple-class Intersection over Union (IoU) of 0.8, an apple-class confidence of 0.7, an orange-class IoU of 0.7, and an orange-class confidence of 0.8.
Further, the service server 100 obtains the first parameter generation code 202d associated with the inference scenario of the first model 20b (the general detection scenario illustrated in fig. 2a) and runs it, whereby the service server 100 may generate the parameter list associated with the inference result 201e. As illustrated in fig. 2a, the parameter list 202e may include the two class labels apple and orange and two kinds of parameters, an IoU threshold and a confidence threshold: the apple-class IoU threshold is 0.8, the apple-class confidence threshold is 0.8, the orange-class IoU threshold is 0.9, and the orange-class confidence threshold is 0.9. It should be understood that the inference result 201e and the parameter list 202e in fig. 2a are examples for convenience of description and understanding of the embodiment of the present application, and are not representative of inference results and parameter lists in actual application scenarios.
It will be understood that the confidence threshold and the IoU threshold are both variable, so the model evaluator corresponding to the first model 20b can update the four parameters in the parameter list 202e (i.e. the apple-class IoU threshold, the apple-class confidence threshold, the orange-class IoU threshold, and the orange-class confidence threshold, respectively 0.8, 0.8, 0.9, and 0.9). It should be emphasized that this example treats the IoU threshold as a kind of parameter, and an IoU threshold carrying a class label as one parameter; for example, the apple-class IoU threshold of 0.8 is one parameter. The confidence threshold is understood in the same way.
Further, the service server 100 obtains the first index generation code 203d associated with the inference scenario of the first model 20b (the general detection scenario illustrated in fig. 2a) and runs it, whereby the service server 100 may generate the first index associated with the inference result 201e and the parameter list 202e. In the embodiment of the present application, any two of the inference code 201d, the first parameter generation code 202d, and the first index generation code 203d are in a decoupling relationship. The service server 100 returns the first index to the terminal device 20a; the manner in which the terminal device 20a displays the first index is described in step S105 in the embodiment corresponding to fig. 3 below and is not expanded here.
If the model evaluator corresponding to the first model 20b wants a fuller picture of the model inference capability of the first model 20b, the dynamic parameters in the parameter list 202e can be updated. Referring to fig. 2a together with fig. 2b, fig. 2b is a schematic diagram of a data processing scenario provided in an embodiment of the present application. As shown in fig. 2b, the model evaluator corresponding to the first model 20b sends a parameter update request for the first value to the service server 100 through the terminal device 20a, so the service server 100 can obtain the second value for updating the first value according to the parameter update request. For ease of description and understanding, fig. 2b illustrates the first value with the orange-class confidence threshold of 0.8 in the parameter list 202e, and the second value 20f with an orange-class confidence threshold of 0.9. It can be understood that the specific content of the first value and the second value may be set according to the requirements of the model evaluator in the actual application scenario, which is not limited in the embodiment of the present application.
Since the inference code 201d and the first index generation code 203d in the embodiment of the present application are in a decoupling relationship, and the service server 100 has already generated the inference result 201e of the first model 20b for the evaluation dataset 20c, the service server 100 may, after obtaining the second value 20f, directly input the second value 20f and the inference result 201e into the first index generation code 203d. As shown in fig. 2b, after running the first index generation code with the second value 20f and the inference result 201e as inputs, the service server 100 may generate the second index and return it to the terminal device 20a. It will be appreciated that multiple sets of parameter-index pairs let the model evaluator better understand the model inference capability of the first model 20b.
As can be seen from the above, by decoupling the inference code from the first index generation code, the embodiment of the present application avoids regenerating the inference result whenever a new value of a dynamic parameter is obtained, reducing the waste of computing resources; moreover, the index can be quickly recalculated from the already-generated inference result, improving the generation efficiency of the second index.
Further, referring to fig. 3, fig. 3 is a flowchart illustrating a data processing method according to an embodiment of the application. The embodiment of the application can be applied to various scenes, including but not limited to cloud technology, artificial intelligence, intelligent transportation, auxiliary driving and the like. The data processing method may be performed by a service server (e.g., the service server 100 shown in fig. 1 described above), or may be performed by a terminal device (e.g., the terminal device 200a shown in fig. 1 described above), or may be performed interactively by the service server and the terminal device. For easy understanding, the embodiment of the present application is described as an example in which the method is executed by a service server. As shown in fig. 3, the data processing method may include at least the following steps S101 to S105.
Step S101, a first model evaluation request for requesting to evaluate a first model is obtained, the first model is obtained according to the first model evaluation request, and an evaluation dataset for evaluating the first model is obtained.
Specifically, the embodiment of the application does not limit the model type or the inference scenario of the first model; it can be any type of model in any inference scenario. Inference scenarios include, but are not limited to, the image classification, image detection, and image segmentation scenarios under the image processing type. The evaluation dataset is used to evaluate the first model, so the embodiment of the application does not limit its content, which is determined by the model actually being evaluated. The evaluation dataset comprises an evaluation sample set, e.g. an image sample set for an image processing model, a speech sample set for a speech processing model, or a text sample set for a text processing model; the evaluation dataset also comprises an evaluation label set, i.e. the annotated values corresponding to the evaluation sample set, in other words the correct results.
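The sample-set/label-set pairing just described can be captured in a minimal sketch; the class name and field names below are assumptions, not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class EvaluationDataset:
    """An evaluation sample set plus the matching evaluation label set,
    i.e. the annotated correct results."""
    samples: list   # image, speech, or text samples, depending on model type
    labels: list    # the annotated correct result for each sample

    def __post_init__(self):
        # Every evaluation sample must have exactly one annotated label.
        if len(self.samples) != len(self.labels):
            raise ValueError("each evaluation sample needs exactly one label")

dataset = EvaluationDataset(samples=["img_001.png", "img_002.png"],
                            labels=["apple", "orange"])
```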
Step S102, running the inference code associated with the model type of the first model, generating the inference result of the first model for the evaluation dataset, and generating the parameter list associated with the inference result; the parameter list includes the first value of the dynamic parameter.
Specifically, a first parameter generation code associated with an inference scene of a first model is acquired; if the first model evaluation request does not carry the newly added parameter name, the reasoning result and the evaluation data set are used as first input parameters of a first parameter generation code, and the first parameter generation code containing the first input parameters is operated to generate a parameter list; if the first model evaluation request carries the newly added parameter name, adding the newly added parameter name into the first parameter generation code to obtain a second parameter generation code aiming at the first model; and taking the reasoning result and the evaluation data set as second input parameters of the second parameter generation code, and running the second parameter generation code containing the second input parameters to generate a parameter list.
The complete set of evaluation codes may include an inference code for generating an inference result, a parameter generation code for generating a parameter list, an inference result processing code for processing the inference result to generate attribute information and an evaluation conclusion, and an index generation code for generating an index. In the embodiment of the application, any two codes in the whole set of evaluation codes are in a decoupling relationship.
When evaluating the first model, the service server firstly determines the model type and the inference scene of the first model, for example, if the model type of the first model is an image processing type, the service server acquires a general inference code corresponding to the image processing type, takes the first model and the evaluation dataset as input parameters of the inference code, and operates the inference code containing the input parameters to obtain an inference result of the first model aiming at the evaluation dataset.
It can be appreciated that the same parameters may exist across different inference scenarios; for example, a generic detection scenario includes an IoU threshold and a confidence threshold, and a helmet detection scenario also includes an IoU threshold and a confidence threshold. Therefore, in the development stage, the embodiment of the present application may generate a first parameter generation code per model type. If the first model evaluation request does not carry a new parameter name (which can be understood as a parameter item), the service server directly runs the first parameter generation code associated with the inference scenario of the first model to generate the parameter list. If the first model evaluation request carries a new parameter name, for example when the inference scenario of the first model is a helmet detection scenario and the model evaluator wishes to add the evaluation parameter "helmet-human head coverage threshold" in order to adjust the decision threshold for whether a helmet is correctly worn, the service server needs to add the parameter "helmet-human head coverage threshold" to the first parameter generation code before evaluating the first model, obtaining a second parameter generation code for the first model. The subsequent process is the same as above and is not repeated; note that in this case the generic IoU threshold and confidence threshold are generated, and the "helmet-human head coverage threshold" is generated as well.
The parameter generation code (including the first parameter generation code) in the embodiment of the application defines how to generate the parameter list according to the inference result. For example, the parameter generation code for the generic detection scenario reads the classes supported by the model and the classes in the dataset labels, takes their intersection, and then generates a confidence threshold and an IoU threshold for each class. In other words, the kind of parameter (i.e. the parameter item) in the parameter list is related to the inference result; the value may be a default, or alternatively the value of a parameter may itself be derived from the inference result, i.e. not a default.
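The generic-detection rule just described (intersect the model's classes with the dataset label classes, then emit one confidence threshold and one IoU threshold per class) can be sketched as follows. The default values and all names are assumptions for illustration; the text leaves the defaults open:

```python
DEFAULT_CONFIDENCE_THRESHOLD = 0.5   # assumed default, not specified in the text
DEFAULT_IOU_THRESHOLD = 0.5          # assumed default, not specified in the text

def generate_parameter_list(model_classes, dataset_label_classes):
    """Generic-detection parameter generation: take the intersection of the
    classes the model supports and the classes in the dataset labels, then
    generate a confidence threshold and an IoU threshold for each class."""
    classes = sorted(set(model_classes) & set(dataset_label_classes))
    parameter_list = []
    for cls in classes:
        parameter_list.append({"class": cls, "name": "confidence_threshold",
                               "value": DEFAULT_CONFIDENCE_THRESHOLD})
        parameter_list.append({"class": cls, "name": "iou_threshold",
                               "value": DEFAULT_IOU_THRESHOLD})
    return parameter_list

# The model supports three classes but the labels only cover two, so only
# the intersection {apple, orange} produces parameters: 2 classes x 2 kinds.
params = generate_parameter_list(model_classes=["apple", "orange", "pear"],
                                 dataset_label_classes=["apple", "orange"])
```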
The embodiment of the application mainly describes dynamic parameters: parameters that are presented at the front end and whose changes affect the calculation of dynamic indexes. If a dynamic index is needed, generation code for the dynamic parameters must be provided; the code provides subclasses implementing the corresponding base class, and the corresponding function takes the inference result and the meta-information of the inference (including model information, dataset label information, and the like) as input and outputs the list of dynamic parameters.
Step S103, using the inference result, running the first index generation code associated with the first model to generate the first index associated with the inference result and the parameter list; the inference code and the first index generation code are in a decoupling relationship; the inference scenario of the first model belongs to an inference scenario under the model type of the first model.
Specifically, a first index generation code associated with the first model is generated; taking the reasoning result and the parameter list as a third input parameter of the first index generation code, and running the first index generation code containing the third input parameter to generate a prediction result; and determining a result error between the predicted result and a correct result in the evaluation data set, and determining a first index according to the result error.
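The three sub-steps above (third input parameters in, prediction result out, index from the result error) can be sketched as one function. Everything here is a hypothetical illustration: a classification-style score is filtered by the per-class confidence thresholds from the parameter list, the prediction result is compared against the correct results, and a simple accuracy stands in for whatever index the first index generation code computes:

```python
def first_index_code(inference_result, parameter_list, correct_results):
    """Third input parameters -> prediction result -> result error -> index."""
    thresholds = {p["class"]: p["value"] for p in parameter_list
                  if p["name"] == "confidence_threshold"}
    # Step 1: turn raw scores into a prediction result using the thresholds.
    predictions = [r["class"] if r["score"] >= thresholds[r["class"]] else None
                   for r in inference_result]
    # Step 2: result error = number of predictions disagreeing with the
    # correct results in the evaluation dataset.
    errors = sum(1 for pred, truth in zip(predictions, correct_results)
                 if pred != truth)
    # Step 3: determine the index from the result error (here: accuracy).
    return 1 - errors / len(correct_results)

# The orange score 0.4 falls below its 0.5 threshold, so that prediction is
# suppressed and counts as one result error out of two samples.
index = first_index_code(
    inference_result=[{"class": "apple", "score": 0.9},
                      {"class": "orange", "score": 0.4}],
    parameter_list=[{"class": "apple", "name": "confidence_threshold", "value": 0.5},
                    {"class": "orange", "name": "confidence_threshold", "value": 0.5}],
    correct_results=["apple", "orange"])
```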
Wherein the specific process of generating the first metric generation code associated with the first model may include: acquiring an initial index generation code associated with an inference scene of the first model from the index generation code set; if the first model evaluation request does not carry the new index name, determining the initial index generation code as a first index generation code; if the first model evaluation request carries the new index name, generating a new index generation code aiming at the new index name, and determining the new index generation code and the initial index generation code as the first index generation code.
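The selection logic just described can be sketched as a lookup over an index-generation code set, with newly generated codes merged in when the request carries new index names. The registry layout, scene keys, and placeholder metric bodies below are all assumptions:

```python
def _recall(result):        # placeholder metric bodies for the sketch
    return result.get("recall", 0.0)

def _precision(result):
    return result.get("precision", 0.0)

# Index-generation code set: initial codes keyed by inference scene.
INDEX_CODE_SET = {
    "generic_detection": {"recall": _recall, "precision": _precision},
    "helmet_detection": {"recall": _recall},
}

def build_first_index_codes(scene, new_index_codes=None):
    """Take the initial index-generation codes for the scene; if the
    evaluation request carries new index names, merge their generated
    codes in as well."""
    codes = dict(INDEX_CODE_SET[scene])
    if new_index_codes:
        codes.update(new_index_codes)
    return codes

# A helmet-detection request that additionally asks for a "model_reliability"
# index (hypothetical: one minus an assumed miss rate):
codes = build_first_index_codes(
    "helmet_detection",
    new_index_codes={"model_reliability": lambda r: 1 - r.get("miss_rate", 0.0)})
```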
Step S103 may further include: acquiring an index integral display type, and generating an integral index corresponding to the first index through the index integral display type; the overall index comprises an index name corresponding to the first index and an index value corresponding to the first index; acquiring an index advanced display type, and generating an advanced index corresponding to the first index through the index advanced display type; the advanced index is used for indicating the index displayed in a chart mode, and comprises index names corresponding to each evaluation type in the evaluation data set and index values corresponding to each evaluation type; the index name corresponding to each evaluation type and the index value corresponding to each evaluation type belong to the first index; and carrying out association storage on the integral index and the advanced index.
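The overall/advanced split and their associated storage can be sketched as follows; the function name, the "table" rendering kind, and the payload shape are assumptions for illustration only:

```python
def build_display_indexes(index_name, overall_value, per_type_values):
    """Split a first index into its overall form (index name + index value)
    and its advanced form (index name and value per evaluation type, shown
    as a chart), then store the two in association."""
    overall_index = {"name": index_name, "value": overall_value}
    advanced_index = {"kind": "table",    # chart kind the front end renders
                      "rows": [{"evaluation_type": t, index_name: v}
                               for t, v in per_type_values.items()]}
    # Associated storage: both forms kept under one record.
    return {"overall": overall_index, "advanced": advanced_index}

stored = build_display_indexes("recall", 0.4367,
                               {"apple": 0.52, "orange": 0.35})
```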
The indexes in the embodiment of the present application (including the first index and the second index described below) may include static indexes and dynamic indexes. A static index is an index that no longer supports modification after model evaluation is completed; for example, the model's performance is not modified by the model evaluator's interaction during presentation, or the model evaluator only cares about the index under one particular confidence/threshold. A dynamic index is an index whose value changes correspondingly when the model evaluator changes a dynamic parameter; for example, the model evaluator modifies the confidence threshold of a certain class, thereby affecting the overall recall. The embodiment of the application therefore generates code segments for the functions of static indexes and dynamic indexes and implements the corresponding base classes.
It will be appreciated that the same index may exist across different inference scenarios; for example, a generic detection scenario may include recall and Precision-Recall (PR) curves, and a helmet detection scenario may also include recall. Therefore, in the development stage, the embodiment of the application can generate an initial index generation code per model type. If the first model evaluation request does not carry a new index name, the service server directly runs the initial index generation code associated with the inference scenario of the first model to generate the first index. If the first model evaluation request carries a new index name, for example when the inference scenario of the first model is a helmet detection scenario and the model evaluator wishes to add a "model reliability" index reflecting the miss rate for helmets, the service server needs to generate index calculation code for "model reliability" before evaluating the first model, and the index calculation code for "model reliability" together with the initial index generation code is determined as the first index generation code. The subsequent process is the same as above and is not repeated. The embodiment of the application does not limit the total number of first index generation codes, which can be set according to the actual application scenario; since one first index generation code generates one index, the embodiment of the application likewise does not limit the total number of first indexes, which only needs to match the total number of first index generation codes.
The embodiment of the present application normalizes indexes into numerical values, charts, and tables, so the front end only needs to adapt to these three cases, and connecting a new scenario to the front end requires no additional work. Based on this, the indexes in the embodiment of the present application can be divided into two major types: overall indexes and advanced indexes, where an overall index is an index with only an index name and a corresponding numerical value. For convenience of description and understanding, it is assumed that the first model is a target detection model; please refer to fig. 4a, which is a schematic diagram showing an overall index according to an embodiment of the present application. As illustrated in fig. 4a, the first index may be expressed as 3 overall indexes: a mean average precision (mAP) of 32.54%, a recall of 43.67%, and a precision of 35.82%.
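A sketch of the three normalized presentation types and a front end that only branches on them; the field names and the chart/table contents are illustrative assumptions (the mAP value echoes fig. 4a):

```python
# The three normalized index presentations the front end must support.
overall_index = {"type": "value", "name": "mAP", "value": 0.3254}
curve_index   = {"type": "chart", "name": "PR curve", "chart": "line",
                 "data": [(0.1, 0.95), (0.5, 0.80), (0.9, 0.40)]}
table_index   = {"type": "table", "name": "per-class metrics",
                 "columns": ["Label", "Precision", "Recall"],
                 "rows": [["apple", 0.8, 0.7], ["orange", 0.7, 0.8]]}

def render(index):
    """A front end only needs one rendering branch per normalized type."""
    return {"value": lambda i: f"{i['name']}: {i['value']:.2%}",
            "chart": lambda i: f"{i['chart']} chart with {len(i['data'])} points",
            "table": lambda i: f"table {len(i['rows'])}x{len(i['columns'])}",
            }[index["type"]](index)
```

Because every index, old or new, reduces to one of these three shapes, a newly added scenario needs no front-end changes.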
One overall index may have different index values under different conditions; for example, the model evaluator may want to know the "recall" index at IoU=0.5, IoU=0.75, IoU=0.95, and IoU=0.5:0.95. Thus, each overall index (whether static or dynamic) may include a plurality of labels (Label) indicating the value of the index under different conditions. Among the Labels, the front end (such as the display screen of the terminal device) displays the index value of the first Label by default, and the model evaluator can switch Labels (that is, replace the parameter) to view the value of the index under another Label.
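As an illustration, one overall index carrying several Labels might be represented as below; the structure and all values except the 43.67% recall (from fig. 4a) are assumptions:

```python
# One overall index with one Label per evaluation condition.
recall_index = {
    "name": "recall",
    "labels": [
        {"label": "IoU=0.5",      "value": 0.4367},
        {"label": "IoU=0.75",     "value": 0.4012},
        {"label": "IoU=0.95",     "value": 0.2530},
        {"label": "IoU=0.5:0.95", "value": 0.3621},
    ],
}

def displayed_value(index, selected_label=None):
    """Return the value the front end shows: the first Label by default,
    or the one the model evaluator switched to."""
    labels = index["labels"]
    if selected_label is None:
        return labels[0]["value"]
    return next(l["value"] for l in labels if l["label"] == selected_label)
```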
An advanced index is an index shown as a graphic, for example an index shown by a table, a line chart, a scatter diagram, a histogram, or a pie chart. An advanced index can indicate the specific type of the index and all data of the corresponding type, and the front end renders each advanced index according to its type and data. For convenience of description and understanding, still taking the first model as the target detection model, please refer to table 1, which shows a schematic representation of an advanced index according to an embodiment of the present application.
TABLE 1

| Label  | IoU threshold | Confidence threshold | Precision | Recall | Average precision |
|--------|---------------|----------------------|-----------|--------|-------------------|
| Apple  | 0.8           | 0.8                  | 0.8       | 0.7    | 0.8               |
| Orange | 0.9           | 0.9                  | 0.7       | 0.8    | 0.7               |
The first column in table 1 shows that the first model includes two category labels, apple and orange. The second and third columns show that the parameter list includes four parameters: an apple IoU threshold of 0.8, an apple confidence threshold of 0.8, an orange IoU threshold of 0.9, and an orange confidence threshold of 0.9. The fourth to sixth columns show the precision, recall, and average precision (Average Precision, AP) values for apples (0.8, 0.7, and 0.8, respectively) and for oranges (0.7, 0.8, and 0.7, respectively). Compared with fig. 4a above, table 1 makes clear which index value corresponds to each category. Further, referring to fig. 4b, fig. 4b is a schematic diagram illustrating a line chart of an advanced index according to an embodiment of the present application. As illustrated in fig. 4b, the front end may show the relationship between precision and recall for the apple category, for the orange category, and for the model overall through a line chart.
Because the inference code used to generate the inference result and the first index generation code are in a decoupled relationship, when the second value used to update the first value is acquired, the embodiment of the present application does not need to rerun the inference code; it can directly run the first index generation code through the decoupling relationship to generate the second index associated with the inference result and the second value. If the inference code and the first index generation code were coupled, obtaining the second value would require running the whole set of evaluation code (comprising the inference code and the first index generation code), that is, rerunning the inference code only to obtain the same inference result, which wastes computing resources and reduces index generation efficiency.
Step S104, according to the parameter updating request aiming at the first value, acquiring a second value for updating the first value.
Specifically, if the first values include the second value, the second index corresponding to the second value is acquired from the first indexes; if the first values do not include the second value, the step of running the first index generation code based on the decoupling relationship using the inference result is executed to generate the second index associated with the inference result and the second value.
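The branch of step S104 can be sketched as follows; `regenerate_index` here is a hypothetical stand-in for running the first index generation code through the decoupling relationship:

```python
def handle_parameter_update(first_values, first_indexes, second_value,
                            regenerate_index):
    """Reuse a precomputed index when the new parameter value was already
    evaluated; otherwise rerun only the index generation code.  The
    inference result is never regenerated in either branch."""
    if second_value in first_values:
        # The index for this value already exists among the first indexes.
        return first_indexes[second_value]
    return regenerate_index(second_value)
```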
The model evaluator can see the first index and the parameter list through the front end. After seeing them, the evaluator may want to adjust certain parameters and view the evaluation indexes under the new parameters; for example, in a detection scenario, the model evaluator may need to adjust the IoU threshold and the classification confidence threshold.
The embodiment of the present application can calculate the indexes corresponding to common parameter values in advance. For example, in a detection scenario, a model evaluator usually cares about the model at IoU thresholds of 0.5, 0.75, and 0.95, and at all values from 0.5 to 0.95 in steps of 0.05; therefore a plurality of mutually different parameter lists are generated in step S102, for example a first parameter list in which the IoU threshold is 0.5 and a second parameter list in which the IoU threshold is 0.75. Accordingly, there may be a plurality of first indexes in step S103, for example a first index 1 generated when the IoU threshold is 0.5 and a first index 2 generated when the IoU threshold is 0.75. In this way, common values can be built into the parameter generation code, the corresponding indexes are calculated when the evaluation task runs and transmitted to the front end when the model evaluator views them, and the model evaluator can switch directly among the common parameter values without recalculating the indexes.
As can be seen from the above, when the model evaluator adjusts a parameter, the service server may determine whether the index for the adjusted parameter has already been generated; if so, it returns the index corresponding to the parameter to the front end, and if not, it executes the following step S105.
Step S105, based on the decoupling relation, using the reasoning result to run the first index generation code to generate a second index related to the reasoning result and the second value; the first index and the second index are used for indicating the model reasoning capacity of the first model.
Specifically, the number of first index generation codes is A, where A is a positive integer; the A first index generation codes include a first index generation code B_c, where c is a positive integer and c ≤ A. The number of first indexes is also A; the A first indexes include a first index E_c generated by running the first index generation code B_c. An influence parameter name list D_c of the first index generation code B_c is acquired; the influence parameter name list D_c includes the parameter names used for generating the first index E_c. The parameter names in the influence parameter name list D_c are matched with the second value; if a parameter name matching the second value exists among the parameter names included in the influence parameter name list D_c, the inference result and the second value are taken, through the decoupling relationship, as a fifth input parameter of the first index generation code B_c. The first index generation code B_c containing the fifth input parameter is run to generate a prediction result F_c, and the prediction result F_c is compared with the correct results in the evaluation dataset to obtain the second index.
Because the inference code that generates the inference result and the index generation code that generates the index are in a decoupled relationship, and the inference result has already been generated in step S102, when the second value is obtained and the index is recalculated, the service server can skip the inference process and directly run the first index generation code on the generated inference result and the second value to obtain the second index.
Furthermore, not every parameter change requires the service server to recalculate all indexes; for example, a model evaluator changing a confidence threshold does not affect the accuracy of the detection boxes. Therefore, the embodiment of the present application can attach an influence parameter list to each index generation code: an index needs to be recalculated only when a parameter in its influence parameter list changes, and the list can be generated dynamically.
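A minimal sketch of the influence-parameter-list check; the mapping structure and the index/parameter names are illustrative assumptions:

```python
def indexes_to_recompute(index_codes, changed_params):
    """Select only the index generation codes whose influence parameter
    list intersects the set of changed parameter names.  `index_codes`
    maps an index name to its influence parameter name list (as a set)."""
    return [name for name, influences in index_codes.items()
            if influences & changed_params]
```

With this, changing only the confidence threshold skips any index (such as a box-accuracy index) whose influence list contains only the IoU threshold.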
In the embodiment of the application, the decoupling relation is formed between the reasoning code used for generating the reasoning result of the first model aiming at the evaluation data set and the first index generation code, so that when the second value used for updating the first value is acquired, the embodiment of the application does not need to rerun the reasoning code, and can directly run the first index generation code through the decoupling relation to generate the second index related to the reasoning result and the second value. As can be seen from the above, by performing decoupling processing on the inference code and the first index generation code, repeated generation of the inference result can be avoided when a new value of the dynamic parameter is obtained, so that waste of computing resources can be reduced; in addition, the index can be rapidly recalculated according to the generated reasoning result, so that the generation efficiency of the second index can be improved.
Referring to fig. 5, fig. 5 is a second flowchart of a data processing method according to an embodiment of the present application. The method may be performed by a service server (e.g., the service server 100 shown in fig. 1 and described above), by a terminal device (e.g., the terminal device 200a shown in fig. 1 and described above), or by both the service server and the terminal device. For easy understanding, the embodiment of the present application is described as an example in which the method is executed by a service server. As shown in fig. 5, the method may include at least the following steps.
Step S201, a first model evaluation request for requesting to evaluate the first model is acquired, the first model is acquired according to the first model evaluation request, and an evaluation dataset for evaluating the first model is acquired.
In the specific implementation process of step S201, please refer to step S101 in the embodiment corresponding to fig. 3, and details are not described here.
Step S202, running an inference code associated with the model type of the first model, generating an inference result of the first model aiming at the evaluation data set, and generating a parameter list associated with the inference result; the parameter list includes a first value of the dynamic parameter.
The embodiment of the present application can run the evaluation task in a container cluster management system (Kubernetes, K8s for short). After the service server mounts the first model, the inference algorithm, and the evaluation dataset and starts the computing component (hereinafter also referred to as Pod), the service server runs the inference code (which may also be understood as the inference algorithm) to obtain the output of the first model on the evaluation dataset, that is, the inference result.
Further, according to the inference result, the service server runs the parameter generation code to generate the parameter list, which includes, for each parameter, the parameter name, the default value, the value range, and a description.
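An illustrative parameter list entry following the four attributes above, plus a hypothetical `validate` helper (not part of the described method) that checks user-supplied values against the declared ranges:

```python
# Field names mirror the four attributes: name, default, range, description.
parameter_list = [
    {
        "name": "iou_threshold",
        "default": 0.5,
        "range": (0.0, 1.0),
        "description": "IoU threshold for matching detections to GT boxes",
    },
    {
        "name": "confidence_threshold",
        "default": 0.5,
        "range": (0.0, 1.0),
        "description": "minimum classification confidence for a detection",
    },
]

def validate(params, parameter_list):
    """Reject values outside a parameter's declared range."""
    spec = {p["name"]: p for p in parameter_list}
    for name, value in params.items():
        lo, hi = spec[name]["range"]
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    return True
```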
Step S203, obtaining an inference result processing code associated with the inference scene of the first model, and taking the inference result and the parameter list as a fourth input parameter of the inference result processing code.
And step S204, running an inference result processing code containing the fourth input parameter, and generating attribute information corresponding to the inference result and an inference conclusion corresponding to the inference result.
Specifically, as described in connection with step S203 and step S204, according to the inference result and the parameter list, the service server may run the inference result processing code to assign attribute information and an inference conclusion to the inference result. In the helmet detection scenario, the attribute information (which may be understood as a Tag) may be the correct result for the helmets in a picture, that is, the ground truth (GT), or the detected number, missed number, and falsely detected number for a helmet picture, or the detected helmet color; the inference conclusion marks whether the prediction result is a bad case.
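The Tag assignment and bad-case judgment for one helmet-detection inference result might look like the following; the field names and the bad-case rule are assumptions for illustration:

```python
def tag_inference_result(result):
    """Attach Tags (attribute information) and an inference conclusion
    to one inference result containing GT and detected counts."""
    gt, detected = result["gt_count"], result["detected_count"]
    tags = {
        "gt_count": gt,
        "detected_count": detected,
        "missed_count": max(gt - detected, 0),
        "false_count": max(detected - gt, 0),
    }
    # Conclusion: the result is a bad case if anything was missed or
    # falsely detected (illustrative rule).
    conclusion = {"bad_case": tags["missed_count"] > 0 or tags["false_count"] > 0}
    return tags, conclusion
```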
The inference results of the evaluation are displayed and filtered using distributed, task-independent indexes, with a search function and a custom numerical comparison function added, optimizing both the query time and the storage cost; see in particular the description of step S207 below.
Step S205, using the reasoning result, running a first index generation code associated with the first model to generate a first index associated with the reasoning result and the parameter list; the reasoning codes and the first index generation codes are in decoupling relation; the inference scene of the first model belongs to the inference scene under the model type of the first model.
Specifically, according to the reasoning result and the parameter list, the service server operates each index generation code to generate a corresponding index. For the specific implementation process of step S205, please refer to step S103 in the embodiment corresponding to fig. 3, which is not described herein.
Step S206, determining the reasoning conclusion, the first index and the prediction result as evaluation results; the prediction result is generated based on the inference result and the parameter list.
Step S207, establishing an index relation between the attribute information and the evaluation result, and storing the attribute information and the evaluation result with the index relation in an evaluation task file corresponding to the first model; the evaluation task file is used for returning a target evaluation result associated with key query information carried by the evaluation result query request in the evaluation result when the evaluation result query request aiming at the first model is obtained.
Specifically, acquiring an evaluation result query request aiming at a first model; the evaluating result query request carries key query information; according to the evaluating result query request, matching the key query information with the attribute information; and if the target attribute information in the attribute information is matched with the key query information, acquiring a target evaluation result with an index relation with the target attribute information from the evaluation results.
The results of an individual evaluation task are analyzed and, at most, rewritten during index recalculation; in most cases the data is read-only, and the results of different evaluation tasks do not affect each other. Therefore, in the embodiment of the present application, all the results of each task are stored as one database file; this file is read through the index whenever the task's results are accessed, and rewritten whenever the index is recalculated. That is, one task corresponds to one file and tasks do not affect each other, which is convenient to manage. This avoids both the performance degradation caused by a large number of tasks and the impact on database performance when tasks are deleted; task results can also be stored in other file systems, such as the Hadoop Distributed File System (HDFS for short). This optimizes the query time to about 200 milliseconds and reduces the storage cost by a factor of 3 to 5 compared with centralized database storage.
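As one possible embodiment of "one task, one database file" (the document does not name a concrete database; SQLite and the schema below are assumptions), a per-task store with a Tag index could be opened like this:

```python
import sqlite3

def open_task_store(path):
    """One evaluation task maps to one SQLite file: results are mostly
    read-only and tasks share no state, so deleting a task is just
    deleting its file, and tasks cannot degrade each other's queries."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS results (
                    id INTEGER PRIMARY KEY,
                    tag_name TEXT, tag_value TEXT, payload TEXT)""")
    # Index on Tags so screening/filtering/paging reads only the rows
    # of interest instead of scanning all inference results.
    db.execute("CREATE INDEX IF NOT EXISTS idx_tag ON results (tag_name, tag_value)")
    return db
```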
After the evaluation is finished, the model evaluator hopes to check all inference results and perform operations such as bad-case screening, and hopes to have different screening operations for different scenarios — for example, classification information in classification scenarios, word errors in speech-to-text scenarios, the number of pause errors in text-to-speech scenarios, detection-box coverage in industrial scenarios, and the generally applicable inference time.
The embodiment of the present application supports each inference result containing a plurality of Tags (namely, attribute information), each Tag having a Tag name and a Tag value. The Tag information is stored in association with the inference result, so all results can be read each time and screening operations can be provided. The service server can build an index on the Tags, so that when screening, filtering, and paging it only needs to fetch the results of interest under the current conditions.
The present application gives the screenable conditions to the algorithm developer, who completes the process by adding Tags to the inference results. Assuming the first model is a helmet detection model, when the model evaluator browses the results, all Tags can be screened by string matching (selection), string searching, and numerical comparison — for example: pictures with a false detection count greater than 1, pictures with a missed detection count equal to 0, and pictures whose GT helmet color is yellow but whose predicted color is red. Referring to fig. 6, fig. 6 is a schematic diagram of a third scenario of data processing according to an embodiment of the present application. As shown in fig. 6, the model evaluator 601a views the evaluation result through the terminal device 60a, which may display the information screening page 60b. If the model evaluator 601a chooses to view the evaluation results whose correct result is a cat and whose predicted result is a dog, the terminal device 60a may display the evaluation result 601b, which may include a picture in which one object is a cat, together with the screening information (belonging to the attribute information), that is, the correct result is a cat and the predicted result is a dog. The embodiment of the present application can change the screening information; as shown in fig. 6, the information screening page 60b can be updated to an information screening page 60c, which may include selection of the number of falsely detected persons and the number of missed dogs. The model evaluator 601a may type in the numbers or select them.
If the model evaluator 601a sets the number of falsely detected persons to 1 and the number of missed dogs to less than 5, the terminal device 60a may display an evaluation result 601c, which may include a picture in which the objects are a dog and a person, together with the matching information (belonging to the attribute information), that is, the number of falsely detected persons is 1 and the number of missed dogs is 1.
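The three screening modes described above (string matching, string searching, and numerical comparison on Tag values) can be sketched as one filter function; the operator names and result structure are illustrative:

```python
def filter_results(results, tag, op, value):
    """Screen inference results by one Tag using the three supported
    modes: exact match, substring search, numerical comparison."""
    ops = {
        "eq":       lambda v: v == value,          # string matching
        "contains": lambda v: value in str(v),     # string searching
        "gt":       lambda v: float(v) > value,    # numerical comparison
    }
    pred = ops[op]
    return [r for r in results if tag in r["tags"] and pred(r["tags"][tag])]
```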
It will be appreciated that the interfaces and controls illustrated in fig. 6 are merely some representations that may be referred to, and in an actual business scenario, a developer may perform related design according to product requirements, and the embodiments of the present application are not limited to the specific forms of interfaces and controls involved.
The presentation of task results (indexes and inference results) is not limited to a single task; the results of a plurality of tasks can also be compared, so that the effects of a plurality of models are evaluated.
In the above manner, the screening conditions required for displaying evaluation results are abstracted into Tags, indexes are added to the Tags, and a distributed, task-independent indexing mechanism is realized, so that custom screening conditions are supported and the filtering and query time of each task is optimized to about 200 milliseconds.
In addition, the Tag and other information of the inference result may be stored not only in a relational database but also in a non-relational database, a file, cloud storage, a blockchain, and the like. Inference result screening supports not only Tag-based screening but also screening on the inference result and the labeled result, as well as custom sorting conditions based on Tag values or on the inference result and the labeled result.
For the above steps S201 to S207, reference may also be made to fig. 7a, which is a schematic flow chart of a data processing method according to an embodiment of the present application. As shown in fig. 7a, the data processing method includes the following steps: step S2011, the service server runs the inference code; step S2012, the service server stores the inference result; step S2013, the service server runs the parameter generation code; step S2014, the service server stores the parameter list; step S2015, the service server runs the inference result processing code; step S2016, the service server adds the attribute information and the inference conclusion; step S2017, the service server runs the index generation code; step S2018, the service server stores the first index.
Step S208, according to the parameter updating request for the first value, a second value for updating the first value is obtained.
Specifically, taking the first model as a helmet detection model, the parameter list includes a helmet-to-head coverage threshold; the model evaluator may want to enlarge this threshold so that the first model does not detect a helmet hung on a person's neck as a correctly worn helmet.
Step S209, using the reasoning result based on the decoupling relation, running a first index generation code to generate a second index associated with the reasoning result and a second value; the first index and the second index are used for indicating the model reasoning capacity of the first model.
Specifically, the parameter name of the second value is matched with the inference result names of the inference results through the decoupling relationship; the local inference result names matching the parameter name of the second value are obtained from the inference result names, and the local inference results indicated by those names are obtained from the inference results; the local inference results and the second value are then used as a sixth input parameter of the first index generation code, and the first index generation code containing the sixth input parameter is run to generate the second index.
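The name matching above could be sketched as follows; the naming convention (a per-class parameter name containing the class name of the inference result) is an assumption for illustration:

```python
def local_results(inference_results, param_name):
    """Return only the inference results whose name matches the changed
    parameter, e.g. 'cat_confidence_threshold' touches only the 'cat'
    results, so the rest need not be loaded or recomputed."""
    return {name: r for name, r in inference_results.items()
            if name in param_name}
```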
Fig. 3 above describes that not every parameter change requires the service server to recalculate all indexes; this step describes that not every parameter change requires reading all inference results. For example, if the model evaluator changes the confidence threshold of one class, the inference results that do not include that class are unaffected. In addition, the embodiment of the present application can store the results of interest as Tags in a database; when a parameter change affects only part of the inference results, the database can be used directly for filtering and updating. In tests, the indexed operation is 5 to 100 times faster than scanning all inference results.
The service server determines the second value, loads the inference result of the corresponding task, reassigns Tags to the inference result, and judges bad cases; it then loads the corresponding index generation codes and recalculates the indexes to be updated according to the loaded inference result. Fig. 7b is a schematic flow chart of a data processing method according to an embodiment of the present application. As shown in fig. 7b, the data processing method may include the following steps: step S2091, the service server determines the second value for updating the first value; step S2092, the service server counts the queue lengths of the resident computing components and allocates the index regeneration request; step S2093, the service server runs the inference result processing code according to the second value and the inference result — because the embodiment of the present application decouples the inference code from the index generation code and saves the inference results, the service server can regenerate the indexes directly after each parameter change; step S2094, the service server adds the new attribute information and the new inference conclusion; step S2095, the service server runs the index generation code of the indexes to be changed, reading the inference result or querying the database to obtain the new attribute information; step S2096, the service server stores the second index. It can be understood that this method accelerates through the index by avoiding re-inference, avoiding starting Pods, and avoiding scanning all inference results as much as possible, reducing the time consumed by index recalculation from the hour level to the second level, and can avoid recalculation altogether to a certain extent; this makes it possible to recalculate the indexes after the evaluation is completed according to the model evaluator's requirements during interaction. With this method, the index recalculation speed is improved by about 50 times.
In the embodiment of the application, the decoupling relation is formed between the reasoning code used for generating the reasoning result of the first model aiming at the evaluation data set and the first index generation code, so that when the second value used for updating the first value is acquired, the embodiment of the application does not need to rerun the reasoning code, and can directly run the first index generation code through the decoupling relation to generate the second index related to the reasoning result and the second value. As can be seen from the above, by performing decoupling processing on the inference code and the first index generation code, repeated generation of the inference result can be avoided when a new value of the dynamic parameter is obtained, so that waste of computing resources can be reduced; in addition, the index can be rapidly recalculated according to the generated reasoning result, so that the generation efficiency of the second index can be improved.
Further, referring to fig. 8, fig. 8 is a flowchart of a data processing method according to an embodiment of the present application. The data processing method may be performed by a service server (e.g., the service server 100 shown in fig. 1 described above), or may be performed by a terminal device (e.g., the terminal device 200a shown in fig. 1 described above), or may be performed interactively by the service server and the terminal device. For easy understanding, the embodiment of the present application is described as an example in which the method is executed by a service server. As shown in fig. 8, the data processing method may include at least the following steps S301 to S309.
Step S301, a first model evaluation request for requesting to evaluate a first model is acquired, the first model is acquired according to the first model evaluation request, and an evaluation dataset for evaluating the first model is acquired.
Step S302, running an inference code associated with the model type of the first model, generating an inference result of the first model aiming at the evaluation data set, and generating a parameter list associated with the inference result; the parameter list includes a first value of the dynamic parameter.
Step S303, using the reasoning result, running a first index generation code associated with the first model to generate a first index associated with the reasoning result and the parameter list; the reasoning codes and the first index generation codes are in decoupling relation; the inference scene of the first model belongs to the inference scene under the model type of the first model.
Step S304, according to the parameter updating request for the first value, a second value for updating the first value is obtained.
In the specific implementation process of step S302 to step S304, please refer to step S102 to step S104 in the embodiment corresponding to fig. 3, which is not described herein.
Step S305, running a first index generation code based on the decoupling relation by utilizing the reasoning result, and generating a second index related to the reasoning result and the second value; the first index and the second index are used for indicating the model reasoning capacity of the first model.
Specifically, in the main service, responding to a parameter updating request, and determining a first index generating code and an reasoning result through a decoupling relation; generating an index regeneration request according to the second value, the first index generation code and the reasoning result, and sending the index regeneration request to the affiliated service; in the affiliated service, according to the index regeneration request, a first index generation code is operated to generate a second index associated with the reasoning result and the second value.
The affiliated service includes at least two computing components. The specific process of sending the index regeneration request to the affiliated service may include: determining the component state of each of the at least two computing components, acquiring the computing components whose state is the component-started state, and generating a started computing component set from the acquired components; determining the G wait queue lengths of the computing components in the started computing component set, and acquiring the minimum wait queue length among the G wait queue lengths, where each computing component in the started set corresponds to one wait queue length, G is a positive integer, and G is less than or equal to the total number of the at least two computing components; if the minimum wait queue length is smaller than the wait queue length threshold, sending the index regeneration request to the first computing component corresponding to the minimum wait queue length, where the first computing component belongs to the started computing component set. In the affiliated service, according to the index regeneration request, the specific process of running the first index generation code to generate the second index associated with the inference result and the second value may include: in the first computing component, running the first index generation code according to the index regeneration request to generate the second index associated with the inference result and the second value.
Wherein, the affiliated service includes at least two computing components. The specific process of sending the index regeneration request to the affiliated service may include: determining the component states respectively corresponding to the at least two computing components, acquiring, among the at least two computing components, the computing components whose component state is the component start state, and generating a started computing component set from the acquired computing components; determining G waiting queue lengths of the computing components in the started computing component set, where each computing component in the started computing component set corresponds to one waiting queue length, G is a positive integer, and G is less than or equal to the total number of the at least two computing components; if the average waiting queue length corresponding to the G waiting queue lengths exceeds an average waiting queue length threshold, starting a computing component whose component state is the component idle state among the at least two computing components; determining the successfully started computing component as a second computing component, sending the index regeneration request to the second computing component, and updating the component state of the second computing component to the component start state. In the affiliated service, according to the index regeneration request, the specific process of running the first index generation code to generate the second index associated with the reasoning result and the second value may include: in the second computing component, running the first index generation code according to the index regeneration request to generate the second index associated with the reasoning result and the second value.
The evaluation task of the application can be completed by starting a k8s Job (a task type responsible for batch processing of short-lived, one-off tasks). Because the embodiment of the application does not need an inference process, the index calculation becomes a pure central processing unit (CPU) task, so only one resident service dedicated to index calculation is needed. It can be understood that completing the recalculation of the index with a unified service avoids the problem that the pod for recalculating the index cannot be started due to insufficient resources, and saves the start-up time of each pod. This is achieved by the following steps.
The first step: decoupling the service that responds to the index recalculation request from the index recalculation itself.
The service that responds to network requests (also called the main service) needs to meet response timeliness requirements. Because it involves reading index database files and loading task reasoning information, its memory space should not be limited; otherwise, the service becomes unavailable or its normal response is affected.
The embodiment of the application first makes the module implementing the recalculation independent (this independent module is also called the affiliated service, or auxiliary service), starting it as a separate pod that only receives requests from the main service, only completes the designated task path, and performs the index recalculation work. This brings two benefits: the main process of the auxiliary service limits the heap memory space, so the memory occupation of a single index recalculation process can be bounded; and by separating the main service from the auxiliary service, the influence of the index calculation code (i.e., the index generation code) on the main service is avoided, so the main service does not become unavailable due to security problems and the like.
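As an illustrative sketch (not the patent's actual implementation), a worker process of the auxiliary service could bound its own memory with a POSIX resource limit; the 500 MB figure mirrors the per-task budget assumed in the second step below, and the function name is hypothetical:

```python
import resource

def limit_worker_memory(max_bytes: int = 500 * 1024 * 1024, apply: bool = True):
    """Cap this process's address space so one index-recalculation task
    cannot exhaust the pod's memory; allocations past the cap raise MemoryError."""
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    new_limits = (max_bytes, hard)
    if apply:  # apply=False performs a dry run that only reports the limits
        resource.setrlimit(resource.RLIMIT_AS, new_limits)
    return new_limits
```

A worker would call this once at start-up, before loading any task reasoning information.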
The second step: weighing the number of affiliated-service pods against the per-pod concurrency.
Assume the service server allocates 4GB of memory for the affiliated service (the discussion for CPU resources is similar and not repeated), and assume the index recalculation process for each task is limited to 500MB of memory. The service server could directly start one pod with 4GB of memory, and that pod could support 8 simultaneous index recalculation tasks, which is very simple to manage. However, if a large number of requests arrives suddenly, the waiting queue may become very long, possibly causing an out-of-memory (OOM) condition in the pod; and if a problematic task causes the pod to exit, the service becomes directly unavailable.
Now assume 2 pods are started, each limited to 2GB of memory; the concurrency of each pod can be set to 4, that is, at most 4 tasks can run simultaneously on each pod. Similarly, the service server may start 4 pods, each limited to 1GB of memory, with a concurrency of 2 each. The fewer the pods, the simpler the management, but the higher the probability that the service becomes unavailable; the opposite holds for more pods, and multiple instances of the service itself also occupy more memory.
Therefore, the embodiment of the application always keeps the number of pods at 2 or more, at least avoiding the situation where the service is unavailable. However, keeping multiple pods may waste some memory, for example when there is no index recalculation task for a long time; this waste can be avoided by the following third step.
The third step: dynamically starting pods.
To avoid the performance waste caused by multiple pods staying idle (standby) continuously, the embodiment of the application can set a waiting queue length for each existing pod. When the average waiting queue length of all existing pods exceeds a certain threshold, capacity is expanded, that is, one pod is added; conversely, if it stays below a certain threshold for a period of time, capacity is reduced, that is, one pod is exited. When a pod is scaled down, it first enters a stopping state in which it accepts no task requests, and exits after its running tasks are completed.
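The expansion/contraction rule above can be sketched as follows; the thresholds are illustrative assumptions, while the minimum of 2 pods and maximum of 4 pods match the examples in the second and third steps:

```python
def scaling_decision(queue_lengths, expand_threshold=8.0,
                     shrink_threshold=1.0, min_pods=2, max_pods=4):
    """Return 'expand', 'shrink', or 'hold' for the affiliated-service pods,
    based on the average waiting queue length across all existing pods."""
    n = len(queue_lengths)
    avg = sum(queue_lengths) / n
    if avg > expand_threshold and n < max_pods:
        return "expand"   # add one pod
    if avg < shrink_threshold and n > min_pods:
        return "shrink"   # the shrinking pod drains its running tasks first
    return "hold"
```

In practice the shrink branch would additionally require the average to stay below the threshold for a sustained period, as described above.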
Following the example in the second step above, the service server may keep 1 pod on persistent standby with at most 1GB of memory, expand capacity when the waiting queue length is greater than the threshold, and limit the total to at most 4 such pods, so that the resource occupation does not exceed the expected 4GB.
The fourth step: load balancing among multiple pods.
With multiple pods comes a load balancing problem, and directly using a polling (round-robin) or hash algorithm cannot solve the fairness problem of request processing well. Since the main service is responsible for distributing tasks in the embodiment of the application, it can distribute them according to the load of each pod. Each pod can periodically report its waiting queue length to the database or the main service, and the main service can assign a new task to the pod with the smallest length. This can be further optimized: testing shows that the queue length is not strongly correlated with the waiting time, whereas the total amount of data to be processed is, so the queue length can be quantified by the amount of data to be processed, as shown in the following equation (1).
Q(i) = ∑_{t ∈ T(i)} L(t)    (1)
In equation (1), Q(i) denotes the amount of data to be processed by the ith pod, T(i) denotes all pending tasks of the ith pod, and L(t) denotes the size of the saved reasoning result of task t, which may be measured by the size of the dataset (in thousands of items) or by the size of the generated file (in units of 100MB).
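A minimal sketch of this dispatch rule, with hypothetical names: each pod's backlog Q(i) is the sum of its pending tasks' data sizes, and a new task goes to the pod whose backlog is smallest:

```python
def backlog(pending_task_sizes):
    """Q(i) = sum of L(t) over the pod's pending tasks T(i)."""
    return sum(pending_task_sizes)

def pick_pod(pods):
    """pods: dict pod_id -> list of pending-task data sizes
    (e.g. thousands of items, or multiples of 100MB)."""
    return min(pods, key=lambda pod_id: backlog(pods[pod_id]))
```

For example, a pod with two small pending tasks is preferred over a pod holding one large task, even though its queue is longer.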
By separating out the auxiliary service which is specially used for index recalculation, carrying out resource restriction and dynamic capacity expansion on the pod of the auxiliary service, and adding a load balancing mechanism, the embodiment of the application achieves better balance among service availability, resource waste proportion and request processing fairness.
The above embodiment of the application can recalculate rapidly after the evaluation parameters are adjusted, which greatly improves the experience of the model evaluator; in addition, the recalculation after adjusting the evaluation parameters realizes dynamic pod starting and load balancing, and does not additionally occupy cluster resources when idle.
Step S306, a second model evaluation request for requesting to evaluate the second model is obtained, and the second model is obtained according to the second model evaluation request.
Model evaluation can be understood to include three parts: 1. using model inference to obtain the data of model inference (also referred to in the application as the reasoning result); 2. using the data of model inference to calculate model indexes (also referred to in the application as index generation); 3. displaying the various indexes of the model to the model evaluator. In the application, these three parts are decoupled.
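The three parts can be sketched as three independent functions chained only through their outputs, so any stage can be rerun or replaced without touching the others; the metric (accuracy) and the function names are illustrative assumptions, not the indexes actually used by the application:

```python
def run_inference(model, dataset):
    """Part 1: produce the reasoning result (run once, then cached)."""
    return [model(x) for x in dataset]

def generate_index(inference_result, labels):
    """Part 2: compute a model index from the cached reasoning result."""
    hits = sum(p == y for p, y in zip(inference_result, labels))
    return {"accuracy": hits / len(labels)}

def display_index(index):
    """Part 3: format the indexes for the model evaluator."""
    return ", ".join(f"{k}={v:.2f}" for k, v in index.items())
```

Because part 2 only depends on part 1's output, changing an index never forces inference to rerun.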
Step S307, if the model type of the second model is the same as the model type of the first model, the inference code is obtained from the inference code set.
Specifically, for convenience of description and understanding, the first model is exemplified below by a general object detection model, and the second model by a helmet detection model. Assume that the service server already supports a general object detection scenario, and a helmet detection scenario now needs to be developed.
Obviously, the first model and the second model are both detection-type scenarios, so the service server can reuse the inference steps of the first model without developing inference code again. Referring to fig. 9, fig. 9 is a schematic diagram of a data processing scenario according to an embodiment of the present application. As shown in fig. 9, the service server may construct an inference code database that may include a plurality of general inference codes, such as a general inference code corresponding to an image processing type, a general inference code corresponding to a voice processing type, and a general inference code corresponding to a text processing type.
Step S308, if the reasoning scene of the second model is different from the reasoning scene of the first model, acquiring a second index generation code associated with the reasoning scene of the second model from the index generation code set; the reasoning scene of the second model belongs to a reasoning scene under the model type of the first model; the first index generation code and the second index generation code have index generation code in common, and the first index generation code refers to the code in the index generation code set associated with the reasoning scene of the first model.
The service server may build an index generation code database that may include a plurality of initial index generation codes, such as code for generating recall (represented in fig. 9 by the detection universal recall), code for generating a precision-recall (PR) curve (represented in fig. 9 by the detection universal PR curve), and code for generating model reliability in a helmet detection scenario. As shown in fig. 9, the first index generation code may include the code for generating recall and the code for generating the PR curve, and the second index generation code may include the code for generating recall and the code for generating model reliability in the helmet detection scenario. Obviously, because both are detection-type scenarios, part of the index calculation code can be reused.
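The reuse pattern can be sketched as two registries, keyed by model type and by reasoning scene respectively; the string identifiers below are hypothetical stand-ins for the actual code objects:

```python
# Inference code is shared by every scene of the same model type.
inference_codes = {"image-detection": "generic_detection_inference"}

# Index generation code is per scene; common metrics (e.g. recall)
# appear in several detection-type scenes and are thus reused.
index_codes = {
    "generic-object-detection": ["recall", "pr_curve"],
    "helmet-detection": ["recall", "model_reliability"],
}

def codes_for(model_type, scene):
    """Look up the reusable inference code and the scene's index codes."""
    return inference_codes[model_type], index_codes[scene]
```

Adding the helmet detection scene thus only requires registering its scene-specific index code; the inference code and the shared recall code are reused as-is.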
Step S309, generating a third index for the second model according to the reasoning codes and the second index generating codes; the third index is used to indicate the model reasoning capabilities of the second model.
Specifically, the process of generating the third index by the service server is the same as the process of generating the first index by the service server, so please refer to the description of the process of generating the first index above, and the description thereof will not be repeated here.
Because the embodiment of the present application uses a unified and flexible index specification, the index display code for the third index also does not need to be developed, and the specific display manner is described in step S103 in the embodiment corresponding to fig. 3.
It can be understood that the embodiment of the application completely decouples inference, index calculation and index display in the model evaluation process, so the model evaluation framework does not need to be changed when a scenario is newly added; algorithm developers only need to adjust the inference code or the index calculation code accordingly, which greatly accelerates the process of accessing a new scenario and greatly reduces the learning cost of model evaluation.
It will be appreciated that embodiments of the present application, such as the embodiments of fig. 2 a-2 b, and the embodiments of fig. 3, 5, and 8, respectively, may be combined to create new embodiments.
In the embodiment of the application, the decoupling relation is formed between the reasoning code used for generating the reasoning result of the first model aiming at the evaluation data set and the first index generation code, so that when the second value used for updating the first value is acquired, the embodiment of the application does not need to rerun the reasoning code, and can directly run the first index generation code through the decoupling relation to generate the second index related to the reasoning result and the second value. As can be seen from the above, by performing decoupling processing on the inference code and the first index generation code, repeated generation of the inference result can be avoided when a new value of the dynamic parameter is obtained, so that waste of computing resources can be reduced; in addition, the index can be rapidly recalculated according to the generated reasoning result, so that the generation efficiency of the second index can be improved.
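The benefit summarized above can be sketched end to end: inference runs once and its result is cached, and when the dynamic parameter (here a confidence threshold, an illustrative assumption) receives a second value, only the index generation code reruns against the cached reasoning result:

```python
def infer_scores(model, dataset):
    """Expensive inference step: run once, result cached thereafter."""
    return [model(x) for x in dataset]

def positive_rate(scores, threshold):
    """Index generation code: depends only on the cached scores
    and the dynamic parameter, so it can be rerun cheaply."""
    return sum(s >= threshold for s in scores) / len(scores)

cached = infer_scores(lambda x: x / 10, [1, 4, 6, 9])   # reasoning result
first_index = positive_rate(cached, threshold=0.5)       # first value
second_index = positive_rate(cached, threshold=0.3)      # updated second value
```

Updating the threshold never re-invokes `infer_scores`, which is exactly the computing-resource saving described above.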
Further, referring to fig. 10, fig. 10 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. The data processing device 1 may be a computer program (comprising program code) running in a computer apparatus, for example the data processing device 1 is an application software; the data processing device 1 may be adapted to perform the respective steps of the method provided by the embodiments of the application. As shown in fig. 10, the data processing apparatus 1 may include: a first acquisition module 11, a first generation module 12, a second generation module 13, a second acquisition module 14, and a third generation module 15.
A first obtaining module 11, configured to obtain a first model evaluation request for requesting to evaluate a first model, and obtain, according to the first model evaluation request, the first model and an evaluation dataset for evaluating the first model;
a first generating module 12, configured to run an inference code associated with a model type of the first model, generate an inference result of the first model for the evaluation dataset, and generate a parameter list associated with the inference result; the parameter list comprises a first value of the dynamic parameter;
a second generating module 13, configured to run a first index generating code associated with the first model by using the reasoning result, and generate a first index associated with the reasoning result and the parameter list; the reasoning codes and the first index generation codes are in decoupling relation;
A second obtaining module 14, configured to obtain a second value for updating the first value according to the parameter updating request for the first value;
a third generating module 15, configured to use the reasoning result based on the decoupling relationship, run the first index generating code, and generate a second index associated with the reasoning result and the second value; the first index and the second index are used for indicating the model reasoning capacity of the first model.
The specific functional implementation manners of the first obtaining module 11, the first generating module 12, the second generating module 13, the second obtaining module 14, and the third generating module 15 may be referred to the step S101-step S105 in the corresponding embodiment of fig. 3, and will not be described herein.
Referring again to fig. 10, the data processing apparatus 1 may further include: a third acquisition module 16, a fourth acquisition module 17 and a fourth generation module 18.
A third obtaining module 16, configured to obtain a second model evaluation request for requesting to evaluate a second model, and obtain the second model according to the second model evaluation request;
the third obtaining module 16 is further configured to obtain an inference code from the inference code set if the model type of the second model is the same as the model type of the first model;
A fourth obtaining module 17, configured to obtain, from the index generation code set, a second index generation code associated with the inference scene of the second model if the inference scene of the second model is different from the inference scene of the first model; the reasoning scene of the second model belongs to a reasoning scene under the model type of the first model; the first index generation code and the second index generation code have index generation code in common, and the first index generation code refers to the code in the index generation code set associated with the inference scene of the first model;
a fourth generation module 18 for generating a third index for the second model from the inference code and the second index generation code; the third index is used to indicate the model reasoning capabilities of the second model.
The specific functional implementation manners of the third acquiring module 16, the fourth acquiring module 17 and the fourth generating module 18 may refer to step S306-step S309 in the corresponding embodiment of fig. 8, and are not described herein.
Referring again to fig. 10, the first generating module 12 may include: a first acquisition unit 121, a first operation unit 122, a parameter addition unit 123, and a second operation unit 124.
A first acquisition unit 121 for acquiring a first parameter generation code associated with an inference scene of a first model;
the first operation unit 122 is configured to, if the first model evaluation request does not carry the new parameter name, operate a first parameter generation code including the first input parameter with the reasoning result and the evaluation dataset as the first input parameter of the first parameter generation code, and generate a parameter list;
a parameter adding unit 123, configured to, if the first model evaluation request carries a new parameter name, add the new parameter name to the first parameter generating code, and obtain a second parameter generating code for the first model;
and a second operation unit 124, configured to use the reasoning result and the evaluation dataset as second input parameters of the second parameter generation code, and operate the second parameter generation code including the second input parameters to generate a parameter list.
The specific functional implementation manners of the first obtaining unit 121, the first running unit 122, the parameter adding unit 123, and the second running unit 124 may refer to step S102 in the corresponding embodiment of fig. 3, and are not described herein.
Referring again to fig. 10, the second generating module 13 may include: a first generation unit 131, a third operation unit 132, and a first determination unit 133.
A first generation unit 131 for generating a first index generation code associated with the first model;
a third operation unit 132 for generating a prediction result by operating the first index generating code including the third input parameter with the reasoning result and the parameter list as the third input parameter of the first index generating code;
a first determining unit 133 for determining a result error between the predicted result and a correct result in the evaluation dataset, and determining a first index based on the result error.
The specific functional implementation manners of the first generating unit 131, the third operating unit 132, and the first determining unit 133 may refer to step S103 in the corresponding embodiment of fig. 3, and are not described herein.
Referring again to fig. 10, the first generating unit 131 may include: a first acquisition subunit 1311, a first determination subunit 1312, and a second determination subunit 1313.
A first obtaining subunit 1311, configured to obtain, in the index generating code set, an initial index generating code associated with the inference scene of the first model;
a first determining subunit 1312, configured to determine, if the first model evaluation request does not carry the new index name, the initial index generation code as the first index generation code;
The second determining subunit 1313 is configured to generate a new index generating code for the new index name if the first model evaluation request carries the new index name, and determine the new index generating code and the initial index generating code as the first index generating code.
The specific functional implementation manners of the first acquiring subunit 1311, the first determining subunit 1312, and the second determining subunit 1313 may refer to step S103 in the corresponding embodiment of fig. 3, which is not described herein.
Referring again to fig. 10, the second generating module 13 may further include: a second generation unit 134, a third generation unit 135, and an association storage unit 136.
A second generating unit 134, configured to obtain an overall indicator display type, and generate an overall indicator corresponding to the first indicator according to the overall indicator display type; the overall index comprises an index name corresponding to the first index and an index value corresponding to the first index;
a third generating unit 135, configured to obtain an index advanced display type, and generate an advanced index corresponding to the first index through the index advanced display type; the advanced index is used for indicating the index displayed in a chart mode, and comprises index names corresponding to each evaluation type in the evaluation data set and index values corresponding to each evaluation type; the index name corresponding to each evaluation type and the index value corresponding to each evaluation type belong to the first index;
And the association storage unit 136 is configured to store the overall index and the advanced index in association.
The specific functional implementation manner of the second generating unit 134, the third generating unit 135 and the association storage unit 136 may refer to step S103 in the corresponding embodiment of fig. 3, and will not be described herein.
Referring again to fig. 10, the data processing apparatus 1 may further include: a fifth acquisition module 19, a fifth generation module 20, a result determination module 21 and an index establishment module 22.
A fifth obtaining module 19, configured to obtain an inference result processing code associated with the inference scenario of the first model, and take the inference result and the parameter list as a fourth input parameter of the inference result processing code;
a fifth generating module 20, configured to run an inference result processing code including the fourth input parameter, generate attribute information corresponding to the inference result, and an inference conclusion corresponding to the inference result;
a result determining module 21, configured to determine the inference conclusion, the first index, and the prediction result as an evaluation result; the prediction result is generated based on the reasoning result and the parameter list;
the index establishing module 22 is configured to establish an index relationship between the attribute information and the evaluation result, and store the attribute information and the evaluation result with the index relationship in an evaluation task file corresponding to the first model; the evaluation task file is used for returning a target evaluation result associated with key query information carried by the evaluation result query request in the evaluation result when the evaluation result query request aiming at the first model is obtained.
The specific functional implementation manners of the fifth obtaining module 19, the fifth generating module 20, the result determining module 21, and the index establishing module 22 may refer to step S203-step S206 in the corresponding embodiment of fig. 5, and are not described herein.
Referring again to fig. 10, the data processing apparatus 1 may further include: the sixth acquisition module 23 and the information matching module 24.
A sixth obtaining module 23, configured to obtain an evaluation result query request for the first model; the evaluating result query request carries key query information;
the information matching module 24 is used for matching the key query information with the attribute information according to the query request of the evaluation result;
the information matching module 24 is further configured to obtain, from the evaluation results, a target evaluation result having an index relationship with the target attribute information if the target attribute information in the attribute information matches the key query information.
The specific functional implementation manner of the sixth obtaining module 23 and the information matching module 24 may refer to step S206 in the corresponding embodiment of fig. 5, which is not described herein.
Referring to fig. 10 again, the number of the first index generation codes is A, where A is a positive integer; the A first index generation codes include a first index generation code B_c, where c is a positive integer and c is less than or equal to A; the number of the first indexes is A; and the A first indexes include a first index E_c generated by running the first index generation code B_c.
The third generating module 15 may include: a second acquisition unit 151, a first matching unit 152, and a fourth generation unit 153.
A second acquisition unit 151, configured to acquire an influence parameter name list D_c of the first index generation code B_c; the influence parameter name list D_c includes the parameter names used for generating the first index E_c;

a first matching unit 152, configured to match the parameter names in the influence parameter name list D_c with the second value;

a fourth generation unit 153, configured to, if the influence parameter name list D_c includes a parameter name that matches the second value, take the reasoning result and the second value as a fifth input parameter of the first index generation code B_c through the decoupling relation;

the fourth generation unit 153 is further configured to run the first index generation code B_c including the fifth input parameter to generate a prediction result F_c, and compare the prediction result F_c with the correct result in the evaluation dataset to obtain the second index.
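The influence parameter name list can be sketched as a mapping from each index generation code to the dynamic parameters that affect it, so that a parameter update triggers only the affected codes; the index and parameter names below are hypothetical:

```python
# D_c for each index generation code B_c: which dynamic parameter
# names influence the index E_c that the code generates.
influence_lists = {
    "recall": ["confidence_threshold", "iou_threshold"],
    "latency": ["batch_size"],
}

def codes_to_rerun(updated_param):
    """On a parameter update, select only the index codes whose
    influence list contains the updated parameter's name."""
    return [code for code, names in influence_lists.items()
            if updated_param in names]
```

Indexes whose influence lists do not mention the updated parameter keep their previously generated values unchanged.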
The specific functional implementation manner of the second obtaining unit 151, the first matching unit 152, and the fourth generating unit 153 may refer to step S105 in the corresponding embodiment of fig. 3, and will not be described herein.
Referring again to fig. 10, the third generating module 15 may include: a second matching unit 154, a third acquisition unit 155, and a fifth generation unit 156.
A second matching unit 154, configured to match, through a decoupling relationship, the second valued parameter name and the inference result name of the inference result;
a third obtaining unit 155, configured to obtain, from among the inference result names, a local inference result name that matches the second valued parameter name, and obtain, from among the inference results, a local inference result indicated by the local inference result name;
the fifth generating unit 156 is configured to generate the second index by running the first index generating code including the sixth input parameter with the local reasoning result and the second value as the sixth input parameter of the first index generating code.
The specific functional implementation manners of the second matching unit 154, the third obtaining unit 155 and the fifth generating unit 156 may refer to step S209 in the corresponding embodiment of fig. 5, and are not described herein.
Referring again to fig. 10, the third generating module may include: a second determination unit 157, a sixth generation unit 158, and a seventh generation unit 159.
A second determining unit 157 for determining a first index generating code and an inference result through a decoupling relationship in response to the parameter update request in the main service;
A sixth generating unit 158, configured to generate an index regeneration request according to the second value, the first index generation code, and the inference result, and send the index regeneration request to the affiliated service;
the seventh generating unit 159 is configured to execute the first index generating code according to the index regeneration request in the affiliated service, and generate the second index associated with the reasoning result and the second value.
The specific functional implementation manner of the second determining unit 157, the sixth generating unit 158 and the seventh generating unit 159 may refer to step S305 in the corresponding embodiment of fig. 8, and will not be described herein.
Referring again to FIG. 10, the affiliated service includes at least two computing components;
the sixth generation unit 158 may include: a second acquisition subunit 1581, a third determination subunit 1582, and a first transmission subunit 1583.
The second obtaining subunit 1581 is configured to determine component states corresponding to at least two computing components, and obtain, in the at least two computing components, a computing component whose component state is a component start state, and generate a start computing component set from the obtained computing components;
a third determining subunit 1582, configured to determine G waiting queue lengths of the computing components in the started computing component set, and acquire the minimum waiting queue length from the G waiting queue lengths; each computing component in the started computing component set corresponds to one waiting queue length; G is a positive integer, and G is less than or equal to the total number of the at least two computing components;

The first sending subunit 1583 is configured to send the index regeneration request to the first computing component corresponding to the minimum waiting queue length if the minimum waiting queue length is smaller than the waiting queue length threshold; the first computing component belongs to the started computing component set;
the seventh generating unit 159 is specifically configured to execute, in the first computing component, the first index generating code according to the index regeneration request, and generate the second index associated with the reasoning result and the second value.
The specific functional implementation of the second acquiring subunit 1581, the third determining subunit 1582, the first sending subunit 1583, and the seventh generating unit 159 may refer to step S305 in the corresponding embodiment of fig. 8, and will not be repeated here.
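As an illustration only (not the claimed implementation — the component structure, field names, and threshold are hypothetical), the minimum-waiting-queue dispatch described above can be sketched as follows:

```python
from dataclasses import dataclass, field

@dataclass
class ComputeComponent:
    name: str
    started: bool                      # component state: started vs. idle
    wait_queue: list = field(default_factory=list)

def pick_component(components, queue_threshold):
    # Gather the started components (the "starting computing component set").
    started = [c for c in components if c.started]
    if not started:
        return None
    # Choose the component with the minimum waiting queue length.
    best = min(started, key=lambda c: len(c.wait_queue))
    # Dispatch the index regeneration request only if that minimum
    # is below the waiting-queue-length threshold.
    return best if len(best.wait_queue) < queue_threshold else None

comps = [
    ComputeComponent("c0", started=True, wait_queue=["job1", "job2", "job3"]),
    ComputeComponent("c1", started=True, wait_queue=["job4"]),
    ComputeComponent("c2", started=False),
]
target = pick_component(comps, queue_threshold=5)   # c1 has the shortest queue
```

If the minimum queue still exceeds the threshold, `pick_component` returns `None`, which corresponds to falling through to the scale-up branch described next.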
Referring again to FIG. 10, the affiliated service includes at least two computing components;
the sixth generation unit 158 may include: a third acquisition subunit 1584, a fourth determination subunit 1585, and a second transmission subunit 1586.
The third obtaining subunit 1584 is configured to determine component states corresponding to at least two computing components, and obtain, in the at least two computing components, a computing component whose component state is a component start state, and generate a start computing component set from the obtained computing components;
A fourth determining subunit 1585, configured to determine G waiting queue lengths of the computing components in the starting computing component set, and if the average waiting queue length corresponding to the G waiting queue lengths exceeds an average waiting queue length threshold, start a computing component whose component state is a component idle state among the at least two computing components; each computing component in the starting computing component set corresponds to one waiting queue length; G is a positive integer, and G is less than or equal to the total number of the at least two computing components;
a second sending subunit 1586, configured to determine the successfully started computing component as a second computing component, and send the index regeneration request to the second computing component; and update the component state of the second computing component to the component start state;
the seventh generating unit 159 is specifically configured to execute, in the second computing component, the first index generating code according to the index regeneration request, and generate a second index associated with the reasoning result and the second value.
The specific functional implementation of the third acquiring subunit 1584, the fourth determining subunit 1585, the second sending subunit 1586, and the seventh generating unit 159 may refer to step S305 in the corresponding embodiment of fig. 8, and will not be repeated here.
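By way of a hypothetical sketch (the dict shape, field names, and threshold are illustrative assumptions, not from the patent), the average-queue-length scale-up branch described above might look like:

```python
def maybe_scale_up(components, avg_threshold):
    # components: dicts with "name", "started", "queue_len" (illustrative shape)
    started = [c for c in components if c["started"]]
    # Average waiting queue length over the started components.
    avg = sum(c["queue_len"] for c in started) / len(started)
    if avg <= avg_threshold:
        return None                    # load is acceptable; no scale-up
    for c in components:
        if not c["started"]:           # component idle state
            c["started"] = True        # update state to component start state
            return c                   # the "second computing component"
    return None                        # nothing left to start

comps = [
    {"name": "c0", "started": True,  "queue_len": 8},
    {"name": "c1", "started": True,  "queue_len": 6},
    {"name": "c2", "started": False, "queue_len": 0},
]
second = maybe_scale_up(comps, avg_threshold=5)   # average is 7, so start c2
```

The index regeneration request would then be sent to the returned component, whose state has already been updated to started.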
Referring again to fig. 10, the data processing apparatus 1 may further include: a seventh acquisition module 25 and a step execution module 26.
A seventh obtaining module 25, configured to obtain, if the first value includes the second value, a second index corresponding to the second value from the first indexes;
the step execution module 26 is configured to, if the first value does not include the second value, perform the step of running the first index generation code with the inference result based on the decoupling relationship, to generate the second index associated with the inference result and the second value.
The specific functional implementation of the seventh obtaining module 25 and the step executing module 26 may refer to step S104 in the corresponding embodiment of fig. 3, and will not be repeated here.
In the embodiment of the application, a decoupling relation exists between the reasoning code, which generates the reasoning result of the first model for the evaluation data set, and the first index generation code. Therefore, when the second value for updating the first value is acquired, the reasoning code does not need to be rerun; the first index generation code can be run directly through the decoupling relation to generate the second index associated with the reasoning result and the second value. As can be seen from the above, decoupling the reasoning code from the first index generation code avoids regenerating the reasoning result whenever a new value of the dynamic parameter is obtained, which reduces the waste of computing resources; in addition, the index can be quickly recomputed from the already generated reasoning result, which improves the generation efficiency of the second index.
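A minimal sketch of this decoupling, with a toy model and an accuracy-style metric standing in for the reasoning code and the index generation code (all names, the model, and the metric are illustrative assumptions, not the patent's implementation):

```python
def run_inference(model, dataset):
    # Expensive step (the "reasoning code"): run once, cache the scores.
    return [model(x) for x in dataset]

def compute_metric(scores, labels, threshold):
    # Cheap step (the "index generation code"): accuracy at a given
    # decision threshold, where the threshold is the dynamic parameter.
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

model = lambda x: x / 10.0            # stand-in for the first model
dataset = [1, 4, 6, 9]
labels = [0, 0, 1, 1]

scores = run_inference(model, dataset)        # reasoning result, generated once
first_index = compute_metric(scores, labels, threshold=0.5)   # first value
second_index = compute_metric(scores, labels, threshold=0.3)  # updated value,
                                                              # no re-inference
```

Because the metric function takes the cached scores rather than the model, any number of parameter updates reuse the same inference pass.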
Further, referring to fig. 11, fig. 11 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 11, the computer device 1000 may include: at least one processor 1001 (such as a CPU), at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002, where the communication bus 1002 is used to enable connected communication between these components. In some embodiments, the user interface 1003 may include a display (Display) and a keyboard (Keyboard), and the network interface 1004 may optionally include a standard wired interface or a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 1005 may optionally also be at least one storage device located remotely from the aforementioned processor 1001. As shown in fig. 11, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application.
In the computer device 1000 shown in FIG. 11, the network interface 1004 may provide network communication functions; the user interface 1003 is primarily used to provide an input interface for the user; and the processor 1001 may be used to invoke the device control application stored in the memory 1005 to implement:
acquiring a first model evaluation request for requesting to evaluate a first model, and acquiring, according to the first model evaluation request, the first model and an evaluation data set for evaluating the first model;
running an inference code associated with the model type of the first model, generating an inference result of the first model for the evaluation dataset, and generating a parameter list associated with the inference result; the parameter list comprises a first value of the dynamic parameter;
operating a first index generation code associated with the first model by utilizing the reasoning result to generate a first index associated with the reasoning result and the parameter list; the reasoning codes and the first index generation codes are in decoupling relation;
acquiring a second value for updating the first value according to the parameter updating request aiming at the first value;
operating a first index generation code based on the decoupling relation by utilizing the reasoning result to generate a second index related to the reasoning result and the second value; the first index and the second index are used for indicating the model reasoning capacity of the first model.
It should be understood that the computer device 1000 described in the embodiments of the present application may perform the description of the data processing method or apparatus in the foregoing embodiments, and will not be repeated herein. In addition, the description of the beneficial effects of the same method is omitted.
The embodiment of the present application further provides a computer readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the description of the data processing method or apparatus in each of the foregoing embodiments is implemented, and will not be repeated herein. In addition, the description of the beneficial effects of the same method is omitted.
The computer readable storage medium may be an internal storage unit of the data processing apparatus provided in any one of the foregoing embodiments or of the computer device, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the computer device. Further, the computer readable storage medium may also include both an internal storage unit and an external storage device of the computer device. The computer readable storage medium is used to store the computer program and other programs and data required by the computer device, and may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer program product comprising a computer program stored in a computer readable storage medium. The processor of the computer device reads the computer program from the computer readable storage medium, and the processor executes the computer program, so that the computer device may perform the description of the data processing method or apparatus in the foregoing embodiments, which is not described herein. In addition, the description of the beneficial effects of the same method is omitted.
The terms "first", "second" and the like in the description, claims, and drawings of embodiments of the application are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a list of steps or modules is not limited to the listed steps or modules, but may optionally include other steps or modules not listed or inherent to such process, method, apparatus, article, or device.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The foregoing disclosure is illustrative of the present application and is not to be construed as limiting the scope of the application, which is defined by the appended claims.

Claims (17)

1. A method of data processing, comprising:
acquiring a first model evaluation request for requesting to evaluate a first model, and acquiring, according to the first model evaluation request, the first model and an evaluation data set for evaluating the first model;
Running an inference code associated with a model type of the first model, generating an inference result of the first model for the evaluation dataset, and generating a parameter list associated with the inference result; the parameter list comprises a first value of a dynamic parameter;
operating a first index generation code associated with the first model by utilizing the reasoning result to generate a first index associated with the reasoning result and the parameter list; the reasoning codes and the first index generating codes are in decoupling relation;
acquiring a second value for updating the first value according to a parameter updating request aiming at the first value;
operating the first index generation code based on the decoupling relation by utilizing the reasoning result to generate a second index associated with the reasoning result and the second value; the first index and the second index are used for indicating model reasoning capacity of the first model.
2. The method according to claim 1, wherein the method further comprises:
acquiring a second model evaluation request for requesting to evaluate a second model, and acquiring the second model according to the second model evaluation request;
If the model type of the second model is the same as the model type of the first model, acquiring the reasoning codes from a reasoning code set;
if the reasoning scene of the second model is different from the reasoning scene of the first model, acquiring a second index generating code associated with the reasoning scene of the second model in an index generating code set; the reasoning scenes of the second model belong to the reasoning scenes under the model type of the first model; the first index generating code and the second index generating code have the same index generating code, and the first index generating code refers to a code associated with an inference scene of the first model in the index generating code set;
generating a code according to the reasoning code and the second index, and generating a third index aiming at the second model; the third index is used to indicate model reasoning capabilities of the second model.
3. The method of claim 1, wherein the generating the list of parameters associated with the inference result comprises:
acquiring a first parameter generation code associated with an inference scene of the first model;
If the first model evaluation request does not carry a new parameter name, the reasoning result and the evaluation data set are used as first input parameters of the first parameter generation code, and a first parameter generation code containing the first input parameters is operated to generate a parameter list;
if the first model evaluation request carries the newly-added parameter name, adding the newly-added parameter name into the first parameter generation code to obtain a second parameter generation code aiming at the first model;
and taking the reasoning result and the evaluation data set as second input parameters of the second parameter generation codes, and running the second parameter generation codes containing the second input parameters to generate a parameter list.
4. The method of claim 1, wherein using the inference results to run a first index generation code associated with the first model to generate a first index associated with the inference results and the list of parameters comprises:
generating a first index generation code associated with the first model;
taking the reasoning result and the parameter list as a third input parameter of the first index generation code, and operating the first index generation code containing the third input parameter to generate a prediction result;
And determining a result error between the predicted result and a correct result in the evaluation data set, and determining a first index according to the result error.
5. The method of claim 4, wherein the generating a first metric generation code associated with the first model comprises:
acquiring initial index generation codes associated with the reasoning scenes of the first model from an index generation code set;
if the first model evaluation request does not carry the new index name, determining the initial index generation code as a first index generation code;
if the first model evaluation request carries a new index name, generating a new index generation code aiming at the new index name, and determining the new index generation code and the initial index generation code as a first index generation code.
6. The method according to claim 4, wherein the method further comprises:
acquiring an index integral display type, and generating an integral index corresponding to the first index through the index integral display type; the overall index comprises an index name corresponding to the first index and an index value corresponding to the first index;
Acquiring an index advanced display type, and generating an advanced index corresponding to the first index through the index advanced display type; the advanced index is used for indicating an index displayed in a chart mode, and comprises index names corresponding to each evaluation type in the evaluation data set and index values corresponding to each evaluation type; the index name corresponding to each evaluation type and the index value corresponding to each evaluation type belong to the first index;
and storing the integral index and the advanced index in association with each other.
7. The method according to claim 1, wherein the method further comprises:
acquiring an inference result processing code associated with an inference scene of the first model, and taking the inference result and the first parameter as a fourth input parameter of the inference result processing code;
operating an inference result processing code containing the fourth input parameter, and generating attribute information corresponding to the inference result and an inference conclusion corresponding to the inference result;
determining the reasoning conclusion, the first index and the prediction result as evaluation results; the prediction result is generated based on the reasoning result and the parameter list;
Establishing an index relation between the attribute information and the evaluation result, and storing the attribute information and the evaluation result with the index relation in an evaluation task file corresponding to the first model; and the evaluation task file is used for returning a target evaluation result associated with the key query information carried by the evaluation result query request in the evaluation result when the evaluation result query request aiming at the first model is acquired.
8. The method of claim 7, wherein the method further comprises:
acquiring an evaluation result query request aiming at the first model; the evaluation result query request carries key query information;
according to the evaluation result query request, matching the key query information with the attribute information;
and if the target attribute information in the attribute information is matched with the key query information, acquiring a target evaluation result with an index relation with the target attribute information from the evaluation results.
9. The method of claim 1, wherein the number of first index generation codes is A, A being a positive integer; the A first index generation codes include a first index generation code B_c, where c is a positive integer and c is less than or equal to A; the number of first indexes is A; and the A first indexes include a first index E_c generated by running the first index generation code B_c;
The step of running the first index generation code by using the reasoning result based on the decoupling relation to generate a second index associated with the reasoning result and the second value comprises the following steps:
acquiring an influence parameter name list D_c of the first index generation code B_c; the influence parameter name list D_c includes parameter names used for generating the first index E_c;
matching the parameter names included in the influence parameter name list D_c with the second value;
if a parameter name matching the second value exists in the influence parameter name list D_c, taking the reasoning result and the second value as a fifth input parameter of the first index generation code B_c through the decoupling relation;
and running the first index generation code B_c containing the fifth input parameter to generate a prediction result F_c, and comparing the prediction result F_c with the correct result in the evaluation data set to obtain the second index.
10. The method of claim 1, wherein the running the first index generation code using the inference result based on the decoupling relationship generates a second index associated with the inference result and the second value, comprising:
matching the parameter name of the second value with the reasoning result name of the reasoning result through the decoupling relation;
obtaining a local reasoning result name matched with the parameter name with the second value from the reasoning result names, and obtaining a local reasoning result indicated by the local reasoning result name from the reasoning results;
and taking the local reasoning result and the second value as a sixth input parameter of the first index generation code, and running the first index generation code containing the sixth input parameter to generate a second index.
11. The method of claim 1, wherein the running the first index generation code using the inference result based on the decoupling relationship generates a second index associated with the inference result and the second value, comprising:
responding to the parameter updating request in the main service, and determining the first index generating code and the reasoning result through the decoupling relation;
Generating an index regeneration request according to the second value, the first index generation code and the reasoning result, and sending the index regeneration request to an affiliated service;
and in the affiliated service, according to the index regeneration request, running the first index generation code to generate a second index related to the reasoning result and the second value.
12. The method of claim 11, wherein the affiliated service comprises at least two computing components;
the sending the index regeneration request to an affiliated service includes:
determining component states corresponding to the at least two computing components respectively, acquiring the computing components with the component states being component starting states in the at least two computing components, and generating a starting computing component set from the acquired computing components;
determining G waiting queue lengths of the computing components in the starting computing component set, and acquiring a minimum waiting queue length from the G waiting queue lengths; each computing component in the starting computing component set corresponds to one waiting queue length; G is a positive integer, and G is less than or equal to the total number of the at least two computing components;
if the minimum waiting queue length is smaller than a waiting queue length threshold, sending the index regeneration request to a first computing component corresponding to the minimum waiting queue length; the first computing component belongs to the starting computing component set;
and in the affiliated service, according to the index regeneration request, running the first index generation code to generate a second index associated with the reasoning result and the second value, including:
and in the first computing component, according to the index regeneration request, running the first index generation code to generate a second index associated with the reasoning result and the second value.
13. The method of claim 11, wherein the affiliated service comprises at least two computing components;
the sending the index regeneration request to an affiliated service includes:
determining component states corresponding to the at least two computing components respectively, acquiring the computing components with the component states being component starting states in the at least two computing components, and generating a starting computing component set from the acquired computing components;
determining G waiting queue lengths of the computing components in the starting computing component set, and if the average waiting queue length corresponding to the G waiting queue lengths exceeds an average waiting queue length threshold, starting a computing component whose component state is a component idle state among the at least two computing components; each computing component in the starting computing component set corresponds to one waiting queue length; G is a positive integer, and G is less than or equal to the total number of the at least two computing components;
determining the successfully started computing component as a second computing component, and sending the index regeneration request to the second computing component; and updating the component state of the second computing component to the component start state;
and in the affiliated service, according to the index regeneration request, running the first index generation code to generate a second index associated with the reasoning result and the second value, including:
and in the second computing component, according to the index regeneration request, running the first index generation code to generate a second index associated with the reasoning result and the second value.
14. A data processing apparatus, comprising:
the first acquisition module is used for acquiring a first model evaluation request for requesting to evaluate a first model, and acquiring, according to the first model evaluation request, the first model and an evaluation data set for evaluating the first model;
the first generation module is used for running an inference code associated with the model type of the first model, generating an inference result of the first model aiming at the evaluation data set, and generating a parameter list associated with the inference result; the parameter list comprises a first value of a dynamic parameter;
the second generation module is used for running a first index generation code associated with the first model by utilizing the reasoning result to generate a first index associated with the reasoning result and the parameter list; the reasoning codes and the first index generating codes are in decoupling relation; the reasoning scene of the first model belongs to the reasoning scene under the model type of the first model;
the second acquisition module is used for acquiring a second value for updating the first value according to a parameter updating request aiming at the first value;
the third generation module is used for running the first index generation code by utilizing the reasoning result based on the decoupling relation to generate a second index associated with the reasoning result and the second value; the first index and the second index are used for indicating model reasoning capacity of the first model.
15. A computer device, comprising: a processor, a memory, and a network interface;
the processor is connected to the memory and the network interface, wherein the network interface is configured to provide a data communication function, the memory is configured to store a computer program, and the processor is configured to invoke the computer program to cause the computer device to perform the method of any of claims 1 to 13.
16. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program adapted to be loaded and executed by a processor to cause a computer device having the processor to perform the method of any of claims 1-13.
17. A computer program product, characterized in that the computer program product comprises a computer program stored in a computer readable storage medium, the computer program being adapted to be read and executed by a processor to cause a computer device having the processor to perform the method of any of claims 1-13.
CN202310096599.2A 2023-01-17 2023-01-17 Data processing method, device, equipment and computer readable storage medium Pending CN116974898A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310096599.2A CN116974898A (en) 2023-01-17 2023-01-17 Data processing method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116974898A true CN116974898A (en) 2023-10-31

Family

ID=88475440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310096599.2A Pending CN116974898A (en) 2023-01-17 2023-01-17 Data processing method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116974898A (en)


Legal Events

Date Code Title Description
PB01 Publication