CN113554180B - Information prediction method, information prediction device, electronic equipment and storage medium - Google Patents

Information prediction method, information prediction device, electronic equipment and storage medium

Info

Publication number
CN113554180B
Authority
CN
China
Prior art keywords
prediction
model
information
service
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110738469.5A
Other languages
Chinese (zh)
Other versions
CN113554180A (en)
Inventor
王洋
雍高鹏
李殿亚
孙叔琦
李婷婷
常月
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110738469.5A
Publication of CN113554180A
Application granted
Publication of CN113554180B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0201 Market modelling; Market analysis; Collecting market data
    • G06Q 30/0202 Market predictions or forecasting for commercial activities

Abstract

The present disclosure provides an information prediction method, an information prediction apparatus, an electronic device, and a storage medium, relating to the field of computer technology, and in particular to artificial intelligence fields such as deep learning and big data processing. The specific implementation scheme is as follows: receive an information prediction request; determine the prediction service category corresponding to the information prediction request; determine the model loading manner corresponding to that prediction service category; and load the target prediction model based on the model loading manner, where the loaded target prediction model executes the target prediction service corresponding to the prediction service category to obtain result information for the prediction request. Thus, in application scenarios where prediction requests are infrequent, and compared with a scheme in which a single prediction service preloads all models, the disclosed embodiments can greatly reduce resource consumption at the cost of only a slightly longer average response time. Complex logic is also decoupled, reducing the complexity of single-service information prediction and effectively improving development efficiency.

Description

Information prediction method, information prediction device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, in particular to artificial intelligence fields such as deep learning and big data processing, and more particularly to an information prediction method and apparatus, an electronic device, and a storage medium.
Background
Artificial intelligence is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), covering technologies at both the hardware and software levels. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning, deep learning, big data processing, and knowledge graph technologies.
In the related art, in the information prediction process, a prediction model is usually loaded into a local memory of an information prediction platform in advance, so as to provide a corresponding information prediction service.
Disclosure of Invention
The disclosure provides an information prediction method, an information prediction apparatus, an electronic device, a storage medium, and a computer program product.
According to a first aspect of the present disclosure, there is provided an information prediction method, including: receiving an information prediction request; determining a prediction service category corresponding to the information prediction request; determining a model loading manner corresponding to the prediction service category; and loading a target prediction model based on the model loading manner, wherein the loaded target prediction model executes a target prediction service corresponding to the prediction service category to obtain result information for the prediction request.
According to a second aspect of the present disclosure, there is provided an information prediction apparatus, including: a receiving module configured to receive an information prediction request; a first determining module configured to determine a prediction service category corresponding to the information prediction request; a second determining module configured to determine a model loading manner corresponding to the prediction service category; and a loading module configured to load a target prediction model based on the model loading manner, wherein the loaded target prediction model executes a target prediction service corresponding to the prediction service category to obtain result information for the prediction request.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the information prediction method as in the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the information prediction method as in the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the information prediction method as in the first aspect.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is an architectural diagram of an information forecasting platform in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a prediction microservice in an embodiment of the present disclosure;
FIG. 6 is a schematic illustration of a fourth embodiment according to the present disclosure;
FIG. 7 is a schematic diagram according to a fifth embodiment of the present disclosure;
FIG. 8 shows a schematic block diagram of an example electronic device that may be used to implement the information prediction methods of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure.
It should be noted that the main execution body of the information prediction method of this embodiment is an information prediction apparatus, the apparatus may be implemented by software and/or hardware, the apparatus may be configured in an electronic device, and the electronic device may include, but is not limited to, a terminal, a server, and the like.
The embodiment of the disclosure relates to the technical field of artificial intelligence such as deep learning and big data processing.
Artificial Intelligence, abbreviated in English as AI, is a new technical science that studies and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence.
Deep learning learns the intrinsic laws and representation levels of sample data; the information obtained during learning is very helpful for interpreting data such as text, images, and sounds. Its ultimate goal is to enable machines to analyze and learn like humans and to recognize data such as text, images, and sounds.
Big data processing refers to analyzing and processing large-scale data by means of artificial intelligence. Big data can be summarized by the five Vs: large Volume, high Velocity, wide Variety, Value, and Veracity.
In this embodiment, the execution subject of the information prediction method may obtain information in various public, lawful, and compliant manners; for example, the information may be obtained from a public information set, or obtained from the user with the user's authorization. Such information does not reflect the personal information of any particular user.
As shown in fig. 1, the information prediction method includes:
S101: An information prediction request is received.
A request for predicting information may be referred to as an information prediction request, and the information may be any predictable information, such as passenger flow volume information, weather information, or environment information, without limitation.
It should be noted that, the information prediction requests in the embodiments of the present disclosure are all obtained in compliance with relevant laws and regulations.
For example, the information prediction apparatus may provide an information prediction interface in advance and receive, through that interface, the information the user wants predicted. The apparatus may then convert this demand into a request message and obtain the corresponding information from the background information prediction service based on that message; such a request message may be referred to as an information prediction request, without limitation.
For example, an application scenario of the embodiment of the present disclosure may be: each time a user initiates an information prediction request, the information prediction apparatus obtains a set of prediction models in response to that request.
The information prediction device may be configured as an intelligent dialog customization platform, or may be a customization platform of any other type of artificial intelligence technology, which is not limited in this respect.
The following description of the embodiments of the present disclosure may be exemplified by the above application scenario, and the embodiments of the present disclosure may also be applied to any other possible information prediction application scenario, which is not limited thereto.
S102: a predicted service class corresponding to the information prediction request is determined.
Assume the information to be predicted is passenger flow volume information, weather information, or environment information. Because the categories of information to be predicted differ, the reference data and prediction processing logic involved in predicting them also differ. A targeted prediction service can therefore be configured for each category, and prediction services that predict different categories of information correspond to different prediction service categories, such as a passenger flow prediction category, a weather prediction category, and an environment prediction category, without limitation.
That is to say, in the embodiment of the present disclosure, after receiving the information prediction request, the prediction service class corresponding to the information prediction request may be determined, and then the adaptive model loading manner may be determined based on the prediction service class.
For example, feature extraction may be performed on the information prediction request and on each prediction service category, to obtain a feature vector for the request and a category vector for each category. The similarity between the feature vector and a category vector may be obtained by calculating the vector cosine between them. The calculated similarity is then compared with a preset similarity threshold, and if it reaches or exceeds the threshold, the corresponding prediction service category for the information prediction request may be determined.
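A minimal sketch of the similarity comparison described above, assuming toy feature vectors; the function names, category names, and the 0.8 threshold are illustrative, not taken from the patent:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_service_class(request_vec, class_vecs, threshold=0.8):
    """Return the prediction service category whose vector is most
    similar to the request vector, if the similarity reaches the
    preset threshold; otherwise return None."""
    best_class, best_sim = None, threshold
    for cls, vec in class_vecs.items():
        sim = cosine_similarity(request_vec, vec)
        if sim >= best_sim:
            best_class, best_sim = cls, sim
    return best_class

# Example: a request vector compared against two category vectors.
classes = {"traffic": [1.0, 0.0, 0.2], "weather": [0.0, 1.0, 0.1]}
print(match_service_class([0.9, 0.1, 0.2], classes))  # traffic
```

Any vectorization scheme could stand behind these toy vectors; the point is only the threshold comparison step.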
Of course, any other possible manner may be adopted to implement the method, and the prediction service category corresponding to the information prediction request is determined in this embodiment, which is not limited to this.
S103: and determining a model loading mode corresponding to the prediction service class.
After the predicted service type corresponding to the information prediction request is determined, a model loading mode corresponding to the predicted service type can be determined.
The method and/or form of model loading may be referred to as a model loading manner.
In some embodiments, when determining the model loading manner corresponding to the predicted service class, the loading manner of the model may be determined by determining information such as a storage location of the model corresponding to the predicted service class, a size of a resource occupied by the model, a real-time requirement for model loading, and the like, which is not limited herein.
For example, the storage location of the model corresponding to the prediction service category may be used to determine the loading manner. If it is determined that the storage location of model a, corresponding to prediction service category A, is platform a, and the loading manner configured for platform a is manner 1, then the loading manner of the model corresponding to prediction service category A can be determined to be manner 1.
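The two-step lookup just described can be sketched as a pair of tables; all names here (categories, locations, manners) are hypothetical placeholders:

```python
# Hypothetical registry mapping a prediction service category to the
# storage location of its model, and each location to a loading manner.
MODEL_LOCATION = {"category_a": "platform_a", "category_b": "middleware_b"}
LOADING_MANNER = {"platform_a": "load_from_local",
                  "middleware_b": "load_from_middleware"}

def loading_manner_for(service_category):
    """Resolve the model loading manner for a prediction service
    category via the storage location of its model."""
    location = MODEL_LOCATION[service_category]
    return LOADING_MANNER[location]

print(loading_manner_for("category_a"))  # load_from_local
```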
S104: and loading the target prediction model based on a model loading mode, wherein the target prediction model executes target prediction service corresponding to the prediction service type after loading so as to obtain result information according to the information prediction request.
After the model loading manner corresponding to the prediction service category is determined, the target prediction model may be loaded based on the model loading manner.
Here, a model loaded from the platform in response to a user's information prediction request may be referred to as the target prediction model; it is capable of providing the information prediction service corresponding to the information prediction request.
In this embodiment, an information prediction request is received; the prediction service category corresponding to the request is determined; the model loading manner corresponding to that category is determined; and the target prediction model is loaded based on the model loading manner, where the loaded model executes the target prediction service corresponding to the prediction service category to obtain result information for the request. In application scenarios where prediction requests are infrequent, and compared with a scheme in which a single prediction service preloads all models, the disclosed embodiments can greatly reduce resource consumption at the cost of only a slightly longer average response time. Complex logic is also decoupled, reducing the complexity of single-service information prediction and effectively improving development efficiency.
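The four steps S101 to S104 can be sketched end to end as follows; the class name, the toy classifier, and the loader wiring are all illustrative assumptions, not the patent's implementation:

```python
class InformationPredictor:
    """Sketch of the flow: receive a request, determine the prediction
    service category, determine the loading manner, then load the
    target prediction model on demand and run it."""

    def __init__(self, classify, manner_for, loaders):
        self.classify = classify      # request -> service category
        self.manner_for = manner_for  # category -> loading manner
        self.loaders = loaders        # manner -> model-loading function

    def predict(self, request):
        category = self.classify(request)       # S102
        manner = self.manner_for(category)      # S103
        model = self.loaders[manner](category)  # S104: load on demand
        return model(request)                   # run the target service

# Toy wiring: one category, one loader, a model that echoes its input.
predictor = InformationPredictor(
    classify=lambda req: "traffic",
    manner_for=lambda cat: "local",
    loaders={"local": lambda cat: (lambda req: f"prediction for {req}")},
)
print(predictor.predict("2021-06-30"))  # prediction for 2021-06-30
```

The design point is that models are loaded per request via the resolved manner, rather than all being preloaded.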
Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure.
As shown in fig. 2, the information prediction method includes:
S201: An information prediction request is received.
For an example of S201, reference may be made to the foregoing embodiments, and details are not described herein again.
S202: and determining the information category to be predicted corresponding to the information prediction request.
After receiving the information prediction request, the information category to be predicted corresponding to the information prediction request can be determined.
The information prediction request includes the category of the information to be predicted. That is, based on the information prediction request, the information prediction platform can determine whether the information to be predicted is passenger flow volume information, weather information, or environment information; the category describing the passenger flow volume information, weather information, or environment information may be referred to as the category of information to be predicted, without limitation.
In some other embodiments, the prediction microservice corresponding to the information prediction request may also be analyzed, and the category of information to be predicted then determined based on the type of model capable of providing that prediction microservice, without limitation.
For example, suppose the prediction microservice corresponding to information prediction request A is microservice a, and the type of model capable of providing that microservice is model type A; likewise, the prediction microservice corresponding to information prediction request B is microservice b, served by model type B. Then, according to the prediction microservices involved, the category of information to be predicted may be determined as category A for request A and category B for request B, without limitation.
S203: and determining candidate prediction information categories corresponding to the information categories to be predicted from the pre-configured corresponding relations.
In the information prediction process, the pre-configured prediction information category may be referred to as a candidate prediction information category, and the pre-configured correspondence between the information category to be predicted and the candidate prediction information category may be referred to as a pre-configured correspondence.
After the category of information to be predicted corresponding to the information prediction request has been determined as above, the candidate prediction information category corresponding to it may be determined from the preconfigured correspondence.
For example, if, in the preconfigured correspondence, information category A to be predicted corresponds to candidate prediction information category A, and information category B to be predicted corresponds to candidate prediction information category B, then candidate prediction information categories A and B can be determined for information categories A and B, respectively, according to the preconfigured correspondence.
S204: and determining at least one candidate prediction service class corresponding to the candidate prediction information class, and taking the at least one candidate prediction service class as the corresponding prediction service class.
In the information prediction process, the pre-configured predicted service class may be referred to as a candidate predicted service class.
After determining the candidate prediction information category corresponding to the information category to be predicted from the pre-configured corresponding relationship, at least one candidate prediction service category corresponding to the candidate prediction information category may be determined, and the at least one candidate prediction service category may be used as the corresponding prediction service category.
That is, the at least one candidate predicted service category corresponding to the candidate prediction information category may be determined according to a pre-configured correspondence, i.e., a correspondence between the candidate prediction information category and the at least one candidate predicted service category.
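The correspondence chain of S202 to S204 can be sketched as two preconfigured tables; the category names here are hypothetical:

```python
# Hypothetical preconfigured correspondences: information category to be
# predicted -> candidate prediction information category -> one or more
# candidate prediction service categories.
INFO_TO_CANDIDATE = {"info_a": "candidate_a", "info_b": "candidate_b"}
CANDIDATE_TO_SERVICES = {
    "candidate_a": ["service_a1", "service_a2"],
    "candidate_b": ["service_b1"],
}

def service_categories_for(info_category):
    """Resolve the prediction service categories for an information
    category via the preconfigured correspondence."""
    candidate = INFO_TO_CANDIDATE[info_category]
    return CANDIDATE_TO_SERVICES[candidate]

print(service_categories_for("info_a"))  # ['service_a1', 'service_a2']
```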
Thus, the category of information to be predicted corresponding to the information prediction request is determined; a candidate prediction information category corresponding to it is determined from the preconfigured correspondence; and at least one candidate prediction service category corresponding to the candidate prediction information category is determined and used as the corresponding prediction service category. Because the prediction service category is determined according to a preconfigured correspondence, the boundary of the prediction service involved in the request can be determined quickly and accurately, which helps the adapted prediction model to be invoked quickly afterwards, greatly improves the efficiency of identifying the prediction service category, and improves the response efficiency of information prediction.
S205: and determining a model loading mode corresponding to the prediction service class.
For an example of S205, reference may be made to the foregoing embodiments, and details are not described herein.
S206: a model class corresponding to the predicted service class is determined.
After the model loading manner corresponding to the predicted service class is determined, the model class corresponding to the predicted service class can be determined.
The category describing the model corresponding to a prediction service category (that is, a model capable of providing the corresponding category of prediction service) may be referred to as the model category. The model category may be a complex model category or a simple model category, without limitation.
In other embodiments, the models may be classified according to other characteristics of the models, and the models may be models capable of providing information prediction services, such as neural network models in artificial intelligence, or machine learning models, which is not limited herein.
S207: if the model class is a first model class, a first target prediction model is loaded locally from the information prediction platform.
In this embodiment, the first model category is the complex model category and, correspondingly, the second model category is the simple model category. A prediction model whose category is the first model category may be referred to as the first target prediction model and, correspondingly, a prediction model whose category is the second model category may be referred to as the second target prediction model, without limitation.
That is, in this embodiment, the first resource consumption value corresponding to the first target prediction model is greater than the second resource consumption value corresponding to the second target prediction model. Therefore, different loading modes can be configured for different models according to the resource consumption values of the different models, so that the loading modes of the models can be adapted to different prediction scenes.
This embodiment may be described with reference to fig. 3. As shown in fig. 3, fig. 3 is a schematic diagram of the architecture of an information prediction platform according to an embodiment of the present disclosure. The information prediction platform includes three major system components: model training, the model deployment and service registration center, and model prediction. The model training component reads the data configured by the user to perform model training. After training is finished, the trained model is first persistently stored in the model file system, and the local business data of the information prediction platform is updated accordingly (for example, the version number and model path the user customized for this model); the model deployment and service registration center is then notified to perform the model prediction deployment service.
After receiving the notification from model training, the model deployment and service registration center actively reads the platform's business data, constructs a deployment environment for model prediction, and executes deployment logic specific to the model category: if the model category is the complex model category, the complex model is pushed to the information prediction platform's local storage; if it is the simple model category, the simple model is written into the model storage middleware for persistent storage. The model prediction component then loads the model from the file system or from the model storage middleware, so that the information prediction service can be executed when an information prediction request is received.
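The category-specific deployment dispatch described above can be sketched as follows; the store structures and return messages are illustrative assumptions, with in-memory dicts standing in for the platform's local storage and the model storage middleware:

```python
def deploy_model(model_name, model_category, local_store, middleware_store):
    """Sketch of the deployment logic: complex models are pushed to the
    platform's local store; simple models are written to the model
    storage middleware for persistent storage."""
    if model_category == "complex":
        local_store[model_name] = f"binary:{model_name}"
        return "pushed to platform local"
    elif model_category == "simple":
        middleware_store[model_name] = f"binary:{model_name}"
        return "written to storage middleware"
    raise ValueError(f"unknown model category: {model_category}")

local, middleware = {}, {}
print(deploy_model("m1", "complex", local, middleware))  # pushed to platform local
print(deploy_model("m2", "simple", local, middleware))   # written to storage middleware
```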
In this embodiment, if the model category is a complex model category, the first target prediction model may be locally loaded from the information prediction platform, and the first target prediction model may be pre-stored locally to the information prediction platform in the model training process.
S208: and if the model type is the second model type, loading a second target prediction model from model storage middleware, wherein the model storage middleware is used for storing the trained second target prediction model, and the first target prediction model is different from the second prediction model.
For example, the embodiment may be described in detail with reference to fig. 3, as shown in fig. 3, if the model type is a simple model type, the second target prediction model may be loaded from a model storage middleware, and the model storage middleware is used to store the trained second target prediction model.
Alternatively, in some embodiments, as shown in fig. 4, fig. 4 is a schematic diagram according to a third embodiment of the present disclosure, and if the model class is the second model class, the second target prediction model is loaded from among the model storage middleware.
In this embodiment, the model category corresponding to the prediction service category is determined and, depending on that category, the target prediction model may be loaded either locally from the information prediction platform or from the model storage middleware. An adapted loading manner can thus be configured for the model categories providing different information prediction services, which improves the flexibility of model loading and avoids occupying extra storage resources through premature loading, so that the loading manner better matches actual prediction scenarios. At the same time, compared with the simple approach of a single service loading the model on every request, a large increase in response time is avoided.
S401: it is determined whether a first target prediction model corresponding to a first model class exists locally on the information prediction platform.
After determining that the model class is the first model class, it may be determined whether a first target prediction model corresponding to the first model class exists locally on the information prediction platform.
S402: and if so, locally loading the first target prediction model from the information prediction platform.
When the first target prediction model corresponding to the first model type already exists locally on the information prediction platform, the locally stored copy can be loaded directly; that copy was obtained by pulling and downloading the first target prediction model from the file system corresponding to the information prediction platform when the information prediction request was received for the first time.
That is to say, the model loading manner in the embodiment of the present disclosure supports lazy loading: when an information prediction request is received for the first time, the first target prediction model is pulled and downloaded locally from the file system corresponding to the information prediction platform. When an information prediction request indicating the same prediction service type is received again, the first target prediction model can be loaded directly from the information prediction platform's local storage, without acquiring it again from the file system, which effectively improves the loading efficiency of the first target prediction model providing the information prediction service.
S403: and if not, loading a first target prediction model corresponding to the first model type from a file system corresponding to the information prediction platform, wherein the file system is used for carrying out persistent storage on the first target prediction model.
In this embodiment, if it is determined that the first target prediction model corresponding to the first model category does not exist locally on the information prediction platform, the first target prediction model corresponding to the first model category may be loaded from the file system corresponding to the information prediction platform.
And the file system is used for carrying out persistent storage on the first target prediction model.
In this embodiment, because the model loading manner supports lazy loading, the first target prediction model is pulled and downloaded locally from the file system corresponding to the information prediction platform when the information prediction request is received for the first time; when an information prediction request indicating the same prediction service type is received again, the first target prediction model can be loaded directly from the information prediction platform's local storage without acquiring it again from the file system. In addition, when the first target prediction model has not been stored locally in advance, loading it from the file system corresponding to the information prediction platform is still supported. The loading efficiency of the first target prediction model providing the information prediction service is thereby effectively improved, the continuity of the model loading manner is guaranteed, blocking of the model loading processing logic is avoided, and both loading efficiency and loading effect are improved.
S209: and determining a target prediction micro service corresponding to the prediction service type, wherein the target prediction micro service is obtained by performing micro-service processing on the target prediction service corresponding to the prediction service type in advance.
This embodiment may be described with reference to fig. 5, which is a schematic diagram of the prediction microservices in the embodiment of the present disclosure. The target prediction microservice may be one of a plurality of prediction microservices: the prediction microservice corresponding to the prediction service class is referred to as the target prediction microservice. The target prediction service corresponding to the prediction service class is subjected to microservice processing in advance, which assists in decoupling the fusion model and in improving the calling efficiency of the model's prediction service.
Based on the above illustration of the prediction microservice in fig. 5, in the embodiment of the present disclosure, a multi-tenant access information prediction service platform may be supported, and each tenant may invoke an information prediction service provided by the information prediction platform by invoking a model prediction entry general control service shown in fig. 5.
The prediction service may be split into prediction service 1, prediction service 2, ..., prediction service M, which are then converted into the corresponding prediction microservice 1, prediction microservice 2, ..., prediction microservice N. The model files corresponding to the plurality of prediction microservices may likewise be split into model file 1, model file 2, and so on. The plurality of prediction microservices and the model files can be deployed on a plurality of different servers, realizing distributed storage of the prediction microservices and the model files. This effectively avoids the limitation imposed by the hard-disk read speed of a single server and thereby improves the efficiency of model loading to a large extent.
For example, when the prediction service is split, reference may be made to relevant theories of software engineering (such as the single-responsibility principle, the closure principle, and Conway's law) and the specific demands of the prediction service scenario, or reference may be made to the size of the model file providing the information prediction service in order to perform the corresponding splitting operation, which is not limited to this.
The size of a model file is related to the prediction business scenario requirements (for example, the prediction time-consumption requirement) and to the complexity of the model (the computing power required to load it).
For example: predicting business scenario requirements versus current model (ith model M)i) The predicted time-consuming requirements are: predicting time consumption is less than or equal to tiThe unit: second, disk read speed is constant S, unit: m/sec, common diskThe equipment can reach 1000 MB/S, and the speed of the current model for carrying out the structuring processing on the data is SiIn MB/sec, the maximum value of the model size (i) is less than or equal to ti*(S+Si) Max (size (i) ≦ ti*(S+Si)。
It should be noted that, in the actual calculation, load factors such as a disk and a network may also be considered.
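The size bound above can be written as a small helper. The function name and the sample speeds are illustrative choices, not values fixed by the disclosure:

```python
def max_model_size_mb(t_i: float, disk_speed: float, struct_speed: float) -> float:
    """Upper bound on model file size from max(size(i)) <= t_i * (S + S_i).

    t_i          -- prediction time budget in seconds
    disk_speed   -- disk read speed S in MB/s
    struct_speed -- structured-processing speed S_i of the model in MB/s
    """
    return t_i * (disk_speed + struct_speed)
```

For instance, with a 0.1-second budget, S = 1000 MB/s, and an assumed S_i = 500 MB/s, the bound is 150 MB (before accounting for disk and network load factors).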
Taking the information prediction platform as an intelligent dialogue platform as an example, the prediction time of an intelligent dialogue prediction service is usually within 100 milliseconds, and the prediction models can be divided into: a configuration class, a complex rule class, and a depth model class (or, of course, any other possible model class).
Assuming that the target time consumption upper limit of the prediction service corresponding to the models of the configuration class, the complex rule class and the depth model class is 100 milliseconds during prediction, then:
1) a model of the configuration class mainly consists of files with a relatively small data size (hundreds of KB); loading the model is fast (millisecond level) and the model is also suitable for frequent network transmission (millisecond level), so the prediction service can fetch it from the model storage middleware each time a prediction request is received;
2) a model of the complex rule class consists of files with a relatively large data size (tens of MB); loading the model is comparatively fast (tens of milliseconds), but the model is not suitable for frequent network transmission (tens of milliseconds), so the prediction service should not load it frequently;
3) a model of the depth model class consists of files with a large data size (hundreds of MB); loading the model is slow (second level) and the model is not suitable for frequent network transmission (hundreds of milliseconds), so the prediction service should not load it frequently.
For the prediction model of the complex rule class, the embodiment of the disclosure can support loading the model with a lazy loading technique, thereby effectively reducing the memory occupied by the prediction service in the multi-tenant mode.
In some other embodiments, the data format of the target prediction model may be configured to be a binary data format in the model training stage, for example, training data or other related data involved in the training process of the target prediction model may be serialized into a binary data format by using a serialization technology, so that the data amount of the model may be effectively reduced, thereby assisting in improving the efficiency of model loading and meeting the requirements of information prediction scenarios.
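A minimal illustration of serializing model-related data into a binary format, with Python's `pickle` standing in for whatever serialization technology the platform actually uses:

```python
import pickle

# Hypothetical model-related data produced during training.
model_data = {"weights": [0.12, -0.5, 3.1], "labels": ["yes", "no"]}

blob = pickle.dumps(model_data)   # compact binary on-disk representation
restored = pickle.loads(blob)     # deserialized again at model-loading time

assert isinstance(blob, bytes)
assert restored == model_data
```

The binary blob is what would be written to the file system or the model storage middleware and read back at load time.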
In some other embodiments, lazy unloading may also be supported: by monitoring the information prediction requests, when a certain condition is satisfied (for example, no information prediction request is received within a set time range), the model corresponding to the information prediction request may be unloaded to release the memory of the server.
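The unloading condition described above (no request within a set time range) might be sketched as follows; the class name, the injectable clock, and the sweep policy are all assumptions for illustration:

```python
import time

class IdleUnloader:
    """Sketch of lazy unloading: evict a model when no prediction request has
    arrived for it within `idle_seconds`. The time source is injectable so
    the policy can be tested deterministically."""

    def __init__(self, idle_seconds: float, clock=time.monotonic):
        self.idle_seconds = idle_seconds
        self.clock = clock
        self.last_seen = {}        # model_name -> time of last request

    def touch(self, model_name: str):
        """Record that a prediction request hit this model just now."""
        self.last_seen[model_name] = self.clock()

    def sweep(self) -> list:
        """Return the idle models to unload and forget them, freeing memory."""
        now = self.clock()
        expired = [m for m, t in self.last_seen.items()
                   if now - t > self.idle_seconds]
        for m in expired:
            del self.last_seen[m]
        return expired
```

A periodic `sweep` (e.g. from a background thread) would then hand the expired names to whatever actually releases the model's memory.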
S210: and executing the target prediction micro service to predict result information according to the information prediction request.
After the target prediction model is loaded and the target prediction micro service corresponding to the prediction service type is determined, the target prediction micro service can be executed to obtain result information according to the information prediction request.
The embodiment may be described in detail with reference to fig. 5, and as shown in fig. 5, different model files may be loaded from different servers by executing different target prediction microservices according to an information prediction request, so as to implement an information prediction process.
Therefore, in the embodiment, after the target prediction model is loaded, the target prediction micro service corresponding to the prediction service type is determined, and the target prediction micro service is obtained by performing micro-service processing on the target prediction service corresponding to the prediction service type in advance; and executing the target prediction microservice to predict the result information according to the information prediction request. Therefore, distributed storage for the forecast microservices and the model files is achieved, limitation of the hard disk reading speed of a single server can be effectively avoided, and the model loading efficiency is improved to a large extent.
In the embodiment, an information prediction request is received and the type of information to be predicted corresponding to the request is determined; the candidate prediction information category corresponding to that type is determined from the pre-configured correspondence; and at least one candidate prediction service category corresponding to the candidate prediction information category is determined and taken as the corresponding prediction service category. Because the prediction service type corresponding to the information prediction request is determined from the pre-configured correspondence, the boundary of the prediction service involved in the request can be determined rapidly and accurately, which assists in rapidly invoking the prediction model adapted to that service, improves the identification efficiency of the prediction service type to a large extent, and improves the response efficiency of information prediction. The model loading manner corresponding to the prediction service type is then determined: by determining the model category corresponding to the prediction service type, the target prediction model can be loaded from the information prediction platform's local storage or from the model storage middleware. An adaptive model loading manner can thus be configured for the model categories providing different information prediction services, which improves the flexibility of model loading, avoids occupying memory resources through premature loading, makes the loading manner better adapted to actual prediction scenario requirements, and assists in improving the response efficiency of information prediction.
After the target prediction model is loaded, determining target prediction micro-services corresponding to the prediction service types, wherein the target prediction micro-services are obtained by performing micro-service processing on the target prediction services corresponding to the prediction service types in advance; and executing the target prediction micro service to predict result information according to the information prediction request. Therefore, distributed storage for the forecast microservices and the model files is achieved, limitation of the hard disk reading speed of a single server can be effectively avoided, and the model loading efficiency is improved to a large extent.
Fig. 6 is a schematic diagram according to a fourth embodiment of the present disclosure.
As shown in fig. 6, the information prediction apparatus 60 includes:
a receiving module 601, configured to receive an information prediction request;
a first determining module 602, configured to determine a predicted service class corresponding to the information prediction request;
a second determining module 603, configured to determine a model loading manner corresponding to the predicted service class; and
the loading module 604 is configured to load the target prediction model based on a model loading manner, where the target prediction model executes a target prediction service corresponding to the prediction service category after loading, so as to obtain result information according to the information prediction request.
In some embodiments of the present disclosure, as shown in fig. 7, fig. 7 is a schematic diagram according to a fifth embodiment of the present disclosure, and the information prediction apparatus 70 includes: the device comprises a receiving module 701, a first determining module 702, a second determining module 703 and a loading module 704, wherein the loading module 704 comprises:
a determining submodule 7041 configured to determine a model class corresponding to the predicted service class;
the first loading submodule 7042 is configured to, when the model type is the first model type, locally load the first target prediction model from the information prediction platform, where the information prediction platform is configured to obtain result information according to the prediction of the information prediction request;
the second loading sub-module 7043 is configured to, when the model type is the second model type, load the second target prediction model from the model storage middleware, where the model storage middleware is configured to store the trained second target prediction model, and the first target prediction model is different from the second target prediction model.
In some embodiments of the present disclosure, the first loading submodule 7042 is specifically configured to:
determining whether a first target prediction model corresponding to a first model category exists locally on the information prediction platform;
if yes, locally loading a first target prediction model from the information prediction platform;
and if not, loading a first target prediction model corresponding to the first model type from a file system corresponding to the information prediction platform, wherein the file system is used for carrying out persistent storage on the first target prediction model.
In some embodiments of the present disclosure, further comprising:
a third determining module 705, configured to determine a target predicted micro-service corresponding to the predicted service category, where the target predicted micro-service is obtained by performing micro-service processing on the target predicted service corresponding to the predicted service category in advance;
and the execution module 706 is configured to execute the target prediction microservice to obtain result information according to the information prediction request.
In some embodiments of the present disclosure, the first resource consumption value corresponding to the first target prediction model is greater than the second resource consumption value corresponding to the second target prediction model.
In some embodiments of the present disclosure, the first determining module 702 is specifically configured to:
determining the type of information to be predicted corresponding to the information prediction request;
determining candidate prediction information categories corresponding to the information categories to be predicted from the pre-configured corresponding relations;
determining at least one candidate prediction service category corresponding to the candidate prediction information category, and taking the at least one candidate prediction service category as a corresponding prediction service category;
wherein, the corresponding relation includes: a candidate prediction information category, and at least one candidate prediction service category corresponding to the candidate prediction information category.
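A hypothetical sketch of such a pre-configured correspondence — the categories and service-class names below are invented for illustration, not taken from the disclosure:

```python
# Invented example of the pre-configured correspondence: candidate prediction
# information category -> at least one candidate prediction service category.
CORRESPONDENCE = {
    "dialogue_intent": ["intent_classification", "slot_filling"],
    "dialogue_reply": ["reply_generation"],
}

def predicted_service_classes(info_class_to_predict: str) -> list:
    """Map the type of information to be predicted to its prediction service classes."""
    candidate_info_class = info_class_to_predict      # matched candidate category
    return CORRESPONDENCE[candidate_info_class]       # one or more service classes
```

The lookup makes the boundary of the prediction services involved in a request explicit, which is what lets the platform call only the adapted models.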
In some embodiments of the present disclosure, wherein the data format of the target prediction model is a binary data format or a text format.
It is understood that, between the information prediction apparatus 70 in fig. 7 of the present embodiment and the information prediction apparatus 60 in the foregoing embodiment, the receiving module 701 and the receiving module 601, the first determining module 702 and the first determining module 602, the second determining module 703 and the second determining module 603, and the loading module 704 and the loading module 604 may respectively have the same functions and structures.
It should be noted that the explanation of the information prediction method is also applicable to the information prediction apparatus of the present embodiment, and is not repeated here.
In the embodiment, the information prediction request is received; the prediction service class corresponding to the information prediction request is determined; the model loading manner corresponding to the prediction service class is determined; and the target prediction model is loaded based on the model loading manner, where the loaded target prediction model executes the target prediction service corresponding to the prediction service category so as to obtain result information according to the information prediction request. In an application scenario where prediction requests are infrequent, compared with the technical scheme in which a single prediction service preloads all models, the embodiment of the disclosure can greatly reduce resource consumption at the cost of only a slight increase in average response time. Complex logic is also decoupled, which reduces the complexity of single-service information prediction and effectively improves research and development efficiency.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 8 shows a schematic block diagram of an example electronic device that may be used to implement the information prediction methods of embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 800 includes a computing unit 801, which can perform various appropriate actions and processes in accordance with a computer program stored in a read-only memory (ROM) 802 or a computer program loaded from a storage unit 808 into a random access memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806 such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 801 executes the respective methods and processes described above, such as the information prediction method. For example, in some embodiments, the information prediction method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When loaded into the RAM 803 and executed by the computing unit 801, the computer program may perform one or more of the steps of the information prediction method described above. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the information prediction method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable information prediction apparatus, such that the program codes, when executed by the processor or controller, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system that overcomes the defects of high management difficulty and weak service expansibility in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (12)

1. An information prediction method, comprising:
receiving an information prediction request;
determining a predicted service class corresponding to the information prediction request;
determining a model loading mode corresponding to the prediction service type; and
loading a target prediction model based on the model loading mode, wherein the target prediction model executes target prediction service corresponding to the prediction service category after loading so as to obtain result information according to the information prediction request;
wherein the loading of the target prediction model based on the model loading manner includes:
determining a model class corresponding to the predicted service class;
if the model type is a first model type, locally loading a first target prediction model from an information prediction platform, wherein the information prediction platform is used for predicting to obtain result information according to the information prediction request;
if the model type is a second model type, loading a second target prediction model from model storage middleware, wherein the model storage middleware is used for storing the trained second target prediction model, and the first target prediction model is different from the second target prediction model;
a first resource consumption value corresponding to the first target prediction model is larger than a second resource consumption value corresponding to the second target prediction model;
the locally loading a first target prediction model from an information prediction platform comprises: when an information prediction request is received for the first time, the first target prediction model is pulled and downloaded to the local from a file system corresponding to the information prediction platform.
2. The method of claim 1, wherein said locally loading a first target prediction model from an information prediction platform if the model class is a first model class comprises:
determining whether a first target prediction model corresponding to the first model category exists locally on the information prediction platform;
if yes, locally loading the first target prediction model from an information prediction platform;
and if not, loading a first target prediction model corresponding to the first model type from a file system corresponding to the information prediction platform, wherein the file system is used for carrying out persistent storage on the first target prediction model.
3. The method of claim 1, further comprising:
determining a target prediction micro-service corresponding to the prediction service type, wherein the target prediction micro-service is obtained by performing micro-service processing on the target prediction service corresponding to the prediction service type in advance;
and executing the target prediction micro service to obtain result information according to the information prediction request.
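A minimal sketch of the per-service micro-service dispatch in claim 3. The registry, decorator, and handler names are illustrative assumptions; the claim only requires that each prediction service type map to a micro-service obtained in advance:

```python
# Hypothetical registry mapping each prediction service type to the
# micro-service that was split out from the target prediction service.
MICROSERVICE_REGISTRY = {}

def microservice(service_type):
    """Register a function as the micro-service for a prediction service type."""
    def register(handler):
        MICROSERVICE_REGISTRY[service_type] = handler
        return handler
    return register

@microservice("text-classification")
def classify_text(request):
    # Placeholder prediction logic, for illustration only.
    return {"label": "positive", "input": request}

def execute(service_type, request):
    """Determine the target prediction micro-service and execute it."""
    return MICROSERVICE_REGISTRY[service_type](request)
```

Executing `execute("text-classification", ...)` then routes the information prediction request to the registered handler and returns its result information.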
4. The method of claim 1, wherein the determining a prediction service type corresponding to the information prediction request comprises:
determining an information type to be predicted corresponding to the information prediction request;
determining a candidate prediction information type corresponding to the information type to be predicted from a pre-configured correspondence;
determining at least one candidate prediction service type corresponding to the candidate prediction information type, and taking the at least one candidate prediction service type as the corresponding prediction service type;
wherein the correspondence comprises: the candidate prediction information type, and the at least one candidate prediction service type corresponding to the candidate prediction information type.
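The two-step lookup described above amounts to a pre-configured correspondence table. A hypothetical sketch, with the table contents invented purely for illustration:

```python
# Hypothetical pre-configured correspondence: each information type to be
# predicted maps to a candidate prediction information type, which in turn
# maps to one or more candidate prediction service types.
INFO_TO_PREDICTION_TYPE = {
    "user-query": "intent",
}
PREDICTION_TYPE_TO_SERVICES = {
    "intent": ["intent-classification", "slot-filling"],
}

def prediction_service_types(info_type):
    """Resolve an information type to its candidate prediction service types."""
    candidate = INFO_TO_PREDICTION_TYPE[info_type]
    return PREDICTION_TYPE_TO_SERVICES[candidate]
```

Under this reading, a single request may resolve to several candidate prediction service types, each of which is then taken as a prediction service type to serve.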
5. The method according to any one of claims 1-4, wherein the data format of the target prediction model is a binary data format or a text format.
6. An information prediction apparatus comprising:
the receiving module is used for receiving an information prediction request;
a first determining module, configured to determine a prediction service type corresponding to the information prediction request;
the second determining module is used for determining a model loading mode corresponding to the prediction service type; and
the loading module is used for loading a target prediction model based on the model loading mode, wherein the target prediction model executes target prediction service corresponding to the prediction service type after being loaded so as to obtain result information according to the information prediction request;
wherein, the loading module comprises:
a determining submodule for determining a model type corresponding to the prediction service type;
the first loading submodule is used for locally loading a first target prediction model from an information prediction platform when the model type is a first model type, wherein the information prediction platform is used for predicting to obtain result information according to the information prediction request;
a second loading sub-module, configured to load a second target prediction model from a model storage middleware when the model type is a second model type, where the model storage middleware is configured to store the trained second target prediction model, and the first target prediction model is different from the second target prediction model;
a first resource consumption value corresponding to the first target prediction model is larger than a second resource consumption value corresponding to the second target prediction model;
the locally loading a first target prediction model from an information prediction platform comprises: when an information prediction request is received for the first time, the first target prediction model is pulled and downloaded to the local from a file system corresponding to the information prediction platform.
7. The apparatus of claim 6, wherein the first load submodule is specifically configured to:
determining whether a first target prediction model corresponding to the first model type exists locally on the information prediction platform;
if yes, locally loading the first target prediction model from the information prediction platform;
and if not, loading a first target prediction model corresponding to the first model type from a file system corresponding to the information prediction platform, wherein the file system is used for persistently storing the first target prediction model.
8. The apparatus of claim 6, further comprising:
a third determining module, configured to determine a target prediction micro-service corresponding to the prediction service type, where the target prediction micro-service is obtained by performing micro-service processing on the target prediction service corresponding to the prediction service type in advance;
and the execution module is used for executing the target prediction micro service so as to obtain result information according to the information prediction request.
9. The apparatus of claim 6, wherein the first determining module is specifically configured to:
determining an information type to be predicted corresponding to the information prediction request;
determining a candidate prediction information type corresponding to the information type to be predicted from a pre-configured correspondence;
determining at least one candidate prediction service type corresponding to the candidate prediction information type, and taking the at least one candidate prediction service type as the corresponding prediction service type;
wherein the correspondence comprises: the candidate prediction information type, and the at least one candidate prediction service type corresponding to the candidate prediction information type.
10. The apparatus according to any one of claims 6-9, wherein the data format of the target prediction model is a binary data format or a text format.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-5.
CN202110738469.5A 2021-06-30 2021-06-30 Information prediction method, information prediction device, electronic equipment and storage medium Active CN113554180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110738469.5A CN113554180B (en) 2021-06-30 2021-06-30 Information prediction method, information prediction device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113554180A CN113554180A (en) 2021-10-26
CN113554180B true CN113554180B (en) 2022-05-31

Family

ID=78131161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110738469.5A Active CN113554180B (en) 2021-06-30 2021-06-30 Information prediction method, information prediction device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113554180B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114443896B (en) * 2022-01-25 2023-09-15 百度在线网络技术(北京)有限公司 Data processing method and method for training predictive model
CN116321244B (en) * 2023-02-01 2023-12-15 广州爱浦路网络技术有限公司 Method for setting timeliness of detailed information of N3IWFs/TNGFs, computer apparatus and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030200134A1 (en) * 2002-03-29 2003-10-23 Leonard Michael James System and method for large-scale automatic forecasting
US8370280B1 (en) * 2011-07-14 2013-02-05 Google Inc. Combining predictive models in predictive analytical modeling
GB2541625A (en) * 2014-05-23 2017-02-22 Datarobot Systems and techniques for predictive data analytics
US20160171590A1 (en) * 2014-11-10 2016-06-16 0934781 B.C. Ltd Push-based category recommendations
US10855561B2 (en) * 2016-04-14 2020-12-01 Oracle International Corporation Predictive service request system and methods
KR20180117800A (en) * 2017-04-20 2018-10-30 주식회사 비아이큐브 Method for providing asset portfolio recommendating service

Also Published As

Publication number Publication date
CN113554180A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
CN111523640B (en) Training method and device for neural network model
CN113554180B (en) Information prediction method, information prediction device, electronic equipment and storage medium
CN113361578B (en) Training method and device for image processing model, electronic equipment and storage medium
US20220101199A1 (en) Point-of-interest recommendation
CN112527281B (en) Operator upgrading method and device based on artificial intelligence, electronic equipment and medium
CN113627536A (en) Model training method, video classification method, device, equipment and storage medium
CN114494815B (en) Neural network training method, target detection method, device, equipment and medium
CN114494776A (en) Model training method, device, equipment and storage medium
CN114186681A (en) Method, apparatus and computer program product for generating model clusters
CN114449343A (en) Video processing method, device, equipment and storage medium
CN114360027A (en) Training method and device for feature extraction network and electronic equipment
CN114239853A (en) Model training method, device, equipment, storage medium and program product
CN113904943A (en) Account detection method and device, electronic equipment and storage medium
CN113378855A (en) Method for processing multitask, related device and computer program product
CN113704256B (en) Data identification method, device, electronic equipment and storage medium
CN114998649A (en) Training method of image classification model, and image classification method and device
CN114445668A (en) Image recognition method and device, electronic equipment and storage medium
CN113361574A (en) Training method and device of data processing model, electronic equipment and storage medium
CN113886543A (en) Method, apparatus, medium, and program product for generating an intent recognition model
CN113220367A (en) Applet running method and device, electronic equipment and storage medium
CN114386577A (en) Method, apparatus, and storage medium for executing deep learning model
CN113963011A (en) Image recognition method and device, electronic equipment and storage medium
CN113469732A (en) Content understanding-based auditing method and device and electronic equipment
CN113554062A (en) Training method, device and storage medium of multi-classification model
CN113361621A (en) Method and apparatus for training a model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant