CN116225424A - Universal model effect display method, device, equipment and storage medium - Google Patents

Universal model effect display method, device, equipment and storage medium

Info

Publication number
CN116225424A
Authority
CN
China
Prior art keywords
model
interface
effect display
request
models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310185117.0A
Other languages
Chinese (zh)
Inventor
闫光远
代久龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310185117.0A priority Critical patent/CN116225424A/en
Publication of CN116225424A publication Critical patent/CN116225424A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/36 Software reuse
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/70 Software maintenance or management
    • G06F 8/71 Version control; Configuration management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Abstract

The disclosure provides a general model effect display method, device, equipment and storage medium, and relates to the field of computer technology, in particular to the field of artificial intelligence. The implementation scheme is as follows: first, a model effect display service for displaying model effects in an interface of a client is created, and interface parameters of artificial intelligence (AI) models of a plurality of categories are acquired, so that the corresponding AI models can be called through these interface parameters in subsequent steps; then, for each category of AI model, the model effect display service is configured according to the interface parameters of that AI model to obtain a target model effect display service corresponding to the AI model of each category; the target model effect display service displays, in the interface of the client, the model effect of the AI model of the corresponding category based on content to be identified input by a user.

Description

Universal model effect display method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, in particular to the field of artificial intelligence, and specifically to a general model effect display method, device, equipment and storage medium.
Background
With the rapid development of artificial intelligence (AI) technology, the variety and number of AI model products continue to grow. A massive number of AI models can only be landed in concrete artificial intelligence application scenarios, and ultimately deliver their value, through promotion and sales. Promotion and sales of an AI model, in turn, rely on visual display effects to intuitively demonstrate the functions and advantages of the AI model.
Disclosure of Invention
The disclosure provides a general model effect display method, device, equipment and storage medium.
According to a first aspect of the present disclosure, there is provided a general model effect display method, including: creating a model effect display service, where the model effect display service is used to display model effects in an interface of a client; acquiring interface parameters of artificial intelligence (AI) models of a plurality of categories, where the interface parameters of an AI model are used to call that AI model; and, for each category of AI model, configuring the model effect display service according to the interface parameters of the AI model to obtain a target model effect display service corresponding to the AI model of each category; the target model effect display service is used to display, in the interface of the client, the model effect of the AI model of the corresponding category based on content to be identified input by a user.
According to a second aspect of the present disclosure, there is provided a general model effect display device, including: a creating unit, configured to create a model effect display service, where the model effect display service is used to display model effects in an interface of a client; an acquisition unit, configured to acquire interface parameters of artificial intelligence (AI) models of a plurality of categories, where the interface parameters of an AI model are used to call that AI model; and a processing unit, configured to, for each category of AI model, configure the model effect display service according to the interface parameters of the AI model to obtain a target model effect display service corresponding to the AI model of each category; the target model effect display service is used to display, in the interface of the client, the model effect of the AI model of the corresponding category based on content to be identified input by a user.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the methods of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions, comprising:
the computer instructions are for causing a computer to perform any of the methods of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising:
a computer program which, when executed by a processor, performs any of the methods of the first aspect.
The technology of the present disclosure solves the problems of heavy development workload, low development efficiency and high labor cost involved in developing front-end display interfaces corresponding to different categories of AI models.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a general model effect display method provided by an embodiment of the present disclosure;
FIG. 2 is a diagram of an example interface presentation of a client provided by an embodiment of the present disclosure;
FIG. 3 is a diagram of another example interface presentation of a client provided by an embodiment of the present disclosure;
FIG. 4 is a flow chart of another general model effect display method provided by an embodiment of the present disclosure;
FIG. 5 is a flow chart of yet another general model effect display method provided by an embodiment of the present disclosure;
FIG. 6 is a flow chart of yet another general model effect display method provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of the implementation logic corresponding to a general model effect display method provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a general model effect display device provided by an embodiment of the present disclosure;
FIG. 9 is a block diagram of an electronic device for a general model effect display method provided by an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of users' personal information comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
Before the general model effect display method is described in detail, an application scenario of the embodiments of the present disclosure is first described.
An AI model can only be landed in concrete artificial intelligence application scenarios, and ultimately deliver its value, through promotion and sales. Promotion and sales of an AI model require visual display effects that intuitively demonstrate the functions and effects of the AI model.
AI models are diverse, including various categories such as text recognition models, speech recognition models and video recognition models, and the formats of the service interface request bodies and return bodies of different categories of AI models are not uniform; the names, number and display modes of their dynamic parameters are likewise not uniform. As a result, a single front-end presentation interface cannot be used to present the effects of all categories of AI models.
In the existing scheme for displaying AI model recognition results, customized development work is carried out separately for each category of AI model to adapt to its personalized interface, so that a visual display interface corresponding to each category of AI model is eventually developed and the model effect is displayed. Specifically, the personalized features corresponding to the interface of each category of AI model need to be configured before the visual display interface corresponding to that category of AI model can be developed.
The personalized features (referred to as interface parameters in the embodiments of the present disclosure) corresponding to the interface of an AI model include: the model interface request address, authentication information, request body format type, return body format type, request body template, return body template, request body dynamic parameters, return body dynamic parameters, and the like.
Because the existing scheme requires interface adaptation work to be developed for each specific category of AI model, the development workload is heavy and the labor cost is high. Moreover, the existing scheme requires not only customized development but also testing, verification and deployment, which take a long time, so the model effect display function comes online slowly and inefficiently. In addition, if an AI model is iteratively upgraded and its interface parameters change, an interface error is directly displayed at the front end and the recognition result of the AI model cannot be displayed normally; the scheme is therefore inflexible and poorly adaptable.
To solve the above problems, the embodiments of the present disclosure provide a general model effect display method, applied to application scenarios in which the model effects of AI models of different categories are displayed. In this method, a model effect display service for displaying model effects in an interface of a client is first created, and interface parameters of AI models of a plurality of categories are then acquired so that the corresponding AI models can be called through these interface parameters in subsequent steps; further, for each category of AI model, the model effect display service is configured according to the interface parameters of that AI model to obtain a target model effect display service corresponding to the AI model of each category; the target model effect display service then displays, in the interface of the client, the model effect of the AI model of the corresponding category based on the content to be identified input by the user.
It can be appreciated that the present disclosure can configure a pre-created model effect display service based on the interface parameters of AI models of a plurality of categories to obtain a target model effect display service corresponding to each category of AI model, so that the target model effect display service corresponding to each category of AI model displays, in the interface of the client, the model effect of the AI model of that category based on the content to be identified input by the user. Thus, when the user inputs content to be identified in the interface of the client, the model effect can be displayed through the target model effect display service corresponding to the AI model of the corresponding category. In this way, for AI models of different categories, the pre-created model effect display service can be configured according to their interface parameters to generate the target model effect display service corresponding to each category of AI model, without carrying out separate customized development work for each AI model to adapt to its personalized service interface and develop a dedicated front-end display interface. This improves the efficiency of obtaining dedicated model effect display services corresponding to different categories of AI models and reduces the development workload.
The execution subject of the general model effect display method provided by the present disclosure may be a general model effect display device, and this execution device may be a server. The execution device may also be a central processing unit (CPU) of the server, or a processing module in the server used to generate the target model effect display service. In the embodiments of the present disclosure, the general model effect display method is described by taking the case where it is executed by a server as an example.
It should be noted that the embodiments of the present disclosure do not limit the server. The server may be an independent physical server, a server cluster or distributed file system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks, big data or artificial intelligence platforms. In addition, the client in the embodiments of the present disclosure may be installed in an electronic device, which may be a tablet computer, a mobile phone, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook or the like; the specific form of the electronic device is not particularly limited in the embodiments of the present disclosure.
The general model effect display method provided by the embodiments of the present disclosure specifically includes two stages: the first stage is the stage of configuring the target model effect display service corresponding to each category of AI model; the second stage is the stage of displaying the effect of an AI model. The scheme provided by the embodiments is described below in terms of these two stages.
As shown in FIG. 1, a general model effect display method provided by an embodiment of the present disclosure covers the stage of configuring the target model effect display service corresponding to each category of AI model, that is, the first stage. The method may include:
S101, creating a model effect display service.
The model effect display service is used for displaying the model effect in an interface of the client.
In the embodiments of the present disclosure, when target model effect display services (also referred to as dedicated model effect display services) corresponding to a plurality of categories of AI models need to be developed, a generic model effect display service may be created in advance, so that this pre-created generic service can be configured based on the interface parameters of each category of AI model among the plurality of categories, to obtain the target model effect display service corresponding to each category of AI model.
Optionally, the generic model effect display service is a pre-built basic service.
It can be understood that the pre-created model effect display service is a generic service; a developer can obtain a dedicated model effect display service simply by further configuring some model-specific parameters, which improves development efficiency and reduces development workload.
S102, acquiring interface parameters of AI models of a plurality of categories.
Wherein the interface parameters of the AI model are used to invoke the AI model.
In one possible implementation manner, the interface parameters corresponding to the AI models of each category may be directly obtained from the electronic device storing the AI models of a plurality of categories, where the electronic devices storing the interface parameters of the AI models of different categories may be the same or different. In another possible implementation manner, interface parameters corresponding to AI models of multiple classes may also be stored locally in advance. The method for acquiring the interface parameters corresponding to the AI model is not particularly limited in the present disclosure.
Optionally, the AI models of the plurality of categories may include image recognition models, text recognition models, speech recognition models, and the like, and each category of AI model may include a plurality of different models.
In one possible implementation, the interface parameters include at least one of: the model interface requests an address, authentication information, a request body format type, a return body format type, a request body template, a return body template, a request body dynamic parameter, and a return body dynamic parameter.
The model interface request address is used to call the corresponding AI model, and the authentication information is used to authenticate request messages generated based on the target model effect display service; the request body format type, request body template and request body dynamic parameters are used to generate the request message; the return body format type, return body template and return body dynamic parameters are used to determine, from the response message, the recognition result corresponding to the content to be identified.
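As a purely illustrative aid (not part of the patent), the interface parameters listed above could be held in a simple data structure; the Python sketch below uses hypothetical field names and placeholder values, assuming JSON request and return bodies with the name and shortName fields discussed later in the description.

    from dataclasses import dataclass, field

    @dataclass
    class ModelInterfaceParams:
        # Hypothetical container for the interface parameters of one AI model;
        # the field names are illustrative and not prescribed by the disclosure.
        request_address: str         # model interface request address, used to call the AI model
        auth_info: str               # authentication information carried by each call request
        request_body_format: str     # "JSON", "text" or "XML"
        return_body_format: str      # kept consistent with the request body format type
        request_body_template: dict  # skeleton into which the content to be identified is filled
        return_body_template: dict   # skeleton describing where the recognition result sits
        request_body_dynamic_params: list = field(default_factory=list)  # e.g. ["requestBody.name"]
        return_body_dynamic_params: dict = field(default_factory=dict)   # e.g. {"result.shortName": "enterprise abbreviation"}

    # Placeholder example for a hypothetical text recognition model.
    text_model_params = ModelInterfaceParams(
        request_address="http://example.invalid/text-model",
        auth_info="verification-code-123",
        request_body_format="JSON",
        return_body_format="JSON",
        request_body_template={"requestBody": {"name": ""}},
        return_body_template={"result": {"shortName": ""}},
        request_body_dynamic_params=["requestBody.name"],
        return_body_dynamic_params={"result.shortName": "enterprise abbreviation"},
    )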
S103, configuring the model effect display service according to the interface parameters of the AI model aiming at the AI model of each category so as to obtain the target model effect display service corresponding to the AI model of each category.
The target model effect display service is used for displaying model effects of AI models of corresponding categories in an interface of the client based on the content to be identified input by the user.
In one possible implementation, after the target model effect display service corresponding to each category of AI model is obtained, it may be released online, so that a user can view the model effect of the corresponding AI model through the required target model effect display service.
The interfaces that display the model effects of different categories of AI models (i.e., the client interfaces described above) may be different.
For example, as shown in FIG. 2, the client presentation interface corresponding to a text recognition model includes: a text input area, a recognition result display area, a button for initiating a recognition request, a format type, an upper limit on the number of characters, and the like.
As shown in FIG. 3, the client presentation interface corresponding to an image recognition model includes: an image input area, a recognition result display area, a button for initiating a recognition request (a start analysis control), an image address input area, and the like.
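For illustration only, the composition of these per-category client presentation interfaces could be captured as configuration data; the Python sketch below is a hypothetical mapping inferred from the elements listed for FIG. 2 and FIG. 3, not an interface definition taken from the patent.

    # Hypothetical description of which regions each category's client interface contains;
    # the keys and component names are illustrative only.
    CLIENT_INTERFACE_LAYOUTS = {
        "text_recognition": [
            "text_input_area",
            "recognition_result_display_area",
            "initiate_recognition_request_button",
            "format_type_selector",
            "character_upper_limit_hint",
        ],
        "image_recognition": [
            "image_input_area",
            "image_address_input_area",
            "recognition_result_display_area",
            "start_analysis_button",
        ],
    }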
In one possible implementation, before displaying the client presentation interface corresponding to the AI model of any of the categories, a model category selection interface may be displayed, and the user may select the AI model of the desired category in the model category selection interface, thereby triggering the display of the corresponding interface (e.g., displaying the interface shown in FIG. 2 or the interface shown in FIG. 3).
As shown in FIG. 4, another general model effect display method provided by an embodiment of the present disclosure covers the stage of configuring the target model effect display service corresponding to each category of AI model, that is, the first stage. The method may include:
S401, creating a model effect display service.
S402, acquiring interface parameters of AI models of a plurality of categories.
Note that the specific description of S401 is the same as that of S101, and the specific description of S402 is the same as that of S102; they are not repeated here.
S403, acquiring configuration information of interface parameters of the AI model for each type of AI model.
S404, configuring configuration information of the interface parameters into a model effect display service to obtain a target model effect display service corresponding to the AI model of each category.
Optionally, operation and maintenance personnel can select or input the interface parameters corresponding to AI models of different categories in the configuration interface of the model effect display service. The server then obtains the interface parameters and derives the corresponding configuration information from them, so that the server can configure the configuration information of the corresponding interface parameters into the model effect display service.
For example, the model effect display service may be understood as a piece of code (referred to as code 1, for example), and the configuration information may be understood as another piece of code (referred to as code 2, for example, where code 2 includes the interface parameters). When the operation and maintenance personnel select or input the interface parameters corresponding to an AI model in the configuration interface of the model effect display service, the server obtains the corresponding interface parameters, derives the configuration information of the interface parameters from them, and configures the configuration information into the model effect display service. Here, configuring the configuration information into the model effect display service can be understood as follows: the server nests (or modifies) code 2 (or the interface parameters in code 2) into the corresponding parameters of code 1 to establish a relationship between the two pieces of code, as sketched below.
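The following is a loose sketch of this "code 1 / code 2" nesting idea, with assumed class and method names rather than the patent's actual implementation: the generic service simply records the per-model configuration and reuses it for every later call.

    class ModelEffectDisplayService:
        # Generic, pre-created service ("code 1"); all model-specific behaviour is
        # injected only through the configuration information ("code 2").
        def __init__(self):
            self.config = None

        def configure(self, interface_param_config: dict):
            # "Nesting" code 2 into code 1: the generic service stores the
            # model-specific interface parameters and uses them for later calls.
            self.config = dict(interface_param_config)
            return self  # the configured instance acts as the target model effect display service

    # One target service per category of AI model, all built from the same generic code.
    text_target_service = ModelEffectDisplayService().configure({
        "request_address": "http://example.invalid/text-model",  # placeholder
        "auth_info": "verification-code-123",                    # placeholder
        "request_body_format": "JSON",
        "return_body_format": "JSON",
    })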
Specifically, for AI models of different categories, after determining the model category to be configured (such as a text recognition model or an image recognition model), the operation and maintenance personnel can determine the style of the corresponding client display interface, and then configure the configuration information of the corresponding interface parameters in the configuration interface of the model effect display service.
Specifically, the operation and maintenance personnel may input the interface request address corresponding to the AI model in the corresponding area of the configuration interface of the model effect display service. The server then obtains the interface request address and derives its configuration information from it, and the server can configure the configuration information of the interface request address corresponding to the AI model into the model effect display service.
It should be noted that the interface request address corresponding to the AI model is used to call the AI model; it may be the address of a server storing the AI model, or a local storage address of the AI model.
The interface request address corresponding to the AI model may be a link (e.g., http://...). The operation and maintenance personnel may input http://... in the configuration interface, and the server then obtains the interface request address http://... and configures its configuration information into the model effect display service.
After the configuration information of the interface request address corresponding to the AI model is configured in the pre-created model effect display service to obtain the target model effect display service, the AI model can be invoked based on the configuration information of the interface request address in the target model effect display service.
Specifically, the operation and maintenance personnel can also input authentication information corresponding to the AI model in a corresponding area in the configuration interface of the model effect display service. The server may obtain the authentication information. Based on the obtained authentication information, the server may obtain configuration information of the authentication information. And then, the server can configure the configuration information of the authentication information corresponding to the AI model into the model effect display service.
The authentication information corresponding to the AI model can be obtained from the AI model, and the authentication information is used for guaranteeing the security of the AI model call.
In one possible implementation, each AI model may be pre-configured with a unique authentication information.
For example, the authentication information corresponding to the AI model may be verification code information.
The configuration information of the authentication information is used to add the authentication information in a request message (which may also be referred to as a call request) requesting the invocation of the AI model. As an example, the configuration information of the authentication information may be used to add an Authorization field to the request message, such as in a request header of the request message, and populate the authentication information (such as verification code information) in the field when the request message is generated. Therefore, when the AI model is called based on the target model effect display service, the AI model can be successfully called only after the authentication information passes the authentication, so that the interface security of the AI model is ensured.
That is, the target model effect display service is allowed to call the AI model only when the authentication information included in the request header of the request message is correct; when the authentication information in the request header is incorrect, or the request header does not include authentication information, the target model effect display service is not allowed to call the AI model.
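A minimal sketch of this authentication step follows, assuming a simple verification-code comparison; the patent does not specify the authentication algorithm, and the names and values below are placeholders.

    EXPECTED_AUTH = "verification-code-123"  # hypothetical verification code pre-assigned to the AI model

    def build_request_headers(auth_info: str) -> dict:
        # The configuration information of the authentication information adds an
        # Authorization field to the request header when the request message is generated.
        return {"Authorization": auth_info, "Content-Type": "application/json"}

    def is_call_allowed(headers: dict) -> bool:
        # The AI model may be called only when the request header carries the correct
        # authentication information; a missing or incorrect value rejects the call.
        return headers.get("Authorization") == EXPECTED_AUTH

    assert is_call_allowed(build_request_headers("verification-code-123"))  # correct code: allowed
    assert not is_call_allowed({})                                          # missing code: rejected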
Specifically, the operation and maintenance personnel can configure the request body format type and the return body format type in the corresponding area of the configuration interface of the model effect display service. The server then obtains the request body format type and the return body format type, derives the corresponding configuration information from them, and configures the configuration information of the request body format type and the return body format type into the model effect display service. Subsequently, the operation and maintenance personnel can configure the corresponding request body template and return body template according to the request body format type and the return body format type.
The format type of the request body is consistent with the format type of the return body, and the format type is any one of the following: JSON, text, XML.
For example, in the configuration interface of the model effect display service, any one of a plurality of format types such as JSON, text and XML may be selected from a drop-down option to configure the request body format type and the return body format type.
In one possible implementation, the request-body format type may be determined according to the content to be identified, and the return-body format type may be determined according to the request-body format type, e.g., both remain identical.
The operation and maintenance personnel can also configure the request body template and the return body template according to the request body format type and the return body format type. The request body template is consistent with the request body format type, and the return body template is consistent with the return body format type.
Illustratively, taking the format type JSON as an example, the request body template configured by the operation and maintenance personnel may be:
(The request body template is shown as an image, Figure BDA0004104111530000101, in the original publication.)
also exemplary, taking the format type JSON as an example, the configured return body template may be:
(The return body template is shown as an image, Figure BDA0004104111530000102, in the original publication.)
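The two templates appear only as images in the original publication; the Python dictionaries below are a hypothetical reconstruction inferred from the requestBody.name and result.shortName fields referenced later in the description, and are not the filed templates themselves.

    # Hypothetical reconstruction of the JSON templates (field names inferred from the
    # surrounding text; the originals are shown only as images in the filing).
    request_body_template = {
        "requestBody": {
            "name": ""          # later filled with the content to be identified (requestBody.name)
        }
    }

    return_body_template = {
        "result": {
            "shortName": ""     # later carries the recognition result (result.shortName)
        }
    }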
after the operation and maintenance personnel configure the request body template, the server can obtain the request body template and obtain the configuration information of the request body template based on the obtained request body template. The server may configure configuration information of the request body template corresponding to the AI model into the model effect display service. Similarly, after the operation and maintenance personnel configures the return body template, the server may obtain the return body template, and obtain configuration information of the return body template based on the obtained return body template. The server may configure the configuration information of the returned body template corresponding to the AI model into the model effect display service.
Specifically, the operation and maintenance personnel can configure the request body dynamic parameters and the return body dynamic parameters in the corresponding areas in the configuration interface of the model effect display service. The request body dynamic parameters are used for filling the content to be identified into the request body template, and the return body dynamic parameters are used for determining the identification result corresponding to the content to be identified from the response message based on the return body template.
After the operation and maintenance personnel configures the dynamic parameters of the request body, the server can obtain the dynamic parameters of the request body, and obtain the configuration information of the dynamic parameters of the request body based on the obtained dynamic parameters of the request body. The server may configure configuration information of the dynamic parameters of the request body corresponding to the AI model into the model effect display service. Similarly, after the operation and maintenance personnel configures the returned body dynamic parameter, the server may obtain the returned body dynamic parameter, and obtain configuration information of the returned body dynamic parameter based on the obtained returned body dynamic parameter. The server may configure configuration information of the returned body dynamic parameters corresponding to the AI model into the model effect display service. The configuration information of the dynamic parameters of the request body is used for indicating the multi-layer nesting relationship of the request body, and the configuration information of the dynamic parameters of the return body is used for indicating the multi-layer nesting relationship of the return body.
The request body dynamic parameter may refer to a field (or fields) corresponding to an input item in the client interface of the AI model, for example the name field in the request body template of the above example that needs to be filled in; the value of this field is manually input by the user in the client interface. The return body dynamic parameter may refer to a field of the recognition result displayed in the client interface of the AI model, for example one or more fields on the right side of the interface shown in FIG. 2 or FIG. 3. These recognition results are contained in the corresponding fields of the return body, such as the shortName field in the return body template of the above example.
Illustratively, taking the request body dynamic parameter as the name field to be filled in the request body template as an example, the request body dynamic parameter may be the text content input in the client interface of the AI model. The request body dynamic parameter field can be configured as "requestBody.name"; that is, the configuration information of the request body dynamic parameter may be requestBody.name, which indicates the multi-layer nesting relationship of the request body, specifically the name field inside requestBody. Therefore, after the user manually inputs text in the client interface, based on this configuration information in the target model effect display service, the text manually input by the user can be filled into the request body template as the value of the name field and carried when the corresponding AI model is called.
For example, when the text input by the user in the client interface is "health science and technology limited", based on the configuration information requestBody.name configured in the target model effect display service, the server can nest the text into the name field of the request body template to obtain the corresponding request body, which is as follows:
(The resulting request body is shown as an image, Figure BDA0004104111530000121, in the original publication.)
also, for example, the field of the dynamic parameter of the return body may be configured in a manner of "result. Name: enterprise abbreviation", that is, the configuration information of the dynamic parameter of the return body may be result. Short name: enterprise abbreviation, where the configuration information indicates a multi-layer nesting relationship of the corresponding return body, specifically: the shortname field in result. Wherein the text following the colon represents the content of this field presented in the client's interface.
In the embodiments of the present disclosure, when the model effect display service is configured, it can be configured according to the configuration information of the interface parameters of any category of AI model, so that the target model effect display service corresponding to that category of AI model is obtained. That is, by configuring the configuration information of the interface parameters of different categories of AI models into the model effect display service, the target model effect display services corresponding to the different categories of AI models can be obtained. In this way, personalized parameter configuration can be performed on the pre-created model effect display service to obtain the required target model effect display service, which improves the efficiency of building model effect display services.
As shown in FIG. 5, a general model effect display method provided by an embodiment of the present disclosure covers the stage of displaying an AI model effect, that is, the second stage. The method may include:
S501, acquiring the content to be identified input by the user in a first interface of the client.
The first interface corresponds to a first category of AI model, and the first category of AI model is included in the plurality of categories of AI models. For example, the first interface may be the interface shown in FIG. 2 or FIG. 3.
Optionally, the first interface of the client may include at least one of the following display contents: an input area for the content to be identified, a recognition result display area, a button for initiating a recognition request, and the like.
Optionally, the content to be identified may be any one of the following: text content, image content, voice content, and the like. Specifically, the text content may be a character string directly input in the input box; the image content may be a picture directly input in the input box, or a link (storage address) of an image; and the voice content may be speech directly input in the input box, or a link (storage address) of the speech.
Content to be identified of different categories obtains its recognition result based on the AI model of the corresponding category, and the client interfaces displaying AI models of different categories may be different.
In one possible implementation, after the target model effect display service corresponding to each category of AI model is released online, the user can use the target model effect display service corresponding to any category of AI model to view the display effect of that category of AI model. For example, the user may input the content to be identified in the interface of the client corresponding to that category of AI model, such as the first interface. After the user inputs the content to be identified in the first interface and, for example, triggers the button for initiating a recognition request, the content to be identified input by the user can be obtained from the first interface of the client for subsequent processing.
In one possible implementation, after the content to be identified is obtained, further conversion of the content to be identified into a language form (e.g., machine language) readable by the AI model of the first category is required.
S502, inputting the content to be identified into the AI model of the first category based on the target model effect display service corresponding to the AI model of the first category so as to obtain an identification result corresponding to the content to be identified, and displaying the identification result in a first interface of the client.
In one possible implementation manner, the first class AI model may be called based on the target model effect display service corresponding to the first class AI model, so as to identify and analyze the content to be identified through the first class AI model, and obtain an identification result corresponding to the content to be identified.
In one possible implementation manner, after the identification result corresponding to the content to be identified is obtained, the identification result corresponding to the content to be identified can be sent to a first interface of the client through a target model effect display service corresponding to the AI model of the first category and displayed, so that the identification result of the content to be identified is displayed to the user, and the user can check the model effect of the AI model of the first category through the interface of the client.
It can be understood that after configuring the pre-created model effect display service to obtain the target model effect display service and releasing the target model effect display service on line, a user may input the content to be identified in the interface of the client, click the initiate identification request button, trigger the target model effect display service to call the corresponding AI model, identify and analyze the content to be identified to obtain the identification result corresponding to the content to be identified, and then display the identification result in the interface of the client to display the model effect of the corresponding AI model.
In the embodiment of the disclosure, when a user uses a target model effect display service, content to be identified, which is input by the user in a first interface of a client corresponding to a first type of AI model, may be acquired, and the content to be identified is input into the first type of AI model based on the target model effect display service corresponding to the first type of AI model, so as to acquire an identification result corresponding to the content to be identified, and is displayed in the first interface of the client. When the user uses the interfaces in the clients corresponding to the AI models of different categories, the content to be identified input by the user can be input into the AI model corresponding to the interface, and the corresponding identification result is obtained. Thereby improving the efficiency of displaying the model effect.
As shown in FIG. 6, another general model effect display method provided by an embodiment of the present disclosure covers the stage of displaying an AI model effect, that is, the second stage. The method may include:
S601, acquiring the content to be identified input by the user in the first interface of the client.
Note that, the specific description of S601 is the same as S501, and is not described here again.
S602, generating a request message based on the configuration information of the request body format type, the configuration information of the request body template and the configuration information of the request body dynamic parameters.
The request message comprises the content to be identified and authentication information.
In one possible implementation, after the content to be identified input in the first interface of the client is acquired, the content to be identified may be nested into the corresponding request body template according to the configuration information of the request body format type, the configuration information of the request body template and the configuration information of the request body dynamic parameters, so as to generate the corresponding request body (also referred to as the request message).
Optionally, after generating the corresponding request message, the target model effect display service may request invocation of the AI model of the corresponding category through the request message.
S603, authenticating the request message based on the authentication information.
In one possible implementation manner, since the corresponding authentication information is added to the request header of the request message (see the corresponding content in S404 for a specific description), it can be determined whether the authentication information carried in the request message is consistent with the authentication information of the AI model, so as to authenticate the request message and determine the validity of the access.
Optionally, the AI model is allowed to be invoked only when the authentication information included in the request header of the request message passes authentication; otherwise, the AI model is not allowed to be invoked.
S604, after authentication is successful, calling the AI model of the first category based on the request message and the configuration information of the model interface request address so as to input the content to be identified into the AI model of the first category.
In one possible implementation, the AI models of the first category may be accessed based on configuration information of the model interface request address to forward the request message to the AI models of the first category via the model interface request address.
In one possible implementation manner, after the request message is forwarded to the AI model of the first class through the model interface request address, the AI model of the first class may obtain the content to be identified carried in the request message, and perform identification analysis processing on the content to be identified, so as to obtain a corresponding identification result.
In one possible implementation manner, after the AI model of the first class obtains the identification result corresponding to the content to be identified, the identification result may be carried in the response message and returned to the server of the target model effect display service.
S605, acquiring a response message returned by the AI model of the first category.
The response message comprises a recognition result corresponding to the content to be recognized.
S606, determining the recognition result corresponding to the content to be identified from the response message based on the configuration information of the return body format type, the configuration information of the return body template and the configuration information of the return body dynamic parameters, and displaying the recognition result in the first interface of the client.
In one possible implementation, the server of the target model effect display service receives the response message carrying the recognition result returned by the AI model of the first category, and determines the recognition result corresponding to the content to be identified from the response message (which may also be referred to as the return body) based on the configuration information of the return body format type, the configuration information of the return body template and the configuration information of the return body dynamic parameters, so that the recognition result is displayed in the first interface of the client.
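Pulling S602 to S606 together, one possible round trip might look like the sketch below; it is an assumption-laden illustration (placeholder URL, assumed parameter names, the third-party requests library) rather than the service code of the disclosure.

    import copy
    import requests

    def recognize(content_to_identify: str, params: dict) -> str:
        # S602: generate the request message by filling the request body template at the
        # dot path given by the request body dynamic parameter, e.g. "requestBody.name".
        body = copy.deepcopy(params["request_body_template"])
        node = body
        keys = params["request_body_dynamic_param"].split(".")
        for key in keys[:-1]:
            node = node.setdefault(key, {})
        node[keys[-1]] = content_to_identify

        # S603: the authentication information travels in the request header.
        headers = {"Authorization": params["auth_info"]}

        # S604: call the AI model of the first category via the model interface request address.
        response = requests.post(params["request_address"], json=body,
                                 headers=headers, timeout=10)

        # S605/S606: take the recognition result out of the response message by the
        # return body dynamic parameter, e.g. "result.shortName".
        result = response.json()
        for key in params["return_body_dynamic_param"].split("."):
            result = result[key]
        return result  # displayed in the first interface of the client

    # Hypothetical configuration for one text recognition model (all values are placeholders).
    example_params = {
        "request_address": "http://example.invalid/text-model",
        "auth_info": "verification-code-123",
        "request_body_template": {"requestBody": {"name": ""}},
        "request_body_dynamic_param": "requestBody.name",
        "return_body_dynamic_param": "result.shortName",
    }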
In the embodiments of the present disclosure, after the content to be identified input in the first interface of the client is obtained, a request message carrying the content to be identified and the authentication information can be generated based on the pre-configured configuration information of the request body format type, the request body template and the request body dynamic parameters, and the request message is authenticated based on the authentication information; after authentication succeeds, the AI model of the first category is called based on the request message and the configuration information of the model interface request address, so that the content to be identified is input into the AI model of the first category. The AI model of the first category performs recognition analysis on the content to be identified to obtain the corresponding recognition result, and returns a response message carrying that recognition result. Further, based on the configuration information of the return body format type, the return body template and the return body dynamic parameters, the recognition result corresponding to the content to be identified can be determined from the response message. With this method, the AI model to be called can be determined according to the interface in which the user inputs the content to be identified, which improves the efficiency of generating the recognition result corresponding to the content to be identified.
FIG. 7 illustrates the implementation logic corresponding to a general model effect display method provided by an embodiment of the present disclosure. After the model effect display service is created, the model category to be configured must first be determined, so that the display effect of the client display interface is determined based on that model category and the corresponding front-end style code is generated by a page effect configuration module (the front-end style codes corresponding to different categories of AI models are different, and so are the interface effects). Next, the interface parameters of any category of AI model are obtained (i.e., the personalized interface parameters, including the model interface request address, authentication information, request body format type, return body format type, request body template, return body template, request body dynamic parameters, return body dynamic parameters, etc.), and the pre-created model effect display service is configured according to the configuration information of these interface parameters. Specifically, after the configuration information of the interface parameters is configured into the pre-created model effect display service, the server can generate the model interface request code through the interface request configuration module and the result analysis configuration module in the background configuration module, thereby obtaining the target model effect display service corresponding to that category of AI model.
Further, when the user needs to view, through the target model effect display service, the recognition result of the AI model for the content to be identified, the user can input the content to be identified in the interface of the client. The server obtains the content to be identified through the interface of the client (i.e., the front-end style code) and initiates a request to the target model effect display service (the model interface request code), so that a request message is generated based on the target model effect display service corresponding to that category of AI model, the configuration information of the request body format type, the configuration information of the request body template and the configuration information of the request body dynamic parameters. After the request message is successfully authenticated through the authentication information, a call request is initiated to the AI model through the configuration information of the model interface request address so as to input the content to be identified into the AI model. The AI model generates the corresponding recognition result based on the content to be identified and returns a response message including the recognition result to the target model effect display service (the model interface request code); the target model effect display service parses the recognition result out of the response message based on the configuration information of the return body format type, the return body template and the return body dynamic parameters, and sends it to the front-end display code to be displayed in the interface of the client for the user to view.
Through flexible configuration of the interface request message and layered marking of the dynamic parameters of the AI model request body and return body, the method and the device enable the client interface to flexibly call the interfaces of various AI models and display the recognition results of the AI models. This configuration-and-marking-based AI model effect display approach saves a great deal of customized development work caused by AI model interface differences and greatly shortens the time needed to bring an AI model effect display service online. It can meet the effect display requirements of different categories of models without repeated development and customization, effectively reducing the development cost of visual model display. In scenarios where a large number of model effects need to be displayed, it can speed up the release of model effect displays and improve customer satisfaction. Moreover, when a model is iterated and changed, the model interface can be adapted directly by modifying the configured parameter information, which is more flexible and efficient.
Based on the above technical solution, the present disclosure can configure the pre-created model effect display service based on the interface parameters of artificial intelligence AI models of a plurality of categories to obtain the target model effect display service corresponding to each category of AI model, so that the target model effect display service corresponding to each category of AI model displays, in the interface of the client, the model effect of the AI model of that category based on the content to be identified input by the user. Thus, when the user inputs content to be identified in the interface of the client, the model effect can be displayed through the target model effect display service corresponding to the AI model of the corresponding category. In this way, for AI models of different categories, the pre-created model effect display service can be configured according to their interface parameters to generate the target model effect display service corresponding to each category of AI model, without carrying out separate customized development work for each AI model to adapt to its personalized service interface and develop a dedicated front-end display interface for displaying its model effect. This improves the efficiency of obtaining dedicated model effect display services corresponding to different categories of AI models and reduces the development workload.
The foregoing description of the embodiments of the present disclosure has been presented primarily in terms of computer apparatus. It will be appreciated that the computer device, in order to carry out the functions described above, comprises corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative method steps described in connection with the embodiments disclosed herein may be implemented as hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments of the present disclosure, the functional modules or functional units used to implement the general model effect display method may be divided according to the above method examples. For example, each functional module or functional unit may be divided to correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware, or as a software functional module or functional unit. The division of modules or units in the embodiments of the present disclosure is merely a logical function division; other division manners may be adopted in actual implementation.
Fig. 8 is a schematic structural diagram of a general model effect display device according to an embodiment of the disclosure. The general model effect display device may include: a creation unit 801, an acquisition unit 802, and a processing unit 803.
A creating unit 801, configured to create a model effect display service, where the model effect display service is used to display a model effect in an interface of a client; an obtaining unit 802, configured to obtain interface parameters of a plurality of classes of artificial intelligence AI models, where the interface parameters of the AI models are used to invoke the AI models; the processing unit 803 is configured to configure the model effect display service according to the interface parameters of the AI model for each type of AI model, so as to obtain a target model effect display service corresponding to each type of AI model; the target model effect display service is used for displaying model effects of AI models of corresponding categories in an interface of the client based on the content to be identified input by the user.
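As a rough illustration of how these three units could cooperate, the sketch below models the creation, acquisition, and processing units in code; the class names, method names, and registry format are assumptions for this example only, since the disclosure does not prescribe an API.

```python
# A minimal sketch of the three units (all names are illustrative assumptions).
from dataclasses import dataclass, field

@dataclass
class ModelEffectDisplayService:
    """Generic service that displays a model's recognition result in a client interface."""
    interface_params: dict = field(default_factory=dict)

class GeneralModelEffectDisplayDevice:
    def __init__(self):
        self.target_services = {}  # category -> configured target service

    # creating unit 801: create the (not yet configured) model effect display service
    def create_service(self):
        return ModelEffectDisplayService()

    # acquiring unit 802: obtain interface parameters for each AI model category
    def acquire_interface_params(self, registry):
        return registry  # e.g. {"ocr": {...}, "text_classification": {...}}

    # processing unit 803: configure one target service per category
    def configure(self, registry):
        for category, params in self.acquire_interface_params(registry).items():
            service = self.create_service()
            service.interface_params = params
            self.target_services[category] = service
        return self.target_services
```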
Optionally, interfaces that exhibit model effects for different classes of AI models are different.
Optionally, the acquiring unit 802 is further configured to acquire configuration information of the interface parameter; the processing unit 803 is further configured to configure configuration information of the interface parameters into a model effect exhibition service.
Optionally, the acquiring unit 802 is further configured to acquire content to be identified input by a user in a first interface of the client, where the first interface corresponds to a first class of AI models, and the first class of AI models is included in the plurality of classes of AI models; the processing unit 803 is further configured to input the content to be identified into the AI model of the first category based on the target model effect display service corresponding to the AI model of the first category, so as to obtain an identification result corresponding to the content to be identified, and display the identification result in the first interface of the client.
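A brief usage-level sketch of this optional flow is given below; the `category`, `recognize`, and `render` members are assumed helpers for illustration and are not named in the disclosure.

```python
# A minimal usage sketch; all attribute and method names are assumptions.
def on_user_input(first_interface, content_to_identify, target_services):
    """Handle content entered by the user in the first interface of the client."""
    # The first interface corresponds to the first category of AI model.
    service = target_services[first_interface.category]
    result = service.recognize(content_to_identify)   # call the first-category AI model
    first_interface.render(result)                    # display the recognition result in the first interface
```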
Optionally, the interface parameters include at least one of: the model interface requests an address, authentication information, a request body format type, a return body format type, a request body template, a return body template, a request body dynamic parameter, and a return body dynamic parameter.
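For concreteness, one possible configuration of these interface parameters for a single AI model category is sketched below; every field name, URL, and token is a placeholder assumption, chosen to be consistent with the earlier sketch.

```python
# Illustrative interface-parameter configuration for one AI model category.
INTERFACE_PARAMS = {
    "model_interface_request_address": "https://example.com/api/v1/text-classify",
    "authentication_information": {"type": "api_key", "header": "X-Api-Key", "value": "<token>"},
    "request_body_format_type": "json",
    "return_body_format_type": "json",
    "request_body_template": {"input": {"text": None}, "options": {"top_k": 1}},
    "return_body_template": {"result": {"items": [{"label": None, "score": None}]}},
    "request_body_dynamic_parameter": "input.text",           # where the content to be identified is written
    "return_body_dynamic_parameter": "result.items.0.label",  # where the recognition result is read
}
```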
Optionally, the processing unit 803 is further configured to generate a request message based on the configuration information of the request body format type, the configuration information of the request body template, and the configuration information of the request body dynamic parameter, where the request message includes the content to be identified and the authentication information; the processing unit 803 is further configured to authenticate the request message based on the authentication information; the processing unit 803 is further configured to invoke the AI model of the first category based on the request message and the configuration information of the model interface request address after authentication succeeds, so as to input the content to be identified into the AI model of the first category; the obtaining unit 802 is further configured to obtain a response message returned by the AI model of the first category, where the response message includes a recognition result corresponding to the content to be recognized; the processing unit 803 is further configured to determine, from the response message, the recognition result corresponding to the content to be recognized based on the configuration information of the return body format type, the configuration information of the return body template, and the configuration information of the return body dynamic parameter.
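The sketch below strings these steps together for illustration, reusing the illustrative `INTERFACE_PARAMS`, `set_by_path`, and `get_by_path` from the earlier sketches; the simplified authentication check and the use of the third-party `requests` client are assumptions made for this example, not part of the disclosure.

```python
# A minimal sketch of the request/response flow described above.
import copy
import requests  # third-party HTTP client, used here only for illustration

def recognize(params, content_to_identify):
    # 1. Generate the request message from the request body format type,
    #    request body template, and request body dynamic parameter.
    body = set_by_path(copy.deepcopy(params["request_body_template"]),
                       params["request_body_dynamic_parameter"],
                       content_to_identify)
    auth = params["authentication_information"]
    headers = {auth["header"]: auth["value"], "Content-Type": "application/json"}

    # 2. Authenticate the request message (here simplified to requiring a credential).
    if not auth.get("value"):
        raise PermissionError("authentication failed: missing credential")

    # 3. After authentication succeeds, call the first-category AI model at the
    #    configured model interface request address.
    resp = requests.post(params["model_interface_request_address"],
                         json=body, headers=headers, timeout=10)

    # 4. Obtain the response message returned by the AI model.
    response_body = resp.json()

    # 5. Determine the recognition result from the response message using the
    #    return body template and return body dynamic parameter.
    return get_by_path(response_body, params["return_body_dynamic_parameter"])
```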
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 9 shows a schematic block diagram of an example electronic device 900 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in Fig. 9, the electronic device 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the electronic device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
A number of components in the electronic device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, or the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, an optical disk, or the like; and a communication unit 909 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 909 allows the electronic device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the respective methods and processes described above, such as a general model effect presentation method. For example, in some embodiments, the general model effect presentation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the above-described general model effect presentation method may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the generic model effect presentation method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (15)

1. A universal model effect display method, comprising:
creating a model effect display service, wherein the model effect display service is used for displaying model effects in an interface of a client;
acquiring interface parameters of a plurality of classes of artificial intelligence AI models, wherein the interface parameters of the AI models are used for calling the AI models;
configuring the model effect display service according to interface parameters of the AI model aiming at the AI model of each category to obtain a target model effect display service corresponding to the AI model of each category; the target model effect display service is used for displaying model effects of AI models of corresponding categories in an interface of the client based on the content to be identified input by the user.
2. The method of claim 1, wherein interfaces exhibiting model effects for AI models of different categories are different.
3. The method of claim 1 or 2, wherein configuring the model effect presentation service according to interface parameters of the AI model comprises:
acquiring configuration information of the interface parameters;
and configuring the configuration information of the interface parameters into the model effect display service.
4. A method according to claim 2 or 3, wherein the method further comprises:
acquiring content to be identified input by a user in a first interface of the client, wherein the first interface corresponds to a first class of AI models, and the first class of AI models are included in the plurality of classes of AI models;
and inputting the content to be identified into the AI model of the first category based on the target model effect display service corresponding to the AI model of the first category so as to obtain an identification result corresponding to the content to be identified, and displaying the identification result in a first interface of the client.
5. The method of any of claims 1-4, wherein the interface parameters include at least one of: the model interface requests an address, authentication information, a request body format type, a return body format type, a request body template, a return body template, a request body dynamic parameter, and a return body dynamic parameter.
6. The method of claim 5, wherein inputting the content to be identified into the AI model of the first category based on a target model effect presentation service corresponding to the AI model of the first category to obtain an identification result corresponding to the content to be identified comprises:
generating a request message based on the configuration information of the format type of the request body, the configuration information of the template of the request body and the configuration information of the dynamic parameters of the request body, wherein the request message comprises the content to be identified and the authentication information;
authenticating the request message based on the authentication information;
after authentication is successful, calling the AI model of the first category based on the request message and the configuration information of the model interface request address so as to input the content to be identified into the AI model of the first category;
acquiring a response message returned by the AI model of the first category, wherein the response message comprises a recognition result corresponding to the content to be recognized;
and determining the identification result corresponding to the content to be identified from the response message based on the configuration information of the return body format type, the configuration information of the return body template, and the configuration information of the return body dynamic parameter.
7. A universal model effect display device, comprising:
the system comprises a creation unit, a display unit and a display unit, wherein the creation unit is used for creating a model effect display service, and the model effect display service is used for displaying model effects in an interface of a client;
the system comprises an acquisition unit, a calculation unit and a calculation unit, wherein the acquisition unit is used for acquiring interface parameters of a plurality of classes of artificial intelligence AI models, and the interface parameters of the AI models are used for calling the AI models;
the processing unit is used for configuring the model effect display service according to the interface parameters of the AI model aiming at the AI model of each category so as to obtain a target model effect display service corresponding to the AI model of each category; the target model effect display service is used for displaying model effects of AI models of corresponding categories in an interface of the client based on the content to be identified input by the user.
8. The general model effect presentation apparatus of claim 7, wherein interfaces presenting model effects of AI models of different categories are different.
9. The general model effect display device according to claim 7 or 8, wherein,
the acquisition unit is further used for acquiring configuration information of the interface parameters;
the processing unit is further configured to configure configuration information of the interface parameters into the model effect display service.
10. The universal model effect display device according to claim 8 or 9, wherein,
the acquiring unit is further configured to acquire content to be identified, where the content is input by a user in a first interface of the client, and the first interface corresponds to a first class of AI models, where the first class of AI models is included in the plurality of classes of AI models;
the processing unit is further configured to input the content to be identified into the AI model of the first class based on a target model effect display service corresponding to the AI model of the first class, so as to obtain an identification result corresponding to the content to be identified, and display the identification result in a first interface of the client.
11. The universal model effect display device of any one of claims 7-10, wherein the interface parameters include at least one of: the model interface requests an address, authentication information, a request body format type, a return body format type, a request body template, a return body template, a request body dynamic parameter, and a return body dynamic parameter.
12. The universal model effect display device according to claim 11, wherein,
the processing unit is further configured to generate a request message based on the configuration information of the request format type, the configuration information of the request template, and the configuration information of the request dynamic parameter, where the request message includes the content to be identified and the authentication information;
The processing unit is further used for authenticating the request message based on the authentication information;
the processing unit is further configured to invoke the AI model of the first class based on the request message and the configuration information of the model interface request address after authentication is successful, so as to input the content to be identified into the AI model of the first class;
the acquiring unit is further configured to acquire a response message returned by the AI model of the first class, where the response message includes an identification result corresponding to the content to be identified;
the processing unit is further configured to determine, from the response message, an identification result corresponding to the content to be identified based on the configuration information of the returned body format type, the configuration information of the returned body template, and the configuration information of the returned body dynamic parameter.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-6.
CN202310185117.0A 2023-02-24 2023-02-24 Universal model effect display method, device, equipment and storage medium Pending CN116225424A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310185117.0A CN116225424A (en) 2023-02-24 2023-02-24 Universal model effect display method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310185117.0A CN116225424A (en) 2023-02-24 2023-02-24 Universal model effect display method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116225424A true CN116225424A (en) 2023-06-06

Family

ID=86572755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310185117.0A Pending CN116225424A (en) 2023-02-24 2023-02-24 Universal model effect display method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116225424A (en)

Similar Documents

Publication Publication Date Title
US20210075749A1 (en) Intelligent, adaptable, and trainable bot that orchestrates automation and workflows across multiple applications
US11132114B2 (en) Method and apparatus for generating customized visualization component
CN115485690A (en) Batch technique for handling unbalanced training data of chat robots
US8539514B2 (en) Workflow integration and portal systems and methods
US10331765B2 (en) Methods and apparatus for translating forms to native mobile applications
US20210208854A1 (en) System and method for enhancing component based development models with auto-wiring
CN103842988A (en) Network-based custom dictionary, auto-correction and text entry preferences
CN113268336B (en) Service acquisition method, device, equipment and readable medium
CN113393553A (en) Method and device for generating flow chart and electronic equipment
JP2023551325A (en) Method and system for overprediction in neural networks
CN111857674A (en) Business product generation method and device, electronic equipment and readable storage medium
CN110070394A (en) Data processing method, system, medium and calculating equipment
US20180121441A1 (en) Accessing application services from forms
CN111078202A (en) Service architecture model maintenance method, device, electronic equipment and medium
CN110889670A (en) Service approval system, method and device and computer readable storage medium
CN116225424A (en) Universal model effect display method, device, equipment and storage medium
CN115033233A (en) Interface calling method and device, electronic equipment and storage medium
CN106998350B (en) Method and system for using frame based on function item of cross-user message
US20220284371A1 (en) Method, device and medium for a business function page
CN112231336B (en) Method and device for identifying user, storage medium and electronic equipment
CN112732547B (en) Service testing method and device, storage medium and electronic equipment
CN112669000A (en) Government affair item processing method and device, electronic equipment and storage medium
CN112560462B (en) Event extraction service generation method, device, server and medium
CN113157360B (en) Method, apparatus, device, medium, and article for processing an API
CN113128187B (en) Form generation method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination