CN117785149A - Application generation method, related device, equipment and storage medium - Google Patents


Info

Publication number
CN117785149A
Authority
CN
China
Prior art keywords
text
intention
target application
application
detection result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311748550.7A
Other languages
Chinese (zh)
Inventor
王晨阳
王龙生
金晖
王瑞
徐甲甲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lingyang Industrial Internet Co ltd
Original Assignee
Lingyang Industrial Internet Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lingyang Industrial Internet Co ltd filed Critical Lingyang Industrial Internet Co ltd
Priority to CN202311748550.7A priority Critical patent/CN117785149A/en
Publication of CN117785149A publication Critical patent/CN117785149A/en
Pending legal-status Critical Current

Landscapes

  • Machine Translation (AREA)

Abstract

The present application discloses an application generation method, together with a related apparatus, device, and storage medium. The application generation method comprises: acquiring a user description text of a target application; extracting, based on the user description text, keyword information and the logical relations between the elements involved in the target application; generating an intention detail text based on the keyword information and the logical relations; determining a target engine matched with the intention detail text and generating structured data from the intention detail text; and generating the target application based on the target engine and the structured data. By means of this scheme, the technical threshold of application generation can be lowered as far as possible and the generation efficiency of the target application improved.

Description

Application generation method, related device, equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to an application generating method, and related apparatus, device, and storage medium.
Background
With the continuous development of information technology, users' application requirements are increasingly met by building Web applications.
In the prior art, a customized application meeting user requirements is generally built from professional programming-language input provided by the user. On the one hand, this sets a high technical threshold for application generation; on the other hand, the efficiency of application generation is low. In view of this, how to lower the technical threshold of application generation as far as possible while improving the generation efficiency of the target application is a problem to be solved.
Disclosure of Invention
The main technical problem addressed by the present application is to provide an application generation method, and a related apparatus, device, and storage medium, which can lower the technical threshold of application generation as far as possible and improve the generation efficiency of the target application.
In order to solve the above technical problem, a first aspect of the present application provides an application generation method, including: acquiring a user description text of a target application; extracting, based on the user description text, keyword information and the logical relations between the elements involved in the target application; generating an intention detail text based on the keyword information and the logical relations; determining a target engine matched with the intention detail text and generating structured data from the intention detail text; and generating the target application based on the target engine and the structured data.
In order to solve the above technical problem, a second aspect of the present application provides an application generating apparatus, including an acquisition module, an extraction module, a recognition module, a determination module, and a generation module. The acquisition module is configured to acquire a user description text of a target application; the extraction module is configured to extract, based on the user description text, keyword information and the logical relations between the elements involved in the target application; the recognition module is configured to generate an intention detail text based on the keyword information and the logical relations; the determination module is configured to determine a target engine matched with the intention detail text and generate structured data from the intention detail text; and the generation module is configured to generate the target application based on the target engine and the structured data.
In order to solve the above technical problem, a third aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, where the memory stores program instructions, and the processor is configured to execute the program instructions to implement the application generating method in the first aspect.
In order to solve the above technical problem, a fourth aspect of the present application provides a computer readable storage medium storing program instructions executable by a processor, where the program instructions are configured to implement the application generating method of the first aspect.
According to the above scheme, after the user description text of the target application is obtained, keyword information and the logical relations between the elements involved in the target application are extracted from the user description text in order to understand it. An intention detail text is then generated based on the keyword information and the logical relations; a target engine matched with the intention detail text is determined and structured data is generated from the intention detail text; and the target application is generated based on the target engine and the structured data. In this way, an intention detail text that is as informative as possible is produced from the user description text, and the auxiliary information required for automatically generating the target application is provided by that text, so that the technical threshold of application generation can be lowered as far as possible and the generation efficiency of the target application improved.
Drawings
FIG. 1 is a flow diagram of one embodiment of an application generation method of the present application;
FIG. 2 is a flow diagram of another embodiment of the application generation method of the present application;
FIG. 3 is a schematic diagram of a framework of an embodiment of the application generating apparatus of the present application;
FIG. 4 is a schematic diagram of a framework of an embodiment of the electronic device of the present application;
FIG. 5 is a schematic diagram of a framework of one embodiment of a computer readable storage medium of the present application.
Detailed Description
The following describes the embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects are in an "or" relationship. Further, "a plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a flowchart of an embodiment of an application generating method of the present application.
Specifically, the method may include the steps of:
step S10: and acquiring the user description text of the target application.
In the embodiment of the present disclosure, the user description text is determined based on a target instruction input by the user; the type of the target instruction may be, for example, text, audio, or image, and is not limited here.
In one implementation scenario, the user description text is a natural language text used to express the design requirements for the target application. Natural language is the natural, non-formalized language system used by humans, serving for communication, expressing views, conveying information, and the like. It should be noted that the language of the user description text is not limited in this application; it may be Chinese, English, French, etc.
In one implementation scenario, when the type of the target instruction is audio, the user description text is obtained by recognizing the audio instruction. Specifically, audio preprocessing is performed on the audio instruction, and audio recognition is then performed on the preprocessed audio signal to obtain the user description text, which improves the accuracy of the obtained user description text.
In a specific implementation scenario, the audio preprocessing includes converting the collected audio instruction into a digital signal and performing operations such as noise reduction, enhancement, and normalization on the converted signal, so that noise and interference in the audio instruction are reduced as far as possible and its effective features are highlighted. This improves the accuracy of the subsequent conversion of the audio instruction into the user description text. For the specific principles, reference may be made to further technical details of audio processing, which are not repeated here.
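As an illustration of the normalization operation mentioned above only (the function name and the assumption of floating-point samples in [-1.0, 1.0] are not part of the original disclosure), peak normalization can be sketched as:

```python
def peak_normalize(samples, target_peak=0.99):
    """Scale a list of audio samples so the loudest sample reaches target_peak.

    Samples are assumed to be floats in [-1.0, 1.0]; silent or empty input
    is returned unchanged to avoid division by zero.
    """
    peak = max(abs(s) for s in samples) if samples else 0.0
    if peak == 0.0:
        return list(samples)
    scale = target_peak / peak
    return [s * scale for s in samples]


# Example: a quiet signal is scaled up so its peak reaches 0.99.
quiet = [0.1, -0.25, 0.2]
loudened = peak_normalize(quiet)
```

Noise reduction and enhancement would be applied analogously as further per-sample or spectral transforms before recognition.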
In a specific implementation scenario, as one possible implementation, an audio recognition model may be trained in advance; the audio recognition model may include, but is not limited to, a network model with an Encoder-Decoder architecture. After the audio signal produced by preprocessing the audio instruction is obtained, it is input into the audio recognition model, and the output of the model is taken as the user description text. To ensure the recognition accuracy of the audio recognition model as far as possible, sample audio instructions may be collected and annotated with real description texts. The sample audio instructions are then processed by the audio recognition model to obtain predicted description texts, and the network parameters of the model are adjusted based on the differences between the annotated description texts and the predicted description texts until training converges; the converged model can then process audio instructions to obtain their user description texts. For the specific processing procedure of the audio recognition model, reference may be made to technical details of Encoder-Decoder network models, which are not repeated here. In this way, a target instruction whose type is audio is recognized by the trained audio recognition model, so that, while making instruction input more convenient for the user, the accuracy of the acquired user description text is improved as far as possible, thereby improving the generation quality of the target application.
In another implementation scenario, when the type of the target instruction is image, the user description text is obtained by recognizing the image instruction. Specifically, image preprocessing is performed on the image instruction, and text recognition is then performed on the preprocessed image data to obtain the user description text, which improves the accuracy of the obtained user description text.
In a specific implementation scenario, the image preprocessing includes operations such as denoising, enhancement, and segmentation, so that noise and interference in the image instruction are reduced as far as possible and its effective features are highlighted, improving the accuracy of image text recognition. For the specific principles, reference may be made to further technical details of image processing, which are not repeated here.
In a specific implementation scenario, as one possible implementation, an image recognition model may be trained in advance. The image recognition model is built on technologies such as OCR (Optical Character Recognition) and may include, but is not limited to, the DB (Differentiable Binarization) algorithm, a network model with a CRNN (Convolutional Recurrent Neural Network) architecture, and the like. After the image data produced by preprocessing the image instruction is obtained, it is input into the image recognition model, and the output of the model is taken as the user description text. To ensure the recognition accuracy of the image recognition model as far as possible, sample image instructions may be collected and annotated with real description texts. The sample image instructions are then processed by the image recognition model to obtain predicted description texts, and the network parameters of the model are adjusted based on the differences between the annotated description texts and the predicted description texts until training converges; the converged model can then process image instructions to obtain their user description texts. For the specific processing procedure of the image recognition model, reference may be made to technical details of OCR, which are not repeated here.
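For context on the DB algorithm mentioned above: its core idea is to replace the hard binarization of the text-probability map with a differentiable approximation so that the threshold can be learned end to end. A minimal sketch of that approximation (the amplifying factor k = 50 follows the published DB formulation; this is background, not a claim of the patent):

```python
import math

def db_binarize(prob, thresh, k=50.0):
    """Approximate (differentiable) binarization used by the DB algorithm:
    B = 1 / (1 + exp(-k * (P - T))), where P is the predicted text
    probability for a pixel, T the learned threshold, and k an amplifying
    factor that sharpens the transition.
    """
    return 1.0 / (1.0 + math.exp(-k * (prob - thresh)))

# Pixels well above the threshold map close to 1, well below close to 0.
on = db_binarize(0.9, 0.3)   # near 1
off = db_binarize(0.1, 0.3)  # near 0
```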
In this way, a target instruction whose type is image is recognized by the trained image recognition model, so that, while making instruction input more convenient for the user, the accuracy of the acquired user description text is improved as far as possible, thereby improving the generation quality of the target application.
Step S20: based on the user description text, keyword information and logical relations between various elements involved in the target application are extracted.
In one implementation scenario, keyword information is extracted from the user description text, screening out its effective information so as to improve the quality of the generated intention detail text. It should be noted that the method of extracting the keyword information is not limited in this application; for example, it may be extracted on the basis of text triples, among other approaches not enumerated here.
In a specific implementation scenario, the keyword information in the user description text is extracted based on preset text attribute tags, which may include "task type" and "task content". For example, if the user description text is "create an application that can be used for leave requests; the application includes a form, and the form fields include the leave applicant and the leave time", the text is recognized against the preset text attribute tags: the keyword information corresponding to the "task type" tag yields the feature data for "create a leave application", and the keyword information corresponding to the "task content" tag yields the feature data for "the form fields include the leave applicant and the leave time".
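One minimal way to realize tag-based extraction is pattern matching per attribute tag. The patterns below are hypothetical stand-ins (the patent does not specify how the preset text attribute tags are matched); a production system would likely use a trained extractor instead:

```python
import re

# Hypothetical patterns for the two preset text attribute tags.
TAG_PATTERNS = {
    "task type": re.compile(r"\b(create|modify)\b[^.;]*application", re.I),
    "task content": re.compile(r"form fields? includes? [^.;]+", re.I),
}

def extract_keywords(description):
    """Return {tag: matched keyword text} for each preset attribute tag."""
    found = {}
    for tag, pattern in TAG_PATTERNS.items():
        m = pattern.search(description)
        if m:
            found[tag] = m.group(0)
    return found

text = ("create an application that can be used for leave requests; "
        "the form fields include the leave applicant and the leave time")
info = extract_keywords(text)
```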
In one implementation scenario, the logical relations between the elements involved in the target application are obtained from the user description text. For example, for the user description text "please help me create a leave application; the form fields include: leave applicant, leave time, leave reason; the approval process is: after approval by the applicant's direct superior, approval by the personnel manager", the extracted elements include the leave application, the form fields, the leave applicant, the leave time, the leave reason, the approval process, the applicant's direct superior, and the personnel manager. The logical relations include: the leave application comprises the form and the approval process; the form fields comprise the leave applicant, the leave time, and the leave reason; the approval process comprises the applicant's direct superior and the personnel manager, with a sequential (progressive) relation between them; and so on. In this way, the logical relations between the elements involved in the target application provide auxiliary information for generating the intention detail text, and the design requirements of the target application are represented through the elements and the logical relations.
In a specific implementation scenario, after the logical relations between the elements involved in the target application are obtained, a relation chain between the elements is built based on the logical relations, or the elements are annotated with relation labels, so as to improve the generation efficiency of the intention detail text.
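The relation chain can be realized, for instance, as an adjacency map over (parent, child) element pairs. This is a sketch under that assumption; the patent does not fix a concrete data structure:

```python
from collections import defaultdict

def build_relation_chain(relations):
    """Build an adjacency map from (parent, child) element pairs,
    preserving the order in which children were listed."""
    chain = defaultdict(list)
    for parent, child in relations:
        chain[parent].append(child)
    return dict(chain)

# Element relations from the leave-application example above.
relations = [
    ("leave application", "form"),
    ("leave application", "approval process"),
    ("form", "leave applicant"),
    ("form", "leave time"),
    ("form", "leave reason"),
    ("approval process", "direct superior"),
    ("approval process", "personnel manager"),
]
chain = build_relation_chain(relations)
```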
In a specific implementation scenario, the user description text is input into a text generation model, and the output of the model is taken as the intention detail text. The text generation model is a large language model, which can understand natural language input from humans and, from the input text, obtain the keyword information and the logical relations between the elements so as to produce semantically relevant output. For the specific principles, reference may be made to further technical details of large language models, which are not repeated here. It should be noted that the large language model may include, but is not limited to, open-source large models such as LLaMA and BLOOM; the network architecture of the large language model is not limited here.
Step S30: intent detail text is generated based on the keyword information and the logical relationship.
In one implementation scenario, the intention detail text is a natural language text. For example, for the user description text "please help me modify the leave application: add a leave type to the form fields, and change the approver to a self-selected approver, where the approvers include the applicant's direct superior", the intention detail text generated from the keyword information and the logical relations in the description text is: the application named "leave application" needs to be updated, and the content to be updated comprises two parts. The first part is to add a self-selected-approver field to the approval form; this field needs to obtain the organizational structure of the current user. The second part is that the approval process needs to insert a "self-selected approver" node before the "applicant's direct superior" node of the original approval template. In this way, by understanding the user description text, an intention detail text that is as informative as possible is generated from it; the intention detail text can provide the auxiliary information required for automatically generating the target application, so that the technical threshold of application generation can be lowered as far as possible and the generation efficiency of the target application improved.
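A toy illustration of assembling such an intention detail text from the extracted parts (the template wording and function signature are assumptions for illustration only; the patent leaves this step to a text generation model):

```python
def compose_intent_text(task_type, app_name, fields, approvers):
    """Assemble a natural-language intention detail text from the task
    type, application name, form fields, and ordered approval nodes."""
    lines = [f"{task_type.capitalize()} an application named '{app_name}'."]
    lines.append("The application contains two parts.")
    lines.append("Part 1: an approval form with fields: " + ", ".join(fields) + ".")
    lines.append("Part 2: an approval process with nodes: " + " -> ".join(approvers) + ".")
    return " ".join(lines)

intent = compose_intent_text(
    "create", "leave application",
    ["leave applicant", "leave time", "leave reason"],
    ["direct superior", "personnel manager"],
)
```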
In a specific implementation scenario, after the intention detail text is generated, a presentation page is displayed that includes a presentation area for the intention detail text. Through an edit control in the presentation area, the user may select and edit the generated intention detail text, either to update the intention detail text used for generating the target application or to confirm that the generated text is consistent with the application design requirements expressed by the user description text. In this way, the intention detail text, being natural language, is easy for the user to understand; the user's feedback on it is collected through the interaction page to determine whether the generated intention detail text matches the application requirements the user expressed, and when it does not, it is corrected in time through the user's manual intervention, so that the quality and efficiency of generating the target application based on the intention detail text can be improved.
In a specific implementation scenario, after the intention detail text is displayed in the presentation area, the target application is generated based on the currently displayed intention detail text either in response to the user triggering a confirmation control on the presentation page, or when the display duration of the intention detail text reaches a preset duration, thereby improving the generation efficiency of the target application.
In addition, in order to ensure the generation efficiency of the subsequent target application as far as possible, when the network environment, hardware resources, and the like are ample, the target application may also be generated directly from the generated intention detail text once it has been displayed on the presentation page, without waiting for the user to modify it or to confirm that it is consistent with the content expressed by the user description text. The generated target application is then displayed to the user, who can judge from its content whether the intention detail text needs to be modified. The above examples are merely typical cases in practice, and the timing of generating the target application is therefore not limited.
In one implementation scenario, the keyword information is classified to obtain first keywords characterizing the task type and second keywords characterizing the task content; the task type is either creating or modifying the target application, and the task content includes at least the specific content of each element. A first intention text describing the task type is obtained from the first keywords, and a second intention text describing the task content is obtained from the second keywords and the logical relations. A first detection result, characterizing whether the target application can be generated according to the user description text, is obtained based on at least one of the first intention text and the second intention text, and whether to proceed to generate the intention detail text from the first and second intention texts is decided according to the first detection result. In this way, based on the first intention text describing the task type and the second intention text describing the task content, it is judged whether executing the user description text would involve a logical error, which improves the efficiency of automatic application generation while improving generation quality as far as possible.
In a specific implementation scenario, as one possible implementation, a classification model may be trained in advance and used to classify each piece of keyword information into first keywords characterizing the task type and second keywords characterizing the task content. The classification model may include, but is not limited to, a binary classification network model, among others. For the specific processing and training procedure of the classification model, reference may be made to technical details of such network models, which are not repeated here.
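As a non-learned stand-in for that classification step (the keyword list and vocabulary below are illustrative assumptions, not the patent's classifier), a rule-based split might look like:

```python
# Verbs that signal a task type; anything else counts as task content.
TASK_TYPE_WORDS = {"create", "new", "modify", "update", "delete"}

def classify_keywords(keywords):
    """Split keyword strings into (task-type, task-content) groups."""
    first, second = [], []
    for kw in keywords:
        if any(w in kw.lower().split() for w in TASK_TYPE_WORDS):
            first.append(kw)
        else:
            second.append(kw)
    return first, second

first, second = classify_keywords(
    ["create leave application", "form fields: applicant, leave time"]
)
```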
In one implementation scenario, the first detection result, characterizing whether the target application can be generated according to the user description text, is obtained based on the first intention text as follows: all existing applications are obtained; a second detection result, characterizing whether the target application is an existing application, is obtained from them; and the first detection result is obtained from the second detection result together with the first intention text. Whether the target application can be generated according to the user description text is then determined from the first detection result, which improves the efficiency of automatic application generation while improving generation quality as far as possible.
In a specific implementation scenario, character matching is performed between the application name of each existing application and the application name of the target application, and whether the target application is an existing application is determined from the matching result. It should be noted that this is only one possible embodiment; the method of determining whether the target application is an existing application is not limited in this application. For example, the determination may also be made from the application types of the existing applications and the target application, which is not elaborated here for brevity.
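The character matching could be realized with a string-similarity ratio, for example via Python's `difflib`; the 0.9 threshold and the helper name are illustrative choices, not part of the disclosure:

```python
import difflib

def is_existing_app(target_name, existing_names, threshold=0.9):
    """Return True if target_name character-matches any existing app name.

    A similarity ratio is used instead of strict equality so that minor
    wording differences still match.
    """
    for name in existing_names:
        ratio = difflib.SequenceMatcher(None, target_name, name).ratio()
        if ratio >= threshold:
            return True
    return False

existing = ["leave application", "expense report"]
```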
In a specific implementation scenario, when the first intention text indicates modifying the target application and the second detection result characterizes the target application as an existing application, it is determined that the first detection result characterizes that the target application can be generated according to the user description text, and there is no execution logic error.
In another specific implementation scenario, when the first intention text indicates modifying the target application and the second detection result characterizes the target application as not being an existing application, it is determined that the first detection result characterizes that the target application cannot be generated according to the user description text, and there is an execution logic error.
In another specific implementation scenario, when the first intention text indicates creating the target application and the second detection result characterizes the target application as an existing application, it is determined that the first detection result characterizes that the target application cannot be generated according to the user description text, and there is an execution logic error.
In another specific implementation scenario, when the first intention text indicates creating the target application and the second detection result characterizes the target application as not being an existing application, it is determined that the first detection result characterizes that the target application can be generated according to the user description text, and there is no execution logic error.
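The four scenarios above form a simple truth table over the task type and the second detection result, which can be sketched directly (function name is an illustrative assumption):

```python
def first_detection_from_type(task_type, app_exists):
    """Truth table for the first detection result, given the task type
    from the first intention text and the second detection result
    (whether the target application already exists)."""
    if task_type == "modify":
        return app_exists          # can only modify an application that exists
    if task_type == "create":
        return not app_exists      # cannot re-create an existing application
    raise ValueError(f"unknown task type: {task_type!r}")
```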
In one implementation scenario, the first detection result, characterizing whether the target application can be generated according to the user description text, is obtained based on the second intention text as follows: a preset engine library containing a plurality of callable preset engines is obtained; the prediction engine to be called is determined from the second intention text; and the first detection result is obtained according to whether the prediction engine belongs to the preset engine library. Whether the target application can be generated according to the user description text is then determined from the first detection result, which improves the efficiency of automatic application generation while improving generation quality as far as possible.
It can be understood that the number and types of preset engines in the preset engine library are not limited in this application. The prediction engine to be called corresponds to the design requirements expressed by the user description text, and the method of determining it from the second intention text is not limited either; for example, it may be determined by matching against a preset mapping relation.
In a specific implementation scenario, when the prediction engine does not belong to the preset engine library, it is determined that the first detection result characterizes that the target application cannot be generated according to the user description text.
In another specific implementation scenario, when the prediction engine belongs to the preset engine library, it is determined that the first detection result characterizes that the target application can be generated according to the user description text.
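Combining the preset mapping relation with the library-membership check can be sketched as follows. The engine names and the mapping are entirely hypothetical, since the patent does not enumerate the engine library:

```python
# Hypothetical preset engine library.
ENGINE_LIBRARY = {"form_engine", "workflow_engine", "report_engine"}

# Illustrative preset mapping from task-content phrases to engines;
# "chart_engine" is deliberately absent from the library above.
ENGINE_MAPPING = {
    "approval form": "form_engine",
    "approval process": "workflow_engine",
    "dashboard": "chart_engine",
}

def first_detection_from_engine(content_phrase):
    """Determine the prediction engine for the second intention text and
    check whether it belongs to the preset engine library."""
    engine = ENGINE_MAPPING.get(content_phrase)
    return engine is not None and engine in ENGINE_LIBRARY
```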
In one implementation scenario, when the first detection result is obtained based on both the first intention text and the second intention text, the first detection result characterizes that the target application can be generated according to the user description text if and only if both the detection result obtained from the first intention text and the detection result obtained from the second intention text characterize that it can be generated. For the specific steps of obtaining each of those detection results, reference may be made to the detailed description of the above embodiments, which is not repeated here for brevity.
In one implementation scenario, when the first detection result characterizes that the target application can be generated according to the user description text, the intention detail text is generated from the first intention text and the second intention text. For example, for the user description text "please help me create a leave application; the form fields include the leave applicant, the leave time, and the leave reason; the approval process is: the applicant's direct superior, then the personnel manager", analysis produces the first intention text "create an application named 'leave application'" and the second intention text "the application contains two parts: the first part is an approval form containing three fields: leave applicant, leave time, and leave reason; the second part is an approval process containing two nodes: the direct-superior approval node and the personnel-manager approval node". The first and second intention texts are combined to obtain the complete intention detail text. In this way, by understanding the user description text, it is determined whether the user's design requirements for the target application can be met, and an intention detail text that is as informative as possible is generated from the description text; the intention detail text can provide the auxiliary information required for automatically generating the target application, so that the technical threshold of application generation can be lowered as far as possible and the generation efficiency of the target application improved.
In another implementation scenario, when the first detection result characterizes that the target application cannot be generated from the user description text, error information characterizing the generation reason of the first detection result is obtained and displayed. By way of example, when the first detection result obtained based on the first intention text characterizes that the target application cannot be generated (specifically, when the first intention text is to modify the target application but the target application is not an existing application), the error information "the target application to be modified does not exist" is displayed; and/or, when the first detection result obtained based on the second intention text characterizes that the target application cannot be generated (specifically, when the prediction engine does not belong to the preset engine library), the error information "the engine for generating the application does not exist" is displayed. After the error information is displayed, the step of acquiring the user description text of the target application and the subsequent steps are executed again until the target application is generated. Displaying the error information makes it convenient for the user to revise the user description text, thereby improving the generation quality of the target application.
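The two error cases above can be collected as follows (a minimal sketch; the function name, parameter names, and message strings are assumptions for illustration):

```python
def build_error_info(intent_is_modify: bool, app_exists: bool,
                     prediction_engine: str, preset_engines: set) -> list:
    # Collect human-readable reasons why the target application
    # cannot be generated from the user description text.
    errors = []
    if intent_is_modify and not app_exists:
        errors.append("the target application to be modified does not exist")
    if prediction_engine not in preset_engines:
        errors.append("the engine for generating the application does not exist")
    return errors
```

An empty list corresponds to the first detection result characterizing that the target application can be generated.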
In a specific implementation scenario, the error information is displayed in a preset form, for example in a popup window or on a display page, which is not limited in this application.
In one specific implementation scenario, after the error information is displayed, a confirmation control is provided on the page displaying the error information. In response to the user triggering the confirmation control, the page jumps to the page for inputting the user description text, so as to acquire the user description text re-entered by the user. This makes it convenient for the user to revise the user description text, thereby improving the generation quality of the target application.
Step S40: a target engine that matches the intent details text is determined and structural data of the intent details text is generated.
In one implementation scenario, a matching target engine is determined based on the intention detail text. It is understood that the target engine matches the design requirements of the target application characterized by the intention detail text, and that the target engine can be invoked. As an example, when the intention detail text characterizes a requirement to generate a leave form and a leave approval flow, the corresponding target engines to be invoked are a form engine and a workflow engine. It should be noted that the specific type of the target engine is not limited in this application.
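One simple way to realize such matching is keyword lookup over the intention detail text (a sketch under stated assumptions: the keyword-to-engine mapping below is invented for illustration and is not prescribed by this disclosure):

```python
# assumed mapping from indicative keywords to invocable engines
ENGINE_KEYWORDS = {
    "form engine": ("form", "field"),
    "workflow engine": ("process", "flow", "node"),
}

def match_target_engines(intent_detail_text: str) -> list:
    # Return every engine whose keywords appear in the intention detail text.
    text = intent_detail_text.lower()
    return [engine for engine, keywords in ENGINE_KEYWORDS.items()
            if any(kw in text for kw in keywords)]
```

For the leave-application example, a text mentioning both a form and an approval process would match the form engine and the workflow engine.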
In one implementation scenario, the structural data is generated based on the intention detail text and is a collection of data elements with structural characteristics, including the logical structure of the intention detail text data. The logical structure may be determined, for example, based on the keyword information in the intention detail text and the logical relationships between the elements involved in the target application.
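For the leave-application example, the structural data might take a nested shape such as the following (the key names and layout are illustrative assumptions, not prescribed by this disclosure):

```python
# hypothetical structural data derived from the intention detail text
structural_data = {
    "approval_form": {
        "fields": ["leave requester", "leave time", "leave reason"],
    },
    "approval_process": {
        "nodes": ["direct supervisor approval", "HR manager approval"],
    },
}
```

Each top-level entry corresponds to one engine's slice of the design requirements.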
Step S50: based on the target engine and the structural data, a target application is generated.
In one implementation scenario, the structural data is configured according to the preset compilation rules of the target engine to obtain a target application that meets the user's needs. This improves the universality of target application design and improves the efficiency of generating the target application while reducing the complexity required of the user description text.
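The final assembly step can be sketched as each engine compiling its slice of the structural data (a sketch only; modeling the engines' preset compilation rules as callables is an assumption for illustration):

```python
def generate_target_application(structural_data: dict, engines: dict) -> dict:
    # Each target engine configures its own slice of the structural data
    # according to its preset compilation rules (modeled here as callables).
    return {name: engines[name](part) for name, part in structural_data.items()}
```

In practice the per-engine outputs (e.g. a rendered form and a deployed workflow) would then be bound together into the target application.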
According to the above scheme, after the user description text of the target application is acquired, the keyword information and the logical relationships between the elements involved in the target application are extracted based on the user description text so as to understand the user description text. The intention detail text is generated based on the keyword information and the logical relationships, the target engine matching the intention detail text is determined, the structural data of the intention detail text is generated, and the target application is generated based on the target engine and the structural data. In this way, an intention detail text that is as effective as possible is generated from the user description text, and the intention detail text provides the auxiliary information required for automatically generating the target application, so that the technical threshold of application generation is reduced as much as possible and the generation efficiency of the target application is improved.
Referring to fig. 2, fig. 2 is a flowchart of another embodiment of the application generating method of the present application.
Specifically, the method may include the steps of:
step S10: and acquiring the user description text of the target application.
Reference may be made specifically to the foregoing disclosed embodiments, which are not described here in detail for brevity.
Step S20: based on the user description text, keyword information and logical relations between various elements involved in the target application are extracted.
Reference may be made specifically to the foregoing disclosed embodiments, which are not described here in detail for brevity.
Step S30: intent detail text is generated based on the keyword information and the logical relationship.
Reference may be made specifically to the foregoing disclosed embodiments, which are not described here in detail for brevity.
Step S31: several intention texts are obtained that constitute the intention detail text.
In one implementation scenario, the intention detail text includes a plurality of subtasks, and a plurality of intention texts are obtained based on the respective subtasks. As an example, for the intention detail text "the first part is an approval form containing three fields: leave requester, leave time, and leave reason; the second part is an approval process containing two nodes: a direct-supervisor approval node and an HR-manager approval node", an intention text A "generate an approval form containing three fields: leave requester, leave time, and leave reason" and an intention text B "generate an approval process containing two nodes: a direct-supervisor approval node and an HR-manager approval node" can be obtained.
In a specific implementation scenario, as a possible implementation, a semantic segmentation model may be trained in advance; the intention detail text is input into the semantic segmentation model and segmented based on its content semantics, and the output of the semantic segmentation model is the plurality of intention texts. In this way, the semantic segmentation model is used to obtain the plurality of intention texts in the intention detail text, improving the convenience of generating the intention texts. The model architecture of the semantic segmentation model is not limited in this application; it may, for example, be BERT (Bidirectional Encoder Representations from Transformers).
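Absent a trained model, the segmentation step can be approximated with a rule-based stand-in (an illustrative sketch only; the split marker pattern is an assumption and is far cruder than the semantic segmentation model described above):

```python
import re

def split_intent_detail(intent_detail_text: str) -> list:
    # Naive stand-in for the trained semantic segmentation model:
    # split immediately before markers such as "The first part", "The second part".
    parts = re.split(r"(?=The \w+ part)", intent_detail_text)
    return [p.strip() for p in parts if p.strip()]
```

Each returned piece then plays the role of one intention text for the subsequent engine-matching step.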
Step S41: target engines that match the intention text are determined separately and structural data of the intention text is generated.
Reference may be made specifically to the foregoing disclosed embodiments, which are not described here in detail for brevity.
Step S51: and obtaining a plurality of subtask data based on the target engine and the structure data of each intention text respectively.
Reference may be made specifically to the foregoing disclosed embodiments, which are not described here in detail for brevity.
Step S52: based on all the subtask data, a target application is generated.
In one implementation scenario, after obtaining a plurality of subtask data based on the respective intention texts, the subtask data are bound to obtain the target application. As an example, the intention text A is "generate an approval form containing three fields: leave requester, leave time, and leave reason", and the intention text B is "generate an approval process containing two nodes: a direct-supervisor approval node and an HR-manager approval node". Based on the intention text A, the target engine is determined to be a form engine and the corresponding structural data is generated so as to produce the approval form; based on the intention text B, the target engine is determined to be a workflow engine and the corresponding structural data is generated so as to produce the approval process; and the approval form and the approval process are bound to obtain the target application, namely the leave application.
In addition, when network environments, hardware resources, and the like are abundant, the subtask data of the respective intention texts may be generated synchronously so as to ensure the generation efficiency of the subsequent target application as much as possible. The above examples are only typical examples in practical applications; the generation timing of the subtask data is therefore not limited.
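The synchronous generation of subtask data can be sketched with a thread pool (a sketch under stated assumptions: the function name and the callable-per-intention-text model are illustrative, not part of this disclosure):

```python
from concurrent.futures import ThreadPoolExecutor

def generate_subtask_data(intent_texts: list, generate_one) -> list:
    # Generate each subtask's data concurrently; pool.map preserves input
    # order, so the later binding step can proceed deterministically.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(generate_one, intent_texts))
```

Order preservation matters here because the binding step combines the subtask results into one target application.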
According to this method, after the user description text of the target application is acquired, the keyword information and the logical relationships between the elements involved in the target application are extracted based on the user description text so as to understand the user description text, and the intention detail text is generated based on the keyword information and the logical relationships. The intention texts in the intention detail text are then determined, the target engine and structural data matching each intention text are determined respectively, subtask data are generated based on the target engines and the structural data, and all the subtask data are bound to generate the target application. In this way, the data of a plurality of subtasks can be constructed simultaneously while the complexity required of the user description text is reduced, thereby improving the generation efficiency of the target application.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating an embodiment of an application generating apparatus 30 of the present application. The application generating apparatus 30 includes an acquiring module 31, an extracting module 32, an identifying module 33, a determining module 34, and a generating module 35. The acquiring module 31 is configured to acquire a user description text of a target application; the extracting module 32 is configured to extract, based on the user description text, keyword information and the logical relationships between the elements involved in the target application; the identifying module 33 is configured to generate an intention detail text based on the keyword information and the logical relationships; the determining module 34 is configured to determine a target engine matching the intention detail text and generate structural data of the intention detail text; and the generating module 35 is configured to generate the target application based on the target engine and the structural data.
In the above-described aspect, after the application generating apparatus 30 acquires the user description text of the target application, it extracts the keyword information and the logical relationships between the elements involved in the target application based on the user description text, generates the intention detail text based on the keyword information and the logical relationships, determines the target engine matching the intention detail text, generates the structural data of the intention detail text, and generates the target application based on the target engine and the structural data. In this way, an intention detail text that is as effective as possible is generated from the user description text, and the auxiliary information required for automatically generating the target application is provided based on the intention detail text, so that the technical threshold of application generation can be reduced as much as possible and the generation efficiency of the target application can be improved.
In some disclosed embodiments, the identifying module 33 further includes a detecting module (not shown) configured to: classify the keyword information to obtain a first keyword characterizing the task type and a second keyword characterizing the task content, where the task type is to newly create or modify the target application and the task content includes at least the specific content of each element; obtain a first intention text describing the task type based on the first keyword, and obtain a second intention text describing the task content based on the second keyword and the logical relationships; obtain, based on at least either of the first intention text and the second intention text, a first detection result characterizing whether the target application can be generated according to the user description text; and determine, based on whether the first detection result characterizes that the target application can be generated according to the user description text, whether to execute the generation of the intention detail text based on the first intention text and the second intention text.
In some disclosed embodiments, the detection module further includes a first detection module (not shown) configured to obtain a second detection result that characterizes whether the target application is an existing application; and obtaining a first detection result based on the second detection result and the first intention text.
In some disclosed embodiments, the first detection module further includes a detection determination module (not shown) configured to: in response to the first intention text being to modify the target application and the second detection result characterizing the target application as an existing application, determine that the first detection result characterizes that the target application can be generated according to the user description text; in response to the first intention text being to modify the target application and the second detection result characterizing the target application as not an existing application, determine that the first detection result characterizes that the target application cannot be generated according to the user description text; in response to the first intention text being to newly create the target application and the second detection result characterizing the target application as an existing application, determine that the first detection result characterizes that the target application cannot be generated according to the user description text; and in response to the first intention text being to newly create the target application and the second detection result characterizing the target application as not an existing application, determine that the first detection result characterizes that the target application can be generated according to the user description text.
In some disclosed embodiments, the detection module further includes a second detection module (not shown) for determining a prediction engine to be invoked based on the second intention text; obtaining a first detection result based on whether the prediction engine belongs to a preset engine library; the preset engine library comprises a plurality of preset engines.
In some disclosed embodiments, the detection module further includes an error display module (not shown) configured to, after the first detection result characterizing whether the target application can be generated according to the user description text is obtained based on at least either of the first intention text and the second intention text, obtain error information in response to the first detection result characterizing that the target application cannot be generated according to the user description text, where the error information characterizes the generation reason of the first detection result; and display the error information, and return to execute the step of acquiring the user description text of the target application and the subsequent steps until the target application is generated.
In some disclosed embodiments, the application generating apparatus 30 further includes a decomposing module (not shown) for acquiring a number of intention texts constituting the intention detail text after generating the intention detail text based on the keyword information and the logical relationship, and before determining a target engine that matches the intention detail text and generating the structure data of the intention detail text; the determining module 34 further includes a determining sub-module (not shown) for determining target engines that match the intention text, respectively, and generating structural data of the intention text; the generating module 35 further includes a generating sub-module (not shown) for obtaining a plurality of subtask data based on the target engine and the structure data of each intention text, respectively; based on all the subtask data, a target application is generated.
In some disclosed embodiments, the user descriptive text and the intent detail text are natural language text, and the application generating apparatus 30 further includes a text display module (not shown) for displaying the presentation page; the display page comprises a display area for displaying the text of the intention details; the intent detail text is updated in response to an edit instruction to the intent detail text in the presentation page.
Referring to fig. 4, fig. 4 is a schematic diagram of a frame of an embodiment of an electronic device 40 of the present application. The electronic device 40 comprises a memory 41 and a processor 42, the memory 41 having stored therein program instructions, the processor 42 being adapted to execute the program instructions to carry out the steps of any of the application generating method embodiments described above. Reference may be made specifically to the foregoing disclosed embodiments, and details are not repeated here. The electronic device 40 may specifically include, but is not limited to: servers, smartphones, notebook computers, tablet computers, kiosks, etc., are not limited herein.
In particular, the processor 42 is adapted to control itself and the memory 41 to implement the steps of any of the application generation method embodiments described above. The processor 42 may also be referred to as a CPU (Central Processing Unit). The processor 42 may be an integrated circuit chip having signal processing capabilities. The processor 42 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 42 may be jointly implemented by a plurality of integrated circuit chips.
In the above-described aspect, after the electronic device 40 acquires the user description text of the target application, it extracts the keyword information and the logical relationships between the elements involved in the target application based on the user description text so as to understand the user description text, generates the intention detail text based on the keyword information and the logical relationships, determines the target engine matching the intention detail text, generates the structural data of the intention detail text, and generates the target application based on the target engine and the structural data. In this way, an intention detail text that is as effective as possible is generated from the user description text, and the auxiliary information required for automatically generating the target application is provided based on the intention detail text, so that the technical threshold of application generation can be reduced as much as possible and the generation efficiency of the target application can be improved.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating an embodiment of a computer readable storage medium 50 of the present application. The computer readable storage medium 50 stores program instructions 51 executable by a processor, the program instructions 51 for implementing the steps in any of the application generation method embodiments described above.
In the above-described aspect, when the program instructions 51 stored in the computer-readable storage medium 50 are executed, the user description text of the target application is acquired, the keyword information and the logical relationships between the elements involved in the target application are extracted based on the user description text so as to understand the user description text, the intention detail text is generated based on the keyword information and the logical relationships, the target engine matching the intention detail text is determined, the structural data of the intention detail text is generated, and the target application is generated based on the target engine and the structural data. In this way, an intention detail text that is as effective as possible is generated from the user description text, and the auxiliary information required for automatically generating the target application is provided based on the intention detail text, so that the technical threshold of application generation can be reduced as much as possible and the generation efficiency of the target application can be improved.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing descriptions of the various embodiments are intended to highlight the differences between the embodiments; for their identical or similar parts, the embodiments may refer to one another, and details are not repeated here for brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
If the technical solution of the present application involves personal information, a product applying the technical solution of the present application clearly informs the individual of the personal information processing rules and obtains the individual's independent consent before processing the personal information. If the technical solution of the present application involves sensitive personal information, a product applying the technical solution of the present application obtains the individual's separate consent before processing the sensitive personal information and, at the same time, satisfies the requirement of "explicit consent". For example, a clear and prominent sign is set up at a personal information collection device such as a camera to inform that the personal information collection range has been entered and that personal information will be collected; if the individual voluntarily enters the collection range, this is regarded as consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, under the condition that the personal information processing rules are communicated by prominent signs or messages, personal authorization is obtained by means of popup information, by requesting the individual to upload his or her personal information, or the like. The personal information processing rules may include information such as the personal information processor, the purpose of personal information processing, the processing method, and the types of personal information to be processed.

Claims (11)

1. An application generation method, comprising:
Acquiring a user description text of a target application;
extracting keyword information and logic relations among elements related to the target application based on the user description text;
generating intention detail text based on the keyword information and the logic relation;
determining a target engine matched with the intention detail text and generating structural data of the intention detail text;
and generating the target application based on the target engine and the structural data.
2. The method of claim 1, wherein the generating intent detail text based on the keyword information and the logical relationship comprises:
classifying based on the keyword information to obtain a first keyword for characterizing the task type and a second keyword for characterizing the task content; the task type is to newly build or modify the target application, and the task content at least comprises specific content of each element;
obtaining a first intention text describing the task type based on the first keyword, and obtaining a second intention text describing the task content based on the second keyword and the logic relation;
Obtaining a first detection result representing whether the target application can be generated according to the user description text based on at least any one of the first intention text and the second intention text;
and determining, based on whether the first detection result characterizes that the target application can be generated according to the user description text, whether to execute the generation of the intention detail text based on the first intention text and the second intention text.
3. The method of claim 2, wherein the obtaining, based at least on any one of the first intent text and the second intent text, a first detection result characterizing whether the target application can be generated according to the user descriptive text comprises:
acquiring a second detection result representing whether the target application is an existing application;
and obtaining the first detection result based on the second detection result and the first intention text.
4. The method of claim 3, wherein the obtaining the first detection result based on the second detection result and the first intention text comprises at least one of:
in response to the first intention text being to modify the target application and the second detection result characterizing the target application as an existing application, determining that the first detection result characterizes that the target application can be generated according to the user description text;

in response to the first intention text being to modify the target application and the second detection result characterizing the target application as not an existing application, determining that the first detection result characterizes that the target application cannot be generated according to the user description text;

in response to the first intention text being to newly create the target application and the second detection result characterizing the target application as an existing application, determining that the first detection result characterizes that the target application cannot be generated according to the user description text;

and in response to the first intention text being to newly create the target application and the second detection result characterizing the target application as not an existing application, determining that the first detection result characterizes that the target application can be generated according to the user description text.
5. The method of claim 2, wherein the obtaining, based at least on any one of the first intent text and the second intent text, a first detection result characterizing whether the target application can be generated according to the user descriptive text comprises:
determining a prediction engine to be invoked based on the second intention text;
obtaining the first detection result based on whether the prediction engine belongs to a preset engine library or not; the preset engine library comprises a plurality of preset engines.
6. The method of claim 2, wherein after deriving a first detection result characterizing whether the target application can be generated in accordance with the user descriptive text based at least on either of the first intent text and the second intent text, the method further comprises:
responding to the first detection result representation that the target application cannot be generated according to the user description text, and obtaining error information; wherein the error information characterizes the generation reason of the first detection result;
and displaying the error information, and returning to execute the step of acquiring the user description text of the target application and the subsequent steps until the target application is generated.
7. The method of claim 1, wherein after the generating the intention detail text based on the keyword information and the logical relationship, and before the determining a target engine that matches the intention detail text and generating the structural data of the intention detail text, the method further comprises:
acquiring a plurality of intention sub-texts that constitute the intention detail text;
the determining a target engine that matches the intention detail text and generating the structural data of the intention detail text comprises:
respectively determining a target engine matched with each intention sub-text and generating structural data of each intention sub-text;
the generating the target application based on the target engine and the structural data comprises:
obtaining a plurality of subtask data based on the target engine and the structural data of each intention sub-text respectively;
and generating the target application based on all the subtask data.
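The per-sub-text pipeline of claim 7 (match an engine and build structural data for each intention sub-text, derive one subtask per sub-text, then assemble the application from all subtask data) can be sketched as follows. The helper callables and the dictionary shape are illustrative assumptions.

```python
def generate_application(intent_sub_texts, match_engine, make_structure):
    """Claim 7 sketch: one subtask per intention sub-text, then assemble
    the target application from all subtask data. Helpers are hypothetical."""
    subtasks = []
    for sub_text in intent_sub_texts:
        engine = match_engine(sub_text)       # target engine for this sub-text
        structure = make_structure(sub_text)  # structural data of this sub-text
        subtasks.append({"engine": engine, "structure": structure})
    return {"subtasks": subtasks}             # target application from all subtask data
```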
8. The method of any of claims 1 to 7, wherein the user description text and the intention detail text are natural language text, and the method further comprises:
displaying a presentation page, wherein the presentation page includes a presentation area for displaying the intention detail text;
and updating the intention detail text in response to an editing instruction for the intention detail text in the presentation page.
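The editable presentation area of claim 8 amounts to a view that renders the intention detail text and updates it when an edit instruction arrives. A minimal, framework-free sketch, with the class and method names as assumptions:

```python
class IntentionDetailView:
    """Claim 8 sketch: a presentation area that displays the intention
    detail text and updates it in response to an editing instruction."""

    def __init__(self, intention_detail_text: str):
        self.intention_detail_text = intention_detail_text

    def render(self) -> str:
        # Stand-in for drawing the presentation area on the page.
        return f"[presentation area] {self.intention_detail_text}"

    def on_edit(self, edited_text: str) -> None:
        # Apply the user's editing instruction to the displayed text.
        self.intention_detail_text = edited_text
```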
9. An application generating apparatus, comprising:
an acquisition module configured to acquire a user description text of a target application;
an extraction module configured to extract, based on the user description text, keyword information and a logical relationship between elements related to the target application;
a recognition module configured to generate an intention detail text based on the keyword information and the logical relationship;
a determining module configured to determine a target engine that matches the intention detail text and generate structural data of the intention detail text;
and a generating module configured to generate the target application based on the target engine and the structural data.
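The five modules of claim 9 chain into a single pipeline. A minimal sketch, with each module represented as an injected callable (all names and signatures are placeholders, not the patented apparatus):

```python
class ApplicationGenerator:
    """Claim 9 sketch: acquisition, extraction, recognition, determining,
    and generating modules chained into one pipeline."""

    def __init__(self, acquire, extract, recognize, determine, generate):
        self.acquire, self.extract = acquire, extract
        self.recognize, self.determine, self.generate = recognize, determine, generate

    def run(self):
        text = self.acquire()                         # acquisition module
        keywords, relations = self.extract(text)      # extraction module
        detail = self.recognize(keywords, relations)  # recognition module
        engine, structure = self.determine(detail)    # determining module
        return self.generate(engine, structure)       # generating module
```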
10. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the application generation method of any of claims 1 to 8.
11. A computer readable storage medium having stored thereon program instructions, which when executed by a processor implement the application generating method of any of claims 1 to 8.
CN202311748550.7A 2023-12-18 2023-12-18 Application generation method, related device, equipment and storage medium Pending CN117785149A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311748550.7A CN117785149A (en) 2023-12-18 2023-12-18 Application generation method, related device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117785149A 2024-03-29

Family

ID=90386139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311748550.7A Pending CN117785149A (en) 2023-12-18 2023-12-18 Application generation method, related device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117785149A (en)

Similar Documents

Publication Publication Date Title
CN109766540B (en) General text information extraction method and device, computer equipment and storage medium
Liang et al. Cpgan: Content-parsing generative adversarial networks for text-to-image synthesis
CN110928994B (en) Similar case retrieval method, similar case retrieval device and electronic equipment
CN110444198B (en) Retrieval method, retrieval device, computer equipment and storage medium
RU2643467C1 (en) Comparison of layout similar documents
CN113807098A (en) Model training method and device, electronic equipment and storage medium
US11869263B2 (en) Automated classification and interpretation of life science documents
TW202020691A (en) Feature word determination method and device and server
WO2018235252A1 (en) Analysis device, log analysis method, and recording medium
CN111444723A (en) Information extraction model training method and device, computer equipment and storage medium
US20220414463A1 (en) Automated troubleshooter
WO2018227930A1 (en) Method and device for intelligently prompting answers
US11699034B2 (en) Hybrid artificial intelligence system for semi-automatic patent infringement analysis
CN114647713A (en) Knowledge graph question-answering method, device and storage medium based on virtual confrontation
US9898467B1 (en) System for data normalization
US11574491B2 (en) Automated classification and interpretation of life science documents
CN116049597B (en) Pre-training method and device for multi-task model of webpage and electronic equipment
CN111859862A (en) Text data labeling method and device, storage medium and electronic device
US20200302076A1 (en) Document processing apparatus and non-transitory computer readable medium
US20200311412A1 (en) Inferring titles and sections in documents
CN111027319A (en) Method and device for analyzing natural language time words and computer equipment
CN113792143B (en) Multi-language emotion classification method, device, equipment and storage medium based on capsule network
CN112100364A (en) Text semantic understanding method and model training method, device, equipment and medium
CN116010545A (en) Data processing method, device and equipment
US11335108B2 (en) System and method to recognise characters from an image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination