CN117093123A - Model generation method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117093123A
Authority
CN
China
Prior art keywords
component
model
instruction
target
connection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310769470.3A
Other languages
Chinese (zh)
Inventor
赵鹏
欧云斌
张克飞
赵星星
陈明武
邵振军
杨卓士
王明月
樊林
郝吉芳
周希波
关蕊
文晋晓
何文
张宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202310769470.3A priority Critical patent/CN117093123A/en
Publication of CN117093123A publication Critical patent/CN117093123A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486Drag-and-drop
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a model generation method and apparatus, an electronic device, and a storage medium, relating to the field of artificial intelligence and, in particular, to machine learning and front-end technology. The method comprises the following steps: in response to a component drag instruction, acquiring the functional text block of each of a plurality of model components associated with the instruction; in response to a component connection instruction, generating a plurality of connection text blocks, where each connection text block represents a connection relationship between at least two model components; and generating a target model based on the plurality of functional text blocks and the plurality of connection text blocks.

Description

Model generation method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to machine learning and front-end technology, and specifically to a model generation method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of machine learning technology, machine learning models are widely used across many areas of production and daily life, and play a particularly important role in intelligent manufacturing. However, code-based machine learning modeling has a high technical threshold and cannot meet the demand for fast, efficient modeling, which poses a major obstacle to the industrial application of machine learning.
Disclosure of Invention
In view of this, the present disclosure provides a model generation method, apparatus, electronic device, readable storage medium, and computer program product.
One aspect of the present disclosure provides a model generation method, including: responding to a component dragging instruction, and acquiring respective functional text blocks of a plurality of model components related to the component dragging instruction; generating a plurality of connection text blocks in response to a component connection instruction, wherein the connection text blocks represent connection relations between at least two model components; and generating a target model based on the plurality of functional text blocks and the plurality of connection text blocks.
Another aspect of the present disclosure provides a model generating apparatus, including: the acquisition module is used for responding to the component dragging instruction and acquiring the functional text blocks of each of the plurality of model components related to the component dragging instruction; the first generation module is used for responding to the component connection instruction and generating a plurality of connection text blocks, wherein the connection text blocks represent the connection relation between at least two model components; and a second generation module for generating a target model based on the plurality of functional text blocks and the plurality of connection text blocks.
Another aspect of the present disclosure provides an electronic device, comprising: one or more processors; and a memory for storing one or more instructions that, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions that, when executed, are configured to implement a method as described above.
Another aspect of the present disclosure provides a computer program product comprising computer executable instructions which, when executed, are adapted to implement the method as described above.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments thereof with reference to the accompanying drawings in which:
FIG. 1 schematically illustrates an exemplary system architecture to which the model generation methods and apparatus may be applied, according to embodiments of the present disclosure.
Fig. 2 schematically illustrates a flow chart of a model generation method according to an embodiment of the present disclosure.
FIG. 3 schematically illustrates a schematic diagram of a main interface according to an embodiment of the present disclosure.
FIG. 4 schematically illustrates a flow of generating a component pattern on a host interface according to an embodiment of the disclosure.
Fig. 5 schematically illustrates a schematic diagram of a configuration sub-interface according to an embodiment of the present disclosure.
FIG. 6 schematically illustrates a flow of generating directed edges between component patterns on a primary interface according to an embodiment of the present disclosure.
Fig. 7 schematically illustrates a schematic diagram of an operational interface according to an embodiment of the present disclosure.
Fig. 8 schematically shows a block diagram of a model generating apparatus according to an embodiment of the present disclosure.
Fig. 9 schematically illustrates a block diagram of an electronic device adapted to implement a model generation method according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where an expression like "at least one of A, B, and C" is used, it should generally be interpreted in the sense commonly understood by those skilled in the art (e.g., "a system having at least one of A, B, and C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, or A, B, and C together).
With the rapid development of machine learning technology, machine learning models are widely applied to various fields of life and production. In particular, in the field of intelligent manufacturing, machine learning models play an important role.
In the related art, machine learning modeling adopts a code development-based manner. However, the code-based machine learning modeling mode has a higher modeling technical threshold, cannot meet the requirements of quick and efficient modeling, and brings great obstruction to industrial application of machine learning.
In view of this, embodiments of the present disclosure provide a model generation method and apparatus, an electronic device, and a storage medium. The method models in a drag-and-drop, what-you-see-is-what-you-get manner: based on a preset component library, a modeler can assemble a model from rich, configurable components, which greatly improves the usability of human-machine interaction in machine learning modeling. Specifically, the model generation method includes: in response to a component drag instruction, acquiring the functional text block of each of a plurality of model components associated with the instruction; in response to a component connection instruction, generating a plurality of connection text blocks, where each connection text block represents a connection relationship between at least two model components; and generating a target model based on the plurality of functional text blocks and the plurality of connection text blocks.
In the embodiments of the present disclosure, the collection, updating, analysis, processing, use, transmission, provision, disclosure, and storage of the data involved (including, but not limited to, user personal information) comply with the relevant laws and regulations, are used for legitimate purposes, and do not violate public order and good morals. In particular, necessary measures are taken to protect users' personal information, prevent illegal access to users' personal data, and safeguard users' personal information security, network security, and national security.
In embodiments of the present disclosure, the user's authorization or consent is obtained before the user's personal information is obtained or collected.
FIG. 1 schematically illustrates an exemplary system architecture to which the model generation methods and apparatus may be applied, according to embodiments of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105.
The terminal devices 101, 102, 103 may be various electronic devices with display screens including, but not limited to, smartphones, tablets, laptop and desktop computers, and the like. The user may interact with the terminal devices 101, 102, 103.
The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired and/or wireless communication links, and the like.
The server 105 may be a server that provides various services, a cloud server, or the like, and is not limited thereto.
It should be noted that, the model generating method provided by the embodiment of the present disclosure may be generally executed by the terminal devices 101, 102, 103, and accordingly, the model generating apparatus provided by the embodiment of the present disclosure may be generally disposed in the terminal devices 101, 102, 103.
For example, a user's man-machine interaction performed at the terminal devices 101, 102, 103 can be translated by those devices into computer instructions, and the terminal devices 101, 102, 103 can generate the target model by executing the computer instructions.
Alternatively, the model generating method provided by the embodiment of the present disclosure may also be executed by the server 105, and accordingly, the model generating apparatus provided by the embodiment of the present disclosure may be generally disposed in the server 105.
For another example, a user's man-machine interaction at the terminal devices 101, 102, 103 may be translated by those devices into computer instructions, which the terminal devices 101, 102, 103 then send to the server 105; the server 105 executes the computer instructions to generate the target model.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically illustrates a flow chart of a model generation method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S201 to S203.
In operation S201, in response to the component drag instruction, a function text block of each of the plurality of model components associated with the component drag instruction is acquired.
In operation S202, a plurality of connection text blocks are generated in response to the component connection instruction.
In operation S203, a target model is generated based on the plurality of functional text blocks and the plurality of connection text blocks.
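As a rough illustration of operations S201 to S203 (the function names, the toy two-entry component library, and the splicing scheme below are all hypothetical, introduced only for the sketch and not taken from the patent), the pipeline might look like:

```python
# Hypothetical sketch of S201-S203: look up functional text blocks for the
# dragged components, build connection text blocks, and splice everything
# into one target-model script. The component library entries are toys.

COMPONENT_LIBRARY = {
    "csv_reader": "def csv_reader(path):\n    return open(path).read()",
    "normalizer": "def normalizer(data):\n    return data.strip()",
}

def get_function_blocks(component_ids):
    """S201: fetch the functional text block of each dragged model component."""
    return [COMPONENT_LIBRARY[c] for c in component_ids]

def make_connection_block(src, dst):
    """S202: a connection text block feeding src's output into dst
    (here an input-output relationship expressed as a call chain)."""
    return f"_out = {dst}({src}(input_path))"

def generate_model(function_blocks, connection_blocks):
    """S203: assemble and splice all text blocks into the target model's text."""
    return "\n\n".join(function_blocks + connection_blocks)
```

The resulting string is itself a "text block of the target model" in the patent's sense, which could then be exported or executed elsewhere.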
According to embodiments of the present disclosure, an electronic device may have a display that may present a main interface to a user. The electronic device may also be configured with an information input device such as a mouse, keyboard, etc. The electronic device may generate device instructions such as component drag instructions, component connection instructions, and the like in response to input operations by a user using a mouse, keyboard, and the like. For example, the user may perform the following drag operation: holding the left mouse button, moving from one location of the main interface to another, the electronic device may generate a component drag instruction in response to the operation. For another example, the user may perform the following connection operations: invoking the wiring tool, holding the left mouse button, moving from one pattern to another on the main interface, the electronic device may generate component connection instructions in response to the operation.
According to embodiments of the present disclosure, the functional text blocks may represent program code that implements the functions of the model component. The functional text blocks may be written in different programming languages, which may include C, JAVA, PYTHON, etc., without limitation. The input and output formats of the data in the functional text block are not limited herein, and for example, the format of the output data of the functional text block may be KV key value pairs, JSON strings, binary files, and the like.
According to embodiments of the present disclosure, the plurality of model components related to the component drag instruction may refer to the model components selected from a component library by the user's drag operations. When developing a model component, a developer can configure its functional text block and write the block into a repository; thus, after the plurality of model components are determined, the functional text block of each one can be retrieved from the repository. Alternatively, as an optional implementation, users may configure a model component's functional text block themselves: program code input by the user can be injected into the model component by calling an injection instruction associated with that component, replacing the component's original functional text block with the input code.
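The code-injection alternative could be sketched as follows (a minimal sketch: the dict-based component representation, the `function_block` key, and the function name are assumptions, not part of the patent):

```python
def inject_user_code(component, user_code):
    """Hypothetical injection instruction: replace a model component's
    original functional text block with user-supplied program code.
    Returns an updated copy so the library's original stays intact."""
    updated = dict(component)
    updated["function_block"] = user_code
    return updated
```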
According to embodiments of the present disclosure, the connection text block may be program code representing a connection relationship. The connection relationship may include an input-output relationship, an "and" logic relationship, an "or" logic relationship, etc., without limitation. The connection text blocks may be written in different programming languages, which may include C, JAVA, PYTHON, etc., without limitation.
According to embodiments of the present disclosure, the plurality of functional text blocks and the plurality of connection text blocks can be assembled and spliced to generate the text block of the target model, thereby obtaining the target model. The resulting target model may be exported as a text block for migration to other devices or platforms for model training or model application.
According to embodiments of the present disclosure, each functional module of a model is encapsulated as a model component. When generating a model, a user can select the functional modules to be used by dragging model components and then connect those components to define the logical relationships between the modules, thereby generating a target model. This drag-and-assemble style of modeling based on model components effectively lowers the technical threshold of machine learning modeling and improves its efficiency and convenience.
The method shown in fig. 2 is further described below with reference to fig. 3-7 in conjunction with the exemplary embodiment.
FIG. 3 schematically illustrates a schematic diagram of a main interface according to an embodiment of the present disclosure.
As shown in fig. 3, the main interface is divided by functional area into a first sub-interface 310 and a second sub-interface 320. The first sub-interface 310 may be the user's operation area, used to display the patterns of model components, the wires between model components, and the like. The second sub-interface 320 may be a presentation area for the component library, on which the component pattern, name, and description of each model component in the library may be presented. For example, the name of the first model component in the component library may be "CSV reader", and its description may be "read a CSV file". The second sub-interface 320 may also be configured with a component search control in which the user can input the name or description of a component to be used in order to locate that model component in the component library.
According to embodiments of the present disclosure, the component drag instruction may include drag sub-instructions for a plurality of model components, respectively. Each drag sub-instruction may contain at least cursor initial position information. The cursor initial position information may refer to position information of a cursor of the mouse on the second sub-interface 320. The model components shown on the second sub-interface 320 may each have a determined selection area, and when the point represented by the cursor initial position information falls into the selection area, the model component corresponding to the dragging sub-instruction may be determined to be the model component corresponding to the selection area.
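Mapping a drag sub-instruction's cursor initial position to a model component amounts to a hit test against the components' selection areas. A minimal sketch, assuming axis-aligned rectangular selection areas given as `(x0, y0, x1, y1)` (the function name and data layout are illustrative assumptions):

```python
def resolve_component(cursor_pos, selection_areas):
    """Return the id of the model component whose selection area on the
    second sub-interface contains the cursor's initial position, or None
    if the point falls outside every selection area."""
    x, y = cursor_pos
    for component_id, (x0, y0, x1, y1) in selection_areas.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return component_id
    return None
```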
According to an embodiment of the disclosure, the obtaining, in response to the component dragging instruction, the function text blocks of each of the plurality of model components related to the component dragging instruction may include the following operations:
in response to the component drag instruction, a plurality of model components are determined from the component library based on cursor initial position information included in the component drag instruction. Functional text blocks for each of a plurality of model components are obtained from a repository.
According to an embodiment of the present disclosure, the cursor initial position information included in the component drag instruction may be a set of cursor initial position information included in each of a plurality of drag sub-instructions included in the component drag instruction.
According to embodiments of the present disclosure, after determining a model component, a functional text block of the model component may be obtained from a repository based on information such as a name, description, etc. of the model component.
In accordance with embodiments of the present disclosure, component patterns associated with each of a plurality of model components may be rendered on a primary interface in response to component drag instructions.
According to embodiments of the present disclosure, each drag sub-instruction may also contain at least cursor dwell position information. The cursor dwell position information may refer to the position information of the mouse cursor on the first sub-interface 310.
According to embodiments of the present disclosure, for each model component, a first target area may be determined on the main interface based on cursor dwell position information carried by a drag sub-instruction associated with the model component. Rendering at a first target area generates a component pattern related to the model component.
According to an embodiment of the present disclosure, a coordinate point on the first sub-interface 310 may be determined based on the cursor dwell position information, and the first target area may be determined centered on that coordinate point. In particular, the first target area may be determined based on the shape and size of the component pattern of the model component.
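Centering the first target area on the dwell point, given the component pattern's size, reduces to simple rectangle arithmetic (the function name and `(x0, y0, x1, y1)` return convention are assumptions for illustration):

```python
def first_target_area(dwell_pos, pattern_width, pattern_height):
    """Determine the first target area: a rectangle of the component
    pattern's size, centered on the cursor's dwell position.
    Returns (x0, y0, x1, y1)."""
    cx, cy = dwell_pos
    return (cx - pattern_width / 2, cy - pattern_height / 2,
            cx + pattern_width / 2, cy + pattern_height / 2)
```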
FIG. 4 schematically illustrates a flow of generating a component pattern on a host interface according to an embodiment of the disclosure.
As shown in fig. 4, for each drag sub-instruction, the electronic device may generate the drag sub-instruction in response to a user operation as follows: the user controls the cursor to move to a first position 401 and selects a model component from a library of components, and after determining the model component, the user can control the cursor to move to a second position 402. The position information of the first position 401 may be represented as cursor initial position information. After determining the model component, the electronic device may convert the component pattern of the model component into page elements. The position information of the second position 402 may be represented as cursor dwell position information. Based on the second location 402, a first target region may be determined at which the electronic device may render the converted page elements, such that a component pattern of the model component may be generated at the first target region.
According to embodiments of the present disclosure, as an alternative implementation, a clickable control may also be generated at the first target area using the component pattern of the model component as the control's appearance, so as to render the component pattern related to the model component at the first target area. The clickable control may be used to implement page jumps. For example, the clickable control may present a third target component pattern; by clicking the control, the user can trigger the electronic device to generate a selection instruction for the third target component pattern among the plurality of component patterns, and a configuration sub-interface may be rendered on the main interface in response to that selection instruction.
Fig. 5 schematically illustrates a schematic diagram of a configuration sub-interface according to an embodiment of the present disclosure.
As shown in fig. 5, one or more control elements may be included in the configuration sub-interface of the model component, which may include a selection control element, an input box control element, a switch control element, and the like. The user may configure the model component by selecting, entering, clicking on, etc., one or more control elements.
According to embodiments of the present disclosure, configuration sub-interfaces of different model components may be configured with different control elements.
According to embodiments of the present disclosure, in some embodiments, a control element included in the configuration sub-interface of the model component may further include an input box control element related to source code of the model component, and the user may replace an original functional text block of the model component by inputting program code in the input box control element to use a text block formed by the input program code.
According to embodiments of the present disclosure, the user's selection, input, clicking, and similar operations on the one or more control elements may be presented on the configuration sub-interface in real time. Specifically, in response to a component configuration instruction, a second target area may be determined from the configuration sub-interface based on the configuration item identifier included in the instruction, and the configuration information included in the component configuration instruction may be rendered and displayed in the second target area.
According to embodiments of the present disclosure, the configuration item identification may correspond to a control element. Based on the configuration item identification, the second target area determined from the configuration sub-interface may be an area where the control element corresponding to the configuration item identification is located.
According to embodiments of the present disclosure, the configuration information may be information generated by the user's selection, input, or clicking on a control element. Rendering the configuration information included in the component configuration instruction in the second target area may mean displaying the configuration information via the corresponding control element.
According to embodiments of the present disclosure, after the user completes operations such as selecting, inputting, or clicking on a control element (that is, after the component configuration instruction is generated), the functional text block related to the third target component pattern can be updated based on the configuration item identifier and the configuration information in response to the component configuration instruction. That is, the user's configuration change information for the model component may be synchronized in real time to a background service, which updates the functional text block of the model component related to the third target component pattern with the configuration change information.
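One plausible way the background service could apply a (configuration item identifier, configuration information) pair to a functional text block is placeholder substitution. The `{name}` placeholder syntax, the function name, and the example block are all assumptions made for this sketch, not details given in the patent:

```python
def update_function_block(function_block, config_id, config_value):
    """Hypothetical update step: substitute the configured value for the
    placeholder named by the configuration item identifier inside the
    component's functional text block."""
    return function_block.replace("{" + config_id + "}", str(config_value))
```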
According to an embodiment of the present disclosure, after two or more component patterns are generated on a main interface in response to a drag sub-instruction, a connection relationship between the two or more component patterns may be determined in response to a component connection instruction to generate a connection text block.
In accordance with an embodiment of the present disclosure, generating a plurality of connection text blocks in response to a component connection instruction may include the operations of:
In response to the component connection instruction, a plurality of directed edges are rendered between the plurality of component patterns included in the main interface. For each directed edge, a connection text block is generated based on the directed edge and the component patterns associated with it.
According to an embodiment of the present disclosure, the component connection instruction includes a plurality of connection sub-instructions. A directed edge may be generated between two of the plurality of component patterns included in the main interface in response to each connection sub-instruction, respectively. In particular, in response to the component connection instruction, rendering the plurality of directed edges between the plurality of component patterns included in the main interface may include the following operations:
in response to the connect sub-instruction, a first target component pattern is determined from the component patterns of each of the plurality of model components based on the cursor initial position information included in the connect sub-instruction. A second target component pattern is determined from the component patterns of each of the plurality of model components based on cursor dwell position information included in the connect sub-instruction. And rendering and generating a directed edge between the first target component pattern and the second target component pattern by taking one end of the first target component pattern as a starting point and one end of the second target component pattern as an end point.
According to an embodiment of the present disclosure, the coordinate point represented by the cursor initial position information included in the connection sub-instruction may be located in the area where the first target component pattern is located, and the coordinate point represented by the cursor stay position information included in the connection sub-instruction may be located in the area where the second target component pattern is located.
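Resolving a connection sub-instruction into a directed edge is thus two hit tests: one on the cursor's initial position (source pattern) and one on its dwell position (target pattern). A minimal sketch under assumed names, with pattern areas given as `(x0, y0, x1, y1)` rectangles:

```python
def resolve_edge(initial_pos, dwell_pos, pattern_areas):
    """Map a connection sub-instruction's start and end cursor positions
    to a (source, target) directed edge between component patterns."""
    def hit(pos):
        x, y = pos
        for pattern_id, (x0, y0, x1, y1) in pattern_areas.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return pattern_id
        return None
    return (hit(initial_pos), hit(dwell_pos))
```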
According to an embodiment of the present disclosure, the component patterns associated with the directed edge are the first target component pattern and the second target component pattern.
FIG. 6 schematically illustrates a flow of generating directed edges between component patterns on a primary interface according to an embodiment of the present disclosure.
As shown in fig. 6, for each connection sub-instruction, the electronic device may generate the connection sub-instruction in response to the following user operation: the user moves the cursor to the third position 601 to select the first target component pattern, and then moves the cursor to the fourth position 602 to select the second target component pattern. Upon completion of the selection, the electronic device may render a directed edge 603 between the first target component pattern and the second target component pattern on the main interface.
According to an embodiment of the present disclosure, the third location 601 may be any location within the area where the first target component pattern is located, and the fourth location 602 may be any location within the area where the second target component pattern is located.
According to an embodiment of the present disclosure, when generating the directed edge 603, a pattern representing the start point of the data stream may be generated in the first target component pattern and a pattern representing the end point of the data stream may be generated in the second target component pattern, according to the direction of the directed edge 603.
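The connection text block generated for a directed edge only needs to encode the data-flow direction the edge implies: the output of the start component feeds the input of the end component. A minimal sketch, assuming a simple assignment-style statement format (the `_input`/`_output` naming convention is an illustrative assumption, not the disclosed format):

```python
def connection_text_block(edge_start: str, edge_end: str) -> str:
    """Generate a connection text block for a directed edge.

    edge_start / edge_end are the names of the first and second target
    component patterns; the text block routes the start component's output
    parameter into the end component's input parameter.
    """
    return f"{edge_end}_input = {edge_start}_output"
```

The target model can then be assembled by combining each component's functional text block with the connection text blocks produced this way.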
According to embodiments of the present disclosure, after the target model is generated, a text block of the target model may be exported and imported into other electronic devices, so that training or application of the model can be performed on those devices. Alternatively, training or application of the model may be performed directly on the original electronic device. Specifically, a first form may be generated in response to a data input instruction based on the data to be processed included in the data input instruction, and the first form may be rendered and displayed in a first area of the operation interface.
According to the embodiment of the disclosure, after the data input instruction is received, the display interface of the electronic device may switch from the main interface to the operation interface. A button-type control element may be configured on the operation interface, and the user may switch the display interface back from the operation interface to the main interface by clicking the control element.
According to the embodiment of the disclosure, the number of rows and columns of the first form may be determined according to the data to be processed, which is not limited herein.
According to the embodiment of the disclosure, after the data to be processed is received, the data to be processed may be input into the target model in response to a data processing instruction, so as to obtain model output data. A second form is generated based on the model output data, and the second form is rendered and displayed in a second area of the operation interface.
According to the embodiment of the disclosure, another button-type control element may be configured on the operation interface, and the electronic device may generate the data processing instruction upon detecting that the user has clicked this control element.
According to an embodiment of the disclosure, inputting the data to be processed into the target model means taking the data to be processed as an input parameter of the text block of the target model; the output parameter of the text block is the model output data.
According to embodiments of the present disclosure, the model output data may be represented as a matrix, and the number of rows and columns of the second form may be determined based on the number of rows and columns of the matrix.
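Treating the target model's text block as a callable, the processing step and the sizing of the second form can be sketched as follows. The function names are illustrative assumptions, and a nested list stands in for the output matrix:

```python
def run_target_model(target_model, data):
    """Invoke the target model's text block with the data to be processed
    as the input parameter; the return value is the model output data."""
    return target_model(data)

def form_shape(matrix):
    """Derive the second form's row and column counts from the output matrix,
    represented here as a list of equal-length rows."""
    rows = len(matrix)
    cols = len(matrix[0]) if rows else 0
    return rows, cols
```

A 2x3 output matrix would thus yield a second form with 2 rows and 3 columns rendered in the second area.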
Fig. 7 schematically illustrates a schematic diagram of an operational interface according to an embodiment of the present disclosure.
As shown in fig. 7, the operation interface may include a first region 710 and a second region 720. In response to the data input instruction, a first form may be generated based on the data to be processed included in the data input instruction, and the first form may be rendered for presentation in the first region 710. In response to the data processing instruction, after the model operation is performed on the data to be processed, the obtained model output data may be used to generate a second form, and the second form may be rendered and displayed in the second region 720.
According to an embodiment of the present disclosure, the operation interface may further include a third region 730, and the third region 730 may be used to present text blocks of the target model.
According to an embodiment of the present disclosure, as an alternative implementation, each model component may also perform model operations separately. Specifically, the data input instruction may include data to be processed for a single model component. In response to the data processing instruction, the data to be processed may be input into the functional text block of that model component as an input parameter, the output parameter of the functional text block may be taken as output data, and a second form may be generated and displayed based on the output data.
According to embodiments of the present disclosure, the input data or output data of each model component may have different data format requirements, and a preset multi-modal component exchange data format may be used to enable data exchange between model components with different data format requirements. Specifically, for first component data of a first target model component included in the target model, data format conversion is performed on the first component data using the multi-modal component exchange data format, based on the data format requirement of a second target model component, to obtain second component data. The second component data is then input into the second target model component.
According to an embodiment of the present disclosure, the second target model component is a downstream model component of the first target model component.
According to embodiments of the present disclosure, the multi-modal component exchange data format may support transmission of at least the following three types of information: first, KV key-value pairs, representing attributes; second, JSON strings, for transmitting data such as arrays and data frames; third, binary files, for transmitting files, models and other data in binary format.
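An exchange envelope that distinguishes the three payload types described above could be sketched as follows. The `pack`/`unpack` names and the dictionary layout are illustrative assumptions, not the disclosed format:

```python
import json

def pack(kind, payload):
    """Wrap component output in a multi-modal exchange envelope.

    kind is one of:
      "kv"     - a dict of attribute key/value pairs
      "json"   - tabular data (arrays, data frames) serialized as a JSON string
      "binary" - raw bytes (files, serialized models, ...)
    """
    if kind == "kv":
        assert isinstance(payload, dict)
        return {"kind": "kv", "data": dict(payload)}
    if kind == "json":
        return {"kind": "json", "data": json.dumps(payload)}
    if kind == "binary":
        assert isinstance(payload, (bytes, bytearray))
        return {"kind": "binary", "data": bytes(payload)}
    raise ValueError(f"unsupported kind: {kind}")

def unpack(envelope):
    """Convert an envelope back into the downstream component's native form."""
    kind, data = envelope["kind"], envelope["data"]
    return json.loads(data) if kind == "json" else data
```

A downstream model component would receive the unpacked data already converted to the format it requires.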
Fig. 8 schematically shows a block diagram of a model generating apparatus according to an embodiment of the present disclosure.
As shown in fig. 8, the model generating apparatus 800 includes an acquisition module 810, a first generating module 820, and a second generating module 830.
An obtaining module 810 is configured to obtain, in response to the component dragging instruction, a function text block of each of the plurality of model components associated with the component dragging instruction.
A first generation module 820 for generating a plurality of connection text blocks in response to the component connection instruction, wherein the connection text blocks represent connection relations between at least two model components.
The second generating module 830 is configured to generate a target model based on the plurality of functional text blocks and the plurality of connection text blocks.
According to an embodiment of the present disclosure, the acquisition module 810 includes a first acquisition unit and a second acquisition unit.
The first acquisition unit is used for responding to the component dragging instruction and determining a plurality of model components from the component library based on the cursor initial position information included in the component dragging instruction.
And the second acquisition unit is used for acquiring the functional text blocks of each of the plurality of model components from the resource library.
According to an embodiment of the present disclosure, the model generating device 800 further comprises a third generating module.
And the third generation module is used for rendering and generating component patterns related to each of the plurality of model components on the main interface in response to the component dragging instruction.
According to an embodiment of the present disclosure, the component drag instruction includes drag sub-instructions associated with each of the plurality of model components.
According to an embodiment of the present disclosure, the third generation module includes a first generation unit and a second generation unit.
The first generating unit is used for, for each model component, responding to the drag sub-instruction related to the model component, and determining a first target area on the main interface based on cursor dwell position information carried by the drag sub-instruction.
And the second generating unit is used for rendering and generating the component pattern related to the model component at the first target area.
According to an embodiment of the present disclosure, the first generation module 820 includes a third generation unit and a fourth generation unit.
And a third generating unit for rendering and generating a plurality of directed edges among a plurality of component patterns included in the main interface in response to the connection instruction.
And a fourth generation unit for generating, for each directed edge, a connection text block based on the directed edge and the component pattern associated with the directed edge.
According to an embodiment of the present disclosure, the component connection instruction includes a plurality of connection sub-instructions.
According to an embodiment of the present disclosure, the third generation unit includes a first generation subunit, a second generation subunit, and a third generation subunit.
And the first generation subunit is used for responding to the connection sub-instruction and determining a first target component pattern from the component patterns of the model components based on the cursor initial position information included by the connection sub-instruction.
And a second generation subunit, configured to determine a second target component pattern from the component patterns of each of the plurality of model components based on the cursor dwell position information included in the connection sub-instruction.
And the third generation subunit is used for rendering and generating a directed edge between the first target component pattern and the second target component pattern by taking one end of the first target component pattern as a starting point and one end of the second target component pattern as an end point.
According to an embodiment of the present disclosure, the model generating device 800 further comprises a fourth generating module.
And a fourth generation module for rendering and generating a configuration sub-interface on the main interface in response to a selection instruction of a third target component pattern included in the plurality of component patterns.
According to an embodiment of the present disclosure, the model generating device 800 further includes a determining module and a first display module.
And the determining module is used for responding to the component configuration instruction and determining a second target area from the configuration sub-interface based on the configuration item identification included in the component configuration instruction.
The first display module is used for rendering and displaying, in the second target area, configuration information included in the component configuration instruction.
According to an embodiment of the present disclosure, the model generating apparatus 800 further includes an updating module.
And the updating module is used for responding to the component configuration instruction and updating the functional text block related to the third target component pattern based on the configuration item identification and the configuration information.
According to an embodiment of the present disclosure, the model generating device 800 further includes a fifth generating module and a second displaying module.
And the fifth generation module is used for responding to the data input instruction and generating a first form based on the data to be processed included in the data input instruction.
And the second display module is used for rendering and displaying the first form in the first area of the operation interface.
According to an embodiment of the present disclosure, the model generating device 800 further includes an input module, a sixth generating module, and a third presentation module.
And the input module is used for responding to the data processing instruction, inputting the data to be processed into the target model and obtaining model output data.
And a sixth generation module for generating a second form based on the model output data.
And the third display module is used for rendering and displaying a second form in a second area of the operation interface.
According to an embodiment of the present disclosure, an input module includes a first input unit and a second input unit.
The first input unit is used for performing, for first component data of a first target model component included in the target model, data format conversion on the first component data by using the multi-modal component exchange data format based on the data format requirement of a second target model component, to obtain second component data, wherein the second target model component is a downstream model component of the first target model component.
And a second input unit for inputting the second component data into the second target model component.
Any number of the modules, sub-modules, units and sub-units according to embodiments of the present disclosure, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units and sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, sub-modules, units and sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on chip, a system on substrate, a system on package or an Application Specific Integrated Circuit (ASIC), or in any other reasonable way of integrating or packaging circuitry in hardware or firmware, or in any one of software, hardware and firmware, or in a suitable combination of any of the three. Alternatively, one or more of the modules, sub-modules, units and sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules which, when executed, may perform the corresponding functions.
For example, any of the acquisition module 810, the first generation module 820 and the second generation module 830 may be combined into one module/unit/sub-unit, or any one of them may be split into multiple modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to embodiments of the present disclosure, at least one of the acquisition module 810, the first generation module 820 and the second generation module 830 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on chip, a system on substrate, a system on package or an Application Specific Integrated Circuit (ASIC), or in any other reasonable way of integrating or packaging circuitry in hardware or firmware, or in any one of software, hardware and firmware, or in a suitable combination of any of the three. Alternatively, at least one of the acquisition module 810, the first generation module 820 and the second generation module 830 may be at least partially implemented as computer program modules that, when executed, perform the corresponding functions.
It should be noted that, in the embodiment of the present disclosure, the model generating device portion corresponds to the model generating method portion in the embodiment of the present disclosure, and the description of the model generating device portion specifically refers to the model generating method portion and is not described herein.
Fig. 9 schematically illustrates a block diagram of an electronic device adapted to implement a model generation method according to an embodiment of the disclosure. The electronic device shown in fig. 9 is merely an example, and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 9, an electronic device 900 according to an embodiment of the present disclosure includes a processor 901 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage section 908 into a Random Access Memory (RAM) 903. The processor 901 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 901 may also include on-board memory for caching purposes. The processor 901 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 903, various programs and data necessary for the operation of the electronic device 900 are stored. The processor 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. The processor 901 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 902 and/or the RAM 903. Note that the program may be stored in one or more memories other than the ROM 902 and the RAM 903. The processor 901 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the disclosure, the electronic device 900 may also include an input/output (I/O) interface 905, the input/output (I/O) interface 905 also being connected to the bus 904. The electronic device 900 may also include one or more of the following components connected to an input/output (I/O) interface 905: an input section 906 including a keyboard, a mouse, and the like; an output portion 907 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 908 including a hard disk or the like; and a communication section 909 including a network interface card such as a LAN card, a modem, or the like. The communication section 909 performs communication processing via a network such as the internet. The drive 910 is also connected to an input/output (I/O) interface 905 as needed. A removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on the drive 910 so that a computer program read out therefrom is installed into the storage section 908 as needed.
According to embodiments of the present disclosure, the method flow according to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from the network via the communication portion 909 and/or installed from the removable medium 911. The above-described functions defined in the system of the embodiments of the present disclosure are performed when the computer program is executed by the processor 901. The systems, devices, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 902 and/or RAM 903 and/or one or more memories other than ROM 902 and RAM 903 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program comprising program code for performing the methods provided by the embodiments of the present disclosure, the program code for causing an electronic device to implement the model generation methods provided by the embodiments of the present disclosure when the computer program product is run on the electronic device.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 901. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, downloaded and installed via the communication portion 909, and/or installed from the removable medium 911. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or in assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, C, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, segment or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.

Those skilled in the art will appreciate that the features recited in the various embodiments of the present disclosure and/or in the claims may be combined in various ways, even if such combinations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or in the claims may be combined in various ways without departing from the spirit and teachings of the present disclosure. All such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (16)

1. A model generation method, comprising:
responding to a component dragging instruction, and acquiring respective functional text blocks of a plurality of model components related to the component dragging instruction;
generating a plurality of connection text blocks in response to a component connection instruction, wherein the connection text blocks represent connection relations between at least two model components; and
and generating a target model based on a plurality of the functional text blocks and the plurality of the connection text blocks.
2. The method of claim 1, wherein the obtaining, in response to a component drag instruction, a function text block for each of a plurality of model components associated with the component drag instruction, comprises:
determining the plurality of model components from a component library based on cursor initial position information included in the component dragging instruction in response to the component dragging instruction; and
and acquiring the functional text blocks of each of the plurality of model components from a resource library.
3. The method of claim 1, further comprising:
in response to the component drag instruction, rendering on a main interface generates a component pattern associated with each of the plurality of model components.
4. The method of claim 3, wherein the component drag instruction comprises a drag sub-instruction associated with each of the plurality of model components;
wherein the rendering, in response to the component drag instruction, on a main interface generates a component pattern associated with each of the plurality of model components, comprising:
for each model component, responding to a drag sub-instruction related to the model component, and determining a first target area on the main interface based on cursor dwell position information carried by the drag sub-instruction; and
rendering at a first target area generates a component pattern related to the model component.
5. The method of claim 3, wherein the generating a plurality of connection text blocks in response to a component connection instruction comprises:
in response to the connection instruction, rendering among a plurality of component patterns included in the main interface to generate a plurality of directed edges; and
for each of the directed edges, generating the connected text block based on the directed edge and a component pattern associated with the directed edge.
6. The method of claim 5, wherein the component connection instruction comprises a plurality of connection sub-instructions;
wherein, in response to the connection instruction, rendering between a plurality of component patterns included in the main interface generates a plurality of directed edges, including:
determining a first target component pattern from the component patterns of each of the plurality of model components based on cursor initial position information included in the connection sub-instruction in response to the connection sub-instruction;
determining a second target component pattern from the component patterns of each of the plurality of model components based on cursor dwell position information included in the connection sub-instruction; and
and rendering and generating the directed edge between the first target component pattern and the second target component pattern by taking one end of the first target component pattern as a starting point and one end of the second target component pattern as an end point.
7. A method according to claim 3, further comprising:
and in response to a selection instruction for a third target component pattern included by the plurality of component patterns, rendering and generating a configuration sub-interface on the main interface.
8. The method of claim 7, further comprising:
in response to a component configuration instruction, determining a second target area from the configuration sub-interface based on a configuration item identifier included in the component configuration instruction; and
and rendering and displaying configuration information included in the component configuration instruction in the second target area.
9. The method of claim 8, further comprising:
and updating a function text block related to the third target component pattern based on the configuration item identification and the configuration information in response to the component configuration instruction.
10. The method of claim 1, further comprising:
responding to a data input instruction, and generating a first form based on data to be processed included in the data input instruction; and
and rendering and displaying the first form in a first area of the operation interface.
11. The method of claim 10, further comprising:
in response to a data processing instruction, inputting the data to be processed into the target model to obtain model output data;
generating a second form based on the model output data; and
rendering and displaying the second form in a second area of the operation interface.
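Claims 10-11 both reduce to "turn a batch of records into a displayable form". A minimal sketch under the assumption that the data arrives as a list of dicts and that a plain-text table is an acceptable stand-in for the rendered form; `build_form` is a hypothetical helper name:

```python
# Hypothetical sketch of claims 10-11: build a simple text "form"
# (header row plus data rows) from tabular data, ready to be rendered
# in an area of the operation interface.

def build_form(rows):
    """Build a plain-text form from a list of dicts sharing the same keys."""
    if not rows:
        return ""
    headers = list(rows[0].keys())
    lines = [" | ".join(headers)]
    for row in rows:
        lines.append(" | ".join(str(row.get(h, "")) for h in headers))
    return "\n".join(lines)
```

The first form would be built from the data to be processed, the second from the model output data; only the rendering target (first vs. second area) differs.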
12. The method of claim 11, wherein the inputting the data to be processed into the target model results in model output data, comprising:
for first component data of a first target model component included in the target model, performing data format conversion on the first component data by utilizing a multi-mode component exchange data format based on the data format requirement of a second target model component to obtain second component data, wherein the second target model component is a downstream model component of the first target model component; and
inputting the second component data into the second target model component.
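The conversion step in claim 12 can be sketched as a two-hop translation through a shared intermediate representation. The format names (`csv_row`, `record`) and function names below are hypothetical stand-ins for whatever the multi-mode component exchange data format actually defines:

```python
# Hypothetical sketch of claim 12: before feeding one component's output to
# a downstream component, lift it into a neutral "exchange" representation,
# then emit whatever format the downstream component requires.

def to_exchange(data, source_format: str) -> dict:
    """Lift component data into the neutral exchange representation."""
    if source_format == "csv_row":
        keys, values = data
        return {"fields": dict(zip(keys, values))}
    if source_format == "record":
        return {"fields": dict(data)}
    raise ValueError(f"unsupported source format: {source_format}")


def from_exchange(exchange: dict, target_format: str):
    """Render the exchange representation in the downstream component's format."""
    fields = exchange["fields"]
    if target_format == "record":
        return dict(fields)
    if target_format == "csv_row":
        return list(fields.keys()), list(fields.values())
    raise ValueError(f"unsupported target format: {target_format}")


def convert(data, source_format: str, target_format: str):
    """First component data -> exchange format -> second component data."""
    return from_exchange(to_exchange(data, source_format), target_format)
```

The benefit of the hub-and-spoke design is that adding an nth component format costs two converters (to and from the exchange format) rather than n-1 pairwise ones.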
13. A model generation apparatus comprising:
an acquisition module configured to, in response to a component dragging instruction, acquire the functional text blocks of each of a plurality of model components related to the component dragging instruction;
a first generation module configured to, in response to a component connection instruction, generate a plurality of connection text blocks, wherein the connection text blocks represent a connection relation between at least two model components; and
a second generation module configured to generate a target model based on the plurality of functional text blocks and the plurality of connection text blocks.
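The three modules of the claim-13 apparatus can be sketched as one small class. All names (`ModelGenerator`, `acquire`, `connect`, `generate`) are hypothetical, and the function/connection text blocks are assumed to be plain strings and dicts for illustration:

```python
# Hypothetical sketch of the claim-13 apparatus: an acquisition module that
# collects per-component function text blocks on drag, a first generation
# module that records connection text blocks on connect, and a second
# generation module that assembles both into a target model description.

class ModelGenerator:
    def __init__(self):
        self.function_blocks = {}    # component name -> function text block
        self.connection_blocks = []  # connection text blocks

    def acquire(self, components: dict):
        """Acquisition module: store function text blocks for dragged components."""
        self.function_blocks.update(components)

    def connect(self, source: str, target: str):
        """First generation module: record a connection between two components."""
        self.connection_blocks.append({"from": source, "to": target})

    def generate(self) -> dict:
        """Second generation module: combine both kinds of blocks into a model."""
        return {
            "components": dict(self.function_blocks),
            "connections": list(self.connection_blocks),
        }
```

In this reading, the "target model" is just the merged description; executing it (claims 11-12) would walk `connections` and feed each component's output downstream.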
14. An electronic device, comprising:
one or more processors;
a memory for storing one or more instructions,
wherein the one or more instructions, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1 to 12.
15. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to implement the method of any of claims 1 to 12.
16. A computer program product comprising computer executable instructions for implementing the method of any one of claims 1 to 12 when executed.
CN202310769470.3A 2023-06-27 2023-06-27 Model generation method, device, electronic equipment and storage medium Pending CN117093123A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310769470.3A CN117093123A (en) 2023-06-27 2023-06-27 Model generation method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117093123A true CN117093123A (en) 2023-11-21

Family

ID=88774200



Similar Documents

Publication Publication Date Title
CN109542399B (en) Software development method and device, terminal equipment and computer readable storage medium
US10579344B2 (en) Converting visual diagrams into code
JP6944548B2 (en) Automatic code generation
WO2021017735A1 (en) Smart contract formal verification method, electronic apparatus and storage medium
EP3304286B1 (en) Data binding dependency analysis
US10318595B2 (en) Analytics based on pipes programming model
US11048485B2 (en) User interface code re-use based upon machine learning of design documents
CN108170425B (en) Program code modification method and device and terminal equipment
KR20170057264A (en) Code development tool with multi-context intelligent assistance
CN112214210A (en) Logistics business rule engine and configuration method, device, equipment and storage medium thereof
CN111523021A (en) Information processing system and execution method thereof
US10691429B2 (en) Converting whiteboard images to personalized wireframes
CN112506503B (en) Programming method, device, terminal equipment and storage medium
CN112148276A (en) Visual programming for deep learning
US20190102148A1 (en) Development Environment for Real-Time Application Development
CN111027196A (en) Simulation analysis task processing method and device for power equipment and storage medium
CN113032003B (en) Development file export method, development file export device, electronic equipment and computer storage medium
CN115408003A (en) Data access method and device in virtual machine, electronic equipment and medium
CN114185618A (en) Business tool configuration method and device, computer equipment and storage medium
CN116416194A (en) Processing method, device and system of image modeling platform
CN111860476A (en) Method and system for recognizing images
CN113392014A (en) Test case generation method and device, electronic equipment and medium
CN115546356A (en) Animation generation method and device, computer equipment and storage medium
CN115357447A (en) Configuration debugging method and device for coarse-grained reconfigurable processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination