CN118153129A - Workpiece three-dimensional model generation method, device and medium based on fine tuning large model - Google Patents
- Publication number
- CN118153129A CN118153129A CN202410558221.4A CN202410558221A CN118153129A CN 118153129 A CN118153129 A CN 118153129A CN 202410558221 A CN202410558221 A CN 202410558221A CN 118153129 A CN118153129 A CN 118153129A
- Authority
- CN
- China
- Prior art keywords
- instruction
- model
- generation
- workpiece
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The application provides a workpiece three-dimensional model generation method, device and medium based on a fine-tuned large model, relating to the technical field of computers. In the method, a plug-in client acquires a generation instruction for a target workpiece entered on a control interface and, in response to a user's trigger operation on a submit control, sends request generation information to a cloud server. The cloud server obtains the input parameters corresponding to the generation instruction from the request generation information and inputs them to a workpiece generation model, which produces target code. A modeling application program runs the target code to generate a target three-dimensional model of the target workpiece, and the target three-dimensional model is rendered and displayed.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a workpiece three-dimensional model generation method, device and medium based on a fine tuning large model.
Background
Computer-aided design (CAD) modeling is a technology by which designers create models with the aid of computer programs. It can greatly improve designers' working efficiency and has been widely applied in industries such as machinery, electronics, aerospace, chemical engineering and construction.
At present, CAD automatic modeling methods mainly fall into the following categories: parametric modeling methods, feature-based methods, shape generation and evolutionary algorithms, the fusion of solid modeling with boundary representation, and applications of machine learning.
However, each of these existing CAD automatic modeling methods can only be applied in a specific field rather than across multiple fields or scenarios, and therefore suffers from low intelligence and applicability.
Disclosure of Invention
The invention aims to provide a workpiece three-dimensional model generation method, device and medium based on a fine-tuned large model, so as to solve the above technical problems in the prior art.
In order to achieve the above purpose, the technical scheme adopted by the embodiment of the application is as follows:
In a first aspect, an embodiment of the present application provides a workpiece three-dimensional model generation method based on a fine-tuned large model, applied to a workpiece three-dimensional model generation system. The system includes a terminal device, on which a modeling application program and a plug-in client of the modeling application program run, and a cloud server. The method comprises the following steps:
The plug-in client obtains a generation instruction for a target workpiece entered by a user on a control interface of the plug-in client, wherein the generation instruction comprises picture data, text data, audio data or video data, and the generation instruction is used for indicating feature information of the target workpiece;
The plug-in client responds to the triggering operation of a user on a submitting control in the control interface and sends request generation information to the cloud server, wherein the request generation information comprises the generation instruction;
The cloud server obtains input parameters corresponding to the generation instructions according to the request generation information, inputs the input parameters to a workpiece generation model deployed on the cloud server, analyzes and identifies the input parameters corresponding to the generation instructions by the workpiece generation model, generates target codes executable by the modeling application program, and sends the target codes to the plug-in client, wherein the workpiece generation model is obtained by training based on sample data of a plurality of fields in advance;
The plug-in client transmits the target code to the modeling application program, and the modeling application program runs the target code, generates a target three-dimensional model of the target workpiece and renders and displays the target three-dimensional model of the target workpiece.
Optionally, the method further comprises:
the plug-in client acquires an update instruction aiming at the target three-dimensional model and input by the user on the control interface, wherein the update instruction is used for indicating information to be updated of the target workpiece;
The plug-in client responds to the triggering operation of a user on the update control in the control interface, generates request update information according to the identification of the generation instruction and the information to be updated, and sends the request update information to the cloud server, wherein the request update information comprises the identification of the generation instruction and the information to be updated;
The cloud server acquires the identification of the generation instruction and the information to be updated according to the request updating information;
The cloud server updates the input parameters corresponding to the generation instruction according to the identification of the generation instruction and the information to be updated to obtain updated parameters;
the cloud server inputs the updated parameters into the workpiece generation model to obtain updated codes, and the updated codes are sent to the plug-in client;
The plug-in client transmits the updated codes to the modeling application program, the modeling application program runs the updated codes, generates an updated three-dimensional model of the target workpiece, and renders and displays the updated three-dimensional model.
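The update round trip above can be sketched as follows; the payload field names and the JSON encoding are illustrative assumptions, since the patent does not fix a wire format:

```python
import json

def build_update_request(instruction_id: str, info_to_update: dict) -> str:
    """Plug-in client side: package the identification of the generation
    instruction and the information to be updated into request update
    information (field names are hypothetical)."""
    payload = {
        "instruction_id": instruction_id,  # identifies the original generation instruction
        "info_to_update": info_to_update,  # e.g. a changed dimension or count
    }
    return json.dumps(payload)

def apply_update(input_params: dict, update_request: str) -> dict:
    """Cloud-server side: merge the information to be updated into the
    stored input parameters to obtain the updated parameters."""
    req = json.loads(update_request)
    updated = dict(input_params)
    updated.update(req["info_to_update"])
    return updated
```

For example, updating a bolt's thread count: `apply_update({"length_cm": 8, "threads": 50}, build_update_request("gen-001", {"threads": 60}))` leaves the length untouched and changes only the updated field.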
Optionally, the workpiece generating model is obtained by the following steps:
Obtaining an initial code sample set, the initial code sample set comprising: a plurality of unlabeled code samples for a plurality of domains;
performing unsupervised training on a pre-selected basic large model by using the initial code sample set to obtain an initial model;
obtaining a fine tuning dataset comprising: a plurality of instruction samples in a plurality of fields and actual codes corresponding to the instruction samples;
and performing full-parameter fine-tuning on the initial model based on the fine-tuning dataset to obtain the workpiece generation model.
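The two training stages above can be sketched as the following orchestration; the trainer functions are stand-ins that only record what a real training framework would do to the base large model:

```python
# Sketch of the two-stage recipe: unsupervised pre-training on unlabeled
# multi-domain code, then full-parameter fine-tuning on (instruction,
# actual code) pairs. The training logic here is a placeholder.

def unsupervised_pretrain(base_model: dict, code_samples: list) -> dict:
    # Stage 1: continue pre-training the base large model on raw code samples.
    model = dict(base_model)
    model["stages"] = model.get("stages", []) + ["pretrain"]
    model["seen_samples"] = len(code_samples)
    return model

def full_parameter_finetune(initial_model: dict, finetune_pairs: list) -> dict:
    # Stage 2: update *all* weights (no frozen layers or adapters) on
    # instruction -> code pairs drawn from multiple fields.
    model = dict(initial_model)
    model["stages"] = model["stages"] + ["full_finetune"]
    model["seen_pairs"] = len(finetune_pairs)
    return model

base = {"name": "base-large-model"}
code_set = ['(command "circle" ...)', "box 10 20 30"]            # unlabeled code samples
pairs = [("generate an 8cm bolt with 50 threads", "<target code>")]  # (instruction, actual code)

workpiece_model = full_parameter_finetune(unsupervised_pretrain(base, code_set), pairs)
```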
Optionally, performing unsupervised training on the pre-selected basic large model by using the initial code sample set to obtain an initial model, including:
Performing rotary position coding on a plurality of unlabeled code samples by adopting a pre-selected coding function to obtain a position coding matrix;
And carrying out multi-round iterative training on the basic large model according to the position coding matrix until the basic large model obtained by training converges to obtain an initial model.
Optionally, the pre-selected encoding function is Neural Tangent Kernel rotary position encoding (Neural Tangent Kernel-Rotary Position Embedding, abbreviated as NTK-RoPE).
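A common NTK-aware RoPE formulation enlarges the rotary base so that longer code samples can be position-encoded without retraining; the patent does not give the exact formula, so the scaling below (`base * scale ** (dim / (dim - 2))`) is an assumption drawn from typical NTK-RoPE implementations:

```python
def ntk_rope_angles(seq_len: int, dim: int, base: float = 10000.0,
                    scale: float = 1.0) -> list:
    """Build a (seq_len x dim/2) matrix of rotation angles for rotary
    position encoding. With scale > 1 the base is enlarged NTK-style,
    which shrinks the high-frequency rotations and extends the usable
    context length. cos/sin of these angles rotate query/key vectors
    in pairs."""
    ntk_base = base * scale ** (dim / (dim - 2))
    inv_freq = [ntk_base ** (-2 * i / dim) for i in range(dim // 2)]
    return [[pos * f for f in inv_freq] for pos in range(seq_len)]

# position 0 is always the zero angle; later positions rotate faster
angles = ntk_rope_angles(seq_len=4, dim=8, scale=2.0)
```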
Optionally, performing full-parameter fine-tuning on the initial model based on the fine-tuning dataset to obtain the workpiece generation model includes:
Randomly selecting first seed data pairs and second seed data pairs from the fine-tuning dataset, and generating a back-translation model based on the first seed data pairs; wherein each first seed data pair comprises an instruction sample and its corresponding actual code, and the second seed data pairs comprise a plurality of unlabeled code samples;
inputting each code sample in the second seed data pairs into the back-translation model to obtain a prediction instruction corresponding to each code sample;
obtaining an instruction set based on the second seed data pairs and the prediction instructions corresponding to the code samples;
scoring each seed data pair in the instruction set by using the initial model to obtain a quality score for each seed data pair;
screening the instruction set according to the quality scores of the seed data pairs in the instruction set to obtain a high-quality training set;
And carrying out iterative training on the initial model by using the high-quality training set, updating weight parameters of the initial model after each iteration until the updated initial model meets a preset convergence condition, and taking the initial model meeting the convergence condition as the workpiece generation model.
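The construction of the instruction set from the second seed data pairs can be sketched as follows; the back-translation model here is a stand-in callable (code to instruction) used for illustration only, since in practice it would be the model trained on the first seed data pairs:

```python
def build_instruction_set(second_seed_codes: list, back_translation_model) -> list:
    """For each unlabeled code sample, predict an instruction with the
    back-translation model and pair it with the code, yielding the
    instruction set of (prediction instruction, code sample) pairs."""
    return [(back_translation_model(code), code) for code in second_seed_codes]

# stand-in back-translation model, for illustration only
toy_back_translate = lambda code: f"write code that does: {code}"

instruction_set = build_instruction_set(["circle r=5", "box 10x20"],
                                        toy_back_translate)
```

Each resulting pair is then scored and screened as described above before the initial model is iteratively trained on the surviving pairs.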
Optionally, scoring each seed data pair in the instruction set using the initial model to obtain the quality score of each seed data pair includes:
Inputting the prediction instruction corresponding to the code sample in each seed data pair in the instruction set into the initial model to obtain a predicted code for each code sample;
and determining the similarity between the predicted code and the corresponding code sample, and taking the similarity as the quality score of that seed data pair.
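A minimal sketch of the similarity-based quality score; the patent does not name a similarity measure, so a character-level sequence ratio is used here as one plausible choice:

```python
import difflib

def quality_score(predicted_code: str, actual_code: str) -> float:
    """Similarity between the code the initial model predicts from the
    back-translated instruction and the original code sample, in [0, 1].
    Identical code scores 1.0; unrelated code scores near 0."""
    return difflib.SequenceMatcher(None, predicted_code, actual_code).ratio()
```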
Optionally, screening the instruction set according to the quality scores of the seed data pairs in the instruction set to obtain a high-quality training set includes:
And traversing each seed data pair in the instruction set and, if the quality score of a third seed data pair is smaller than a preset value, removing that third seed data pair from the instruction set, until all seed data pairs in the instruction set have been traversed, so as to obtain the high-quality training set.
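The threshold screening step can be sketched as:

```python
def screen_instruction_set(scored_pairs: list, threshold: float) -> list:
    """Traverse each (instruction, code, score) triple and drop those whose
    quality score is smaller than the preset value, leaving the
    high-quality training set of (instruction, code) pairs."""
    return [(instr, code) for (instr, code, score) in scored_pairs
            if score >= threshold]
```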
Optionally, the plug-in client sending the request generation information to the cloud server in response to a trigger operation of the user on a submit control in the control interface includes:
The plug-in client compresses the generation instruction to obtain a compressed instruction;
and encapsulating the compressed instruction into the request generation information by the plug-in client.
Optionally, the cloud server obtains the input parameters corresponding to the generation instruction according to the request generation information, including:
The cloud server extracts the compressed instruction from the request generation information;
decompressing, by the cloud server, the compressed instruction to obtain the generation instruction;
and the cloud server analyzes the generation instruction according to the data type of the generation instruction to obtain the input parameters corresponding to the generation instruction.
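Both halves of the compress/encapsulate and extract/decompress/parse round trip can be sketched as follows; zlib compression, base64 transport encoding and JSON packaging are illustrative assumptions, as the patent does not fix these formats:

```python
import base64
import json
import zlib

def encapsulate_request(generation_instruction: dict) -> str:
    """Plug-in client side: compress the generation instruction and
    encapsulate it into the request generation information."""
    raw = json.dumps(generation_instruction).encode("utf-8")
    compressed = base64.b64encode(zlib.compress(raw)).decode("ascii")
    return json.dumps({"compressed_instruction": compressed})

def extract_input_parameters(request_info: str) -> dict:
    """Cloud-server side: extract the compressed instruction, decompress it,
    and parse it according to its data type (JSON here) to recover the
    input parameters for the workpiece generation model."""
    compressed = json.loads(request_info)["compressed_instruction"]
    raw = zlib.decompress(base64.b64decode(compressed))
    return json.loads(raw)
```

Compressing before transmission reduces payload size for large inputs (e.g. picture or audio instructions), which matches the stated goal of improving transmission efficiency.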
In a second aspect, an embodiment of the present application further provides a terminal device, including a processor, a storage medium and a bus. The storage medium stores machine-readable instructions executable by the processor; when the terminal device runs, the processor and the storage medium communicate over the bus, and the processor executes the machine-readable instructions to perform the steps of the method provided in the first aspect.
In a third aspect, embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method as provided in the first aspect.
The beneficial effects of the application are as follows:
The application provides a workpiece three-dimensional model generation method, device and medium based on a fine-tuned large model. In the method, a plug-in client acquires a generation instruction for a target workpiece entered by a user on a control interface and, in response to the user's trigger operation on a submit control in the control interface, sends request generation information containing the generation instruction to a cloud server. The cloud server obtains the input parameters corresponding to the generation instruction from the request generation information and inputs them to a workpiece generation model deployed on the cloud server, which returns target code. The modeling application program then runs the target code, generates a target three-dimensional model of the target workpiece, and renders and displays it. Because the workpiece generation model is trained in advance on sample data from a plurality of fields, the method can generate three-dimensional models of target objects from generation instructions in different fields.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a three-dimensional model generating system for a workpiece according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method for generating a three-dimensional model of a workpiece based on a fine tuning large model according to an embodiment of the present application;
FIG. 3 is a first schematic interface diagram of a plug-in client according to an embodiment of the present application;
FIG. 4 is a flow chart of another method for generating a three-dimensional model of a workpiece based on a fine tuning large model according to an embodiment of the present application;
Fig. 5 is a second interface schematic diagram of a plug-in client according to an embodiment of the present application;
FIG. 6 is a flowchart of another method for generating a three-dimensional model of a workpiece based on a fine tuning large model according to an embodiment of the present application;
FIG. 7 is a flow chart of another method for generating a three-dimensional model of a workpiece based on a fine tuning large model according to an embodiment of the present application;
FIG. 8 is a flow chart of another method for generating a three-dimensional model of a workpiece based on a fine tuning large model according to an embodiment of the present application;
FIG. 9 is a flow chart of another method for generating a three-dimensional model of a workpiece based on a fine tuning large model according to an embodiment of the present application;
FIG. 10 is a flowchart of another method for generating a three-dimensional model of a workpiece based on a fine tuning large model according to an embodiment of the present application;
FIG. 11 is a flow chart of another method for generating a three-dimensional model of a workpiece based on a fine tuning large model according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for the purpose of illustration and description only and are not intended to limit the scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this disclosure, illustrates operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to or removed from the flow diagrams by those skilled in the art under the direction of the present disclosure.
In addition, the described embodiments are only some, but not all, embodiments of the application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that the term "comprising" will be used in embodiments of the application to indicate the presence of the features stated hereafter, but not to exclude the addition of other features.
At present, CAD automatic modeling methods mainly fall into the following major categories: parametric modeling techniques, feature-based methods, shape generation and evolutionary algorithms, the fusion of solid modeling with boundary representation, and applications of machine learning. Together, these approaches have driven the continued advancement of CAD automatic modeling technology.
Specifically: (1) parametric modeling techniques can be cumbersome when handling complex geometries and non-linear relationships, requiring careful selection and adjustment of parameters by the designer to ensure the accuracy of the model; (2) feature-based methods may not be robust enough in the face of noisy or heavily deformed data, and their high dependence on accurate feature extraction limits their adaptability in complex design scenarios; (3) the computational complexity of shape generation and evolutionary algorithms may lead to long computation times, especially for large-scale or highly complex designs, and the results of evolutionary algorithms depend on initial conditions, requiring additional parameter adjustment and optimization; (4) the fusion of solid modeling with boundary representation is promising in theory but may face difficult mathematical and computational challenges in practice, especially in balancing the accuracy and practicality of the model. In other words, there is no single mainstream method in the CAD automatic modeling field; the choice depends on specific design requirements and application scenarios. Each method has its areas of applicability and its advantages, and different engineering projects, design tasks or industries may prefer a particular type of CAD automatic modeling method.
Therefore, current CAD automatic modeling methods can each only be applied in a specific field rather than across multiple fields or scenarios, and suffer from low intelligence and applicability.
To address these problems, the application provides a workpiece three-dimensional model generation method based on a fine-tuned large model. A plug-in client acquires a generation instruction for a target workpiece entered by a user on a control interface and, in response to the user's trigger operation on a submit control in the control interface, sends request generation information containing the generation instruction to a cloud server. The cloud server obtains the input parameters corresponding to the generation instruction from the request generation information and inputs them to a workpiece generation model deployed on the cloud server. The modeling application program then runs the resulting target code, generates the target three-dimensional model of the target workpiece, and renders and displays it. Because the workpiece generation model is trained in advance on sample data from a plurality of fields, the method can generate three-dimensional models of target objects from generation instructions in different fields. The scheme can better adapt to the diverse and complex design requirements of different fields and industries, thereby overcoming the problem that current CAD automatic modeling methods can only be applied in specific fields and have low intelligence and applicability.
The architecture of the workpiece three-dimensional model generating system provided by the application will be described in detail through the following embodiments.
Referring to fig. 1, a schematic architecture diagram of a three-dimensional model generation system for a workpiece is shown, as shown in fig. 1, the system includes: terminal equipment and cloud server. The terminal equipment and the cloud server can be in communication connection through a wired network or a wireless network.
For example, the terminal device may be an electronic device with a data processing function, such as a smart phone, a computer, a tablet computer, or the like.
The cloud server may be a public cloud server or a private cloud server.
With continued reference to FIG. 1, a modeling application program and a plug-in client of the modeling application program run on the terminal device. For example, the modeling application may be a three-dimensional drawing application such as AutoLISP, SolidWorks or CAD.
The plug-in client of the modeling application program is a self-developed plug-in interface to the modeling application program. It realizes information interaction between the modeling application program and the workpiece generation model deployed on the cloud server: it connects seamlessly to the modeling application program, acquires the generation instruction for a workpiece entered by the user on the plug-in client's user interface, and transmits request generation information containing that generation instruction to the cloud server.
The cloud server obtains the input parameters corresponding to the generation instruction from the request generation information and inputs them to the workpiece generation model, which analyzes and identifies them to obtain target code. The target code is returned to the plug-in client, which transmits it to the modeling application program; the modeling application program runs the target code, generates the target three-dimensional model of the target workpiece, and renders and displays it.
It will be appreciated that the configuration shown in FIG. 1 is merely illustrative, and that the workpiece three-dimensional model generation system may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
The implementation principle and the corresponding generated beneficial effects of the three-dimensional model generation method of the workpiece based on the fine tuning large model provided by the application are described in the following through a plurality of specific embodiments.
FIG. 2 is a schematic flow chart of a method for generating a three-dimensional model of a workpiece based on a fine-tuned large model according to an embodiment of the present application. The method is applied to the workpiece three-dimensional model generation system provided in the embodiment of FIG. 1. It should be understood that in other embodiments the order of some steps of the method may be interchanged according to actual needs, and some steps may be omitted or deleted. As shown in FIG. 2, the method includes:
S201, the plug-in client acquires a generation instruction of a target workpiece input by a user on a control interface of the plug-in client.
Optionally, the modeling application program and the plug-in client of the modeling application program run on the terminal device; that is, information interaction between the modeling application program and the cloud server is realized through the plug-in client. The plug-in client can remain relatively lightweight, which reduces dependence on the user's computing resources and eases the local computing burden.
Wherein the generation instruction comprises picture data, text data, audio data or video data, and is used for indicating feature information of the target workpiece. The feature information may refer to the size, color, quantity, etc. of the target workpiece. For example, if the target workpiece is a bolt, its feature information may be that the bolt is 8 cm long and carries 50 threads; that is, the generation instruction entered by the user requests a bolt 8 cm long with 50 threads.
Optionally, the design requirement entered by the user on the control interface of the plug-in client, i.e. the generation instruction for the target workpiece, may be picture data, text data, audio data, video data, etc. The scheme thus supports diversified input modes: the user can choose text, picture, voice, video or any other supported mode as needed. This provides a more flexible way of working, improves the user's efficiency and experience, and reduces the learning cost.
For example, referring to fig. 3, which shows a schematic diagram of the control interface of the plug-in client, the design requirement entered by the user on the control interface may be the text "a bolt with a length of 8 cm and 50 threads"; alternatively, the user may directly upload a picture (or video, or audio) of such a bolt from a specified path. The user can thus express the design intent through intuitive natural language or voice input, without needing a deep understanding of the complex operation steps of CAD software, which significantly improves the user experience.
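For illustration only, the bolt example could be carried as a generation-instruction payload like the following; all field names are hypothetical, since the patent only requires that the instruction carry the input data and the feature information of the target workpiece:

```python
# A possible in-memory form of the generation instruction for the bolt
# example above. The schema is an illustrative assumption, not the
# patent's format.
generation_instruction = {
    "type": "text",  # could also be "picture", "audio" or "video"
    "content": "generate a bolt 8 cm long with 50 threads",
    "features": {"workpiece": "bolt", "length_cm": 8, "thread_count": 50},
}
```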
S202, the plug-in client responds to triggering operation of a user on a submitting control in the control interface, and sends request generation information to the cloud server.
Wherein the request generation information includes the generation instruction. Illustratively, the plug-in client may encrypt, compress, and encapsulate the design requirement input by the user to obtain the request generation information, so as to improve the efficiency and security of data transmission.
With continued reference to fig. 3, after confirming that the input design requirement is correct, the user may execute a triggering operation on the submit control in the control interface, and the terminal device responds to the triggering operation of the user on the submit control, and sends the request generation information including the generation instruction to the cloud server.
S203, the cloud server obtains input parameters corresponding to the generation instructions according to the request generation information, inputs the input parameters to a workpiece generation model deployed on the cloud server, analyzes and identifies the input parameters corresponding to the generation instructions by the workpiece generation model, generates target codes executable by the modeling application program, and sends the target codes to the plug-in client.
The workpiece generation model is obtained in advance by training on sample data from a plurality of fields. For example, the multi-domain sample data may include sample data from various different fields such as the mechanical, architectural, semiconductor, and optical fields; training the workpiece generation model on sample data from various different fields enables it to handle complex design tasks in different fields and improves the accuracy and adaptability of the model in generating the target object.
S204, the plug-in client transmits the target code to a modeling application program, the modeling application program runs the target code, generates a target three-dimensional model of the target workpiece, and renders and displays the target three-dimensional model of the target workpiece.
In this embodiment, for example, the cloud server inputs the input parameters corresponding to the generation instruction to the workpiece generation model, which analyzes and identifies them and outputs target code executable by the modeling application program. The cloud server sends the target code to the plug-in client, and the plug-in client transmits it to the modeling application program. When the user performs a click operation on the run control of the plug-in client, the modeling application program runs the target code, generates the target three-dimensional model of the target workpiece, and renders and displays it. In this way, a three-dimensional model meeting the user's design requirements can be generated with high quality and high efficiency from the generation instruction input by the user.
Optionally, in this scheme, the generation process and result of the target three-dimensional model are embedded into the modeling application program, so the user does not need to switch applications and a seamless flow from user requirement to model generation is realized, improving working efficiency and design consistency.
For example, the input generation instruction is: a bolt with a length of 8cm and 50 threads;
The object code output by the workpiece generation model is as follows:
(defun c:create-bolt ()
  (setq bolt_length 8.0)                             ; set the bolt length to 8cm
  (setq num_threads 50)                              ; set the number of threads to 50
  (setq thread_length (/ bolt_length 2.0))           ; assume each thread length is half the bolt length
  (command "_line" "_non" (list 0 0 0) "_non" (list 0 0 bolt_length) "") ; create the bolt body
  (setq start_point (list 0 0 0))
  (setq end_point (list 0 0 thread_length))
  (setq current_point end_point)
  (setq thread_spacing (/ bolt_length num_threads))  ; calculate the spacing between threads
  (repeat num_threads
    (command "_line" "_non" start_point "_non" end_point "") ; create a thread segment
    (setq start_point current_point)
    (setq current_point (mapcar '+ current_point (list 0 0 thread_spacing))) ; move to the start of the next thread
    (setq end_point (mapcar '+ current_point (list 0 0 thread_length)))      ; calculate the end point of the next thread segment
  )
)
(c:create-bolt) ; call the function to create the bolt
The scheme shows strong universality when handling different types of design tasks, so it can better meet the diverse and complex requirements of designs in different fields and industries. It thereby addresses the problems that current CAD automatic modeling methods apply only to specific fields, cannot be used across multiple fields or scenarios, and have low intelligence and applicability.
In summary, the embodiment of the application provides a workpiece three-dimensional model generation method based on a fine-tuned large model. The plug-in client obtains the generation instruction for the target workpiece input by the user on the control interface and, in response to the user's triggering operation on the submit control in the control interface, sends request generation information containing the generation instruction to the cloud server. The cloud server obtains the input parameters corresponding to the generation instruction from the request generation information and inputs them to the workpiece generation model deployed on the cloud server, which generates target code executable by the modeling application program and returns it to the plug-in client. The modeling application program then runs the target code to generate the target three-dimensional model of the target workpiece, and renders and displays it. That is, because the workpiece generation model is trained in advance on sample data from a plurality of fields, the method can generate a three-dimensional model of a target object from generation instructions for target objects in different fields.
Meanwhile, the modeling application program and the plug-in client of the modeling application program run on the terminal device; that is, information interaction between the modeling application program and the cloud server is realized through the plug-in client. The plug-in client can remain relatively lightweight, which reduces dependence on the user's computing resources, lowers the local computing burden, and improves the system response speed.
Optionally, referring to fig. 4, the method further includes:
S401, the plug-in client acquires an update instruction for the target three-dimensional model, which is input by the user on the control interface.
Optionally, if the generated target three-dimensional model does not meet the current latest design requirement of the user, a model needs to be generated again according to the update instruction.
The update instruction is used for indicating the information to be updated of the target workpiece. For example, if the information to be updated of workpiece 1 is its length, the update instruction only includes the updated length of workpiece 1, which avoids the low processing efficiency caused by the user repeatedly inputting information that has not changed.
In one implementation manner, for example, the user may directly select the target three-dimensional model of the target workpiece displayed in the modeling application program and input an update instruction for the target three-dimensional model on the control interface of the plug-in client. The update instruction is, for example, "a bolt with a length of 20cm and 120 threads"; that is, the length in the previous design requirement is modified to 20cm and the number of threads to 120. The plug-in client thus obtains the update instruction for the target three-dimensional model input by the user.
S402, the plug-in client responds to the triggering operation of the user on the update control in the control interface, generates request update information according to the identification of the generation instruction and the information to be updated, and sends the request update information to the cloud server.
The request update information comprises the identification of the generation instruction and the information to be updated. The identification of the generation instruction is unique; that is, the specific content of the generation instruction can be queried according to its identification.
For example, referring to fig. 5, after the user inputs an update instruction for the target three-dimensional model on the control interface, the user clicks the update control in the control interface. The terminal device responds to this triggering operation, obtains the identification ID of the generation instruction previously input for the target three-dimensional model, generates request update information according to the identification ID of the generation instruction and the information to be updated, and sends the request update information to the cloud server. The request update information includes: the identification ID of the generation instruction and the information to be updated.
S403, the cloud server acquires the identification of the generation instruction and the information to be updated according to the request update information.
S404, the cloud server updates the input parameters corresponding to the generation instruction according to the identification of the generation instruction and the information to be updated, and updated parameters are obtained.
Optionally, the cloud server analyzes the received request update information to obtain an identifier of the generation instruction and information to be updated, queries the storage area in the cloud server for the input parameter of the generation instruction by taking the identifier of the generation instruction as a query condition, and then updates the parameter item to be updated in the input parameter of the generation instruction according to the information to be updated to obtain the updated parameter.
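The lookup-and-merge step above can be sketched as follows; this is a minimal Python sketch in which the store layout, IDs, and field names are hypothetical illustrations, not specified by this application.

```python
# Hypothetical parameter store: input parameters saved under each
# generation instruction's unique identification ID.
instruction_store = {
    "req-001": {"length_cm": 8, "thread_count": 50},
}

def apply_update(instruction_id, to_update):
    """Query the saved input parameters by instruction ID, then merge in
    only the changed items to obtain the updated parameters."""
    params = dict(instruction_store[instruction_id])  # query by unique ID
    params.update(to_update)                          # overwrite only updated items
    return params                                     # untouched fields keep prior values
```

Because only changed fields travel in the update request, unchanged parameters such as the thread count survive the merge unmodified.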
S405, the cloud server inputs the updated parameters into the workpiece generation model to obtain updated codes, and the updated codes are sent to the plug-in client.
Optionally, in this embodiment, the generated object code is updated based on the updated parameter, so as to obtain an updated code, that is, the updated code may be partially the same as or completely different from the previously generated object code, and is mainly determined by the updated parameter.
S406, the plug-in client transmits the updated codes to the modeling application program, the modeling application program runs the updated codes, generates an updated three-dimensional model of the target workpiece, and renders and displays the updated three-dimensional model.
Optionally, the cloud server inputs the updated parameters into the workpiece generation model to obtain updated codes output by the workpiece generation model, the updated codes are sent to the plug-in client, then the plug-in client transmits the updated codes to the modeling application program, the user executes clicking operation on the operation control on the plug-in client, the modeling application program operates the updated codes to generate an updated three-dimensional model of the target workpiece, the updated three-dimensional model is rendered and displayed, and the three-dimensional model meeting the secondary design requirement of the user is generated with high quality and high efficiency. That is, the interactive modification function is introduced in the scheme, so that the user is allowed to directly modify and adjust the existing target model, and the user participation degree and satisfaction degree are improved.
In this embodiment, the user may put forward a modified design requirement based on the existing target three-dimensional model, i.e., input an update instruction for the target three-dimensional model on the control interface. The plug-in client returns the identification of the previous generation instruction together with the information to be updated to the cloud server, and the cloud server updates the input parameters corresponding to the generation instruction accordingly to obtain updated parameters, which are input to the workpiece generation model to obtain updated code. Secondary modification of the target three-dimensional model is thus realized: the user can interact with the existing target model in real time and quickly feed back modification requirements, which greatly shortens the design iteration period and improves design efficiency and flexibility.
In another implementation manner, for example, a user directly inputs an updated generation instruction for a target three-dimensional model on a control interface, the plug-in client sends the obtained updated generation instruction for the target three-dimensional model to the cloud server, the cloud server analyzes the obtained updated input parameters for the target three-dimensional model according to the updated generation instruction, the updated input parameters are input to the workpiece generation model to obtain updated codes output by the workpiece generation model, the cloud server sends the updated codes to the plug-in client, the plug-in client transmits the updated codes to the modeling application program, the modeling application program runs the updated codes to generate an updated three-dimensional model of the target workpiece, and secondary modification of the target three-dimensional model is achieved, and design efficiency and flexibility are improved.
The following embodiment specifically explains how the plug-in client obtains the request generation information according to the generation instruction.
Optionally, referring to fig. 6, step S202 includes:
S601, compressing the generated instruction by the plug-in client to obtain a compressed instruction.
S602, packaging the compressed instruction into request generation information by the plug-in client.
In this embodiment, considering that the generation instruction for the target workpiece input by the user on the control interface may comprise picture data, text data, audio data, or video data, the generation instruction has several possible data types, and generation instructions of different data types occupy different file sizes. To improve data transmission efficiency, a compression algorithm is used to compress the generation instruction into a compressed instruction; that is, larger picture, audio, or video data can be compressed into a smaller file, and a smaller file improves the data transmission speed.
Meanwhile, the compressed instruction can be encapsulated to obtain the request generation information; that is, the compressed instruction is encapsulated as request generation information for transmission, which prevents security problems such as illegal tampering with the data and improves security during data transmission.
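Steps S601 and S602 can be sketched as follows; this is a minimal Python sketch, with zlib compression and a SHA-256 checksum chosen as illustrative stand-ins since the application does not name a concrete compression or integrity scheme.

```python
import hashlib
import zlib

def build_request(generation_instruction: bytes) -> dict:
    """Compress the raw instruction payload and encapsulate it for transmission."""
    compressed = zlib.compress(generation_instruction)       # large picture/audio/video payloads shrink the most
    return {
        "payload": compressed,
        "checksum": hashlib.sha256(compressed).hexdigest(),  # lets the server detect tampering in transit
    }
```

A text instruction compresses little, but the same path handles multi-megabyte picture or video instructions, which is where the transmission-speed gain comes from.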
Optionally, the plug-in client may further encrypt the generated instruction to obtain an encrypted instruction.
Alternatively, referring to fig. 7, the step S203 includes:
S701, the cloud server extracts a compressed instruction from the request generation information.
S702, decompressing the compressed instruction by the cloud server to obtain a generated instruction.
S703, the cloud server analyzes the generation instruction according to the data type of the generation instruction to obtain the input parameters corresponding to the generation instruction.
The data type of the generation instruction refers to that the generation instruction is a picture, text, audio or video.
Optionally, the cloud server analyzes the request generation information, extracts a compressed instruction, and then decompresses the compressed instruction by adopting a decompression algorithm to obtain a generation instruction; and finally, the cloud server analyzes the generation instruction according to the data type of the generation instruction to obtain the input parameters corresponding to the generation instruction.
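The server-side mirror of that flow (extract, decompress, then branch on data type) can be sketched as follows; the request layout and type labels are hypothetical, matching the client sketch rather than anything specified by this application.

```python
import zlib

def parse_generation_instruction(request_info: dict) -> str:
    """Extract the compressed instruction, decompress it, then parse by data type."""
    raw = zlib.decompress(request_info["payload"])  # undo the client-side compression
    data_type = request_info["data_type"]           # "text", "picture", "audio" or "video"
    if data_type == "text":
        return raw.decode("utf-8")                  # text goes straight to the parameter parser
    # picture/audio/video would be routed to the matching recognizer here
    raise NotImplementedError(f"no parser wired up for {data_type!r}")
```

Each non-text branch would hand the decompressed bytes to its own recognizer before parameter extraction, which is why the dispatch happens after decompression rather than before.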
The work piece generation model will be specifically explained by the following examples.
Alternatively, referring to fig. 8, the work piece generation model is obtained as follows:
S801, acquiring an initial code sample set.
Wherein the initial set of code samples comprises: a plurality of unlabeled code samples for a plurality of domains.
Alternatively, code for AutoLISP and other CAD modeling applications may be collected; the collected code samples, after tokenization, should reach the ten-million-token level.
In this embodiment, a plurality of code samples in different fields, such as a mechanical field, a building field, a semiconductor field, an optical field, and the like, may be collected, and the collected code samples are cleaned and standardized, so as to ensure that the format of each code sample is consistent, and remove noise.
S802, performing unsupervised training on a pre-selected basic large model by using an initial code sample set to obtain an initial model.
The non-supervised learning process allows the underlying large model to extract useful features and patterns from the large-scale dataset without manually-annotated tag information.
For example, the pre-selected basic large model may be an Llama 2 model.
For example, the Llama 2 model can be pre-trained on the large-scale, multi-domain initial code sample set using an unsupervised learning method until the trained model meets a preset convergence condition; the model meeting the convergence condition is taken as the initial model. In the pre-training stage, the Llama 2 model acquires knowledge and features by learning an internal representation of the input data, for fine-tuning or transfer learning on a subsequent specific task.
In the training process, pyTorch is used as a training framework, the training process is optimized by combining DEEPSPEED acceleration library, and training tasks are distributed to a plurality of GPUs for parallel computation by using the distributed training and model parallel function of DEEPSPEED. And mixed precision training is adopted, and a dynamic diagram optimization function of DEEPSPEED is combined so as to reduce memory occupation and accelerate the calculation process. The batch size is selected to be 256 so as to fully utilize the parallel computing capability of the GPU and improve the training efficiency. The learning rate scheme adopts AdamW optimizers with initial learning rate of 1e-4, and uses the learning rate scheduling strategies of Warmup and LINEAR DECAY to perform linear increase and subsequent linear decay of the learning rate in the initial stage of training so as to stabilize the training process and improve the model convergence performance.
S803, acquiring a fine adjustment data set.
Wherein the fine-tuning dataset comprises: a plurality of instruction samples from a plurality of fields and the actual code corresponding to each instruction sample. That is, the fine-tuning dataset is labeled data for a specific task.
For example, the plurality of code samples collected from the plurality of fields may be manually labeled to obtain a label for each code sample, i.e., the instruction sample corresponding to that code sample; equivalently, each code sample may be recorded as the actual code corresponding to its instruction sample.
S804, performing full-parameter fine adjustment on the initial model based on the fine adjustment data set to obtain a workpiece generation model.
Fine-tuning (Fine-tuning) refers to further training and parameter adjustment of the initial model obtained by Pre-training by using labeled data of a specific task after the Pre-training (Pre-training) stage, so that the initial model obtains better performance on the target task.
In the fine tuning process, the parameters of the initial model obtained by pre-training are used as initial states, and then training is carried out on the labeled data of a specific task. Typically, only a small amount of tagged data is available for fine tuning, which enables the model to better adapt to the specific requirements of the target task.
In this embodiment, an initial model obtained by pre-training is used as a starting point, then, a plurality of instruction samples in a plurality of fields in a fine adjustment data set with specific tasks and actual codes corresponding to the instruction samples are used to perform full-parameter fine adjustment on the initial model, so as to obtain a workpiece generation model, improve the generalization capability and universality of the workpiece generation model in the field of model generation, process various complex design requirements, and quickly generate a target model meeting user requirements.
Optionally, referring to fig. 9, step S802 includes:
S901, performing rotary position coding on a plurality of unlabeled code samples by adopting a pre-selected coding function to obtain a position coding matrix.
Position codes enable the model to perceive positional features. Position codes are divided into absolute position codes and relative position codes. Absolute position codes only attend to individual position information and cannot extrapolate, which constrains the input sentence length: when an input sentence is longer than the sentences used in training, the position codes of the portion exceeding the training sentence length cannot be represented. Rotary position embedding (Rotary Position Embedding, RoPE), a relative position code inspired by the complex exponential form, encodes position information by applying rotational transformations to vectors.
Optionally, the pre-selected encoding function is neural tangent kernel rotary position encoding (Neural Tangent Kernel Rotary Position Embedding, NTK-RoPE for short). The key to the NTK extension is high-frequency extrapolation and low-frequency interpolation; it is implemented by directly scaling the base, similar to a change of number base.
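The "directly scale the base" step can be sketched numerically; this is a minimal Python sketch of the per-dimension RoPE rotation frequencies with one common NTK-style base-scaling exponent, offered as an illustration rather than the application's exact formula.

```python
def rope_inv_freq(dim: int, base: float = 10000.0, ntk_scale: float = 1.0) -> list:
    """Per-pair RoPE rotation frequencies, with the NTK trick of scaling the base.

    The highest-frequency dimension (i = 0) is left untouched (extrapolation)
    while lower-frequency dimensions are stretched (interpolation).
    """
    scaled_base = base * ntk_scale ** (dim / (dim - 2))   # directly scale the base
    return [scaled_base ** (-2.0 * i / dim) for i in range(dim // 2)]
```

With `ntk_scale > 1` the low-frequency dimensions rotate more slowly, so longer inputs reuse the angle range seen in training instead of running off the end of it.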
S902, performing multi-round iterative training on the basic large model according to the position coding matrix until the basic large model obtained by training converges to obtain an initial model.
In this example, for example, an NTK-RoPE coding function is used to perform rotary position coding on a plurality of unlabeled code samples to obtain a position coding matrix; and performing multiple rounds of iterative training on Llama 2 based on the position coding matrix until the basic large model obtained by training converges to obtain an initial model.
Optionally, referring to fig. 10, step S804 includes:
S1001, randomly selecting a first seed data pair and a second seed data pair from the fine-tuning dataset, and generating a back-translation model based on the first seed data pair.
Wherein the first seed data pair comprises a plurality of code samples and the instruction samples labeled for those code samples, and the second seed data pair comprises a plurality of unlabeled code samples.
It will be appreciated that the model may be trained from two directions, one to generate code from the instructions and the other to generate instructions from the code. Therefore, the scheme provides that the code samples in the first seed data pair and the instruction samples marked on the code samples can be used for training to obtain a back translation model, namely the input of the back translation model is a code, and the output of the back translation model is an instruction.
S1002, inputting each code sample in the second seed data pair into the back translation model to obtain a prediction instruction corresponding to each code sample.
S1003, obtaining an instruction set based on the predicted instructions corresponding to the code samples in the second seed data pair.
In this embodiment, the plurality of unlabeled code samples in the second seed data pair are respectively input into the trained back-translation model, which generates a prediction instruction for each code sample. A manual review step may also be used to audit and complete the prediction instruction for each code sample: if the prediction instruction corresponding to a first code sample passes review, that prediction instruction and the first code sample are added to the instruction set, until every code sample in the second seed data pair has been traversed and the instruction set is obtained. This further reduces the amount of manual labeling.
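Steps S1002 and S1003 can be sketched as the following loop; the back-translation call and the review hook are hypothetical function parameters standing in for the trained model and the (possibly human) audit step.

```python
def build_instruction_set(unlabeled_code, back_translate, passes_review):
    """Pair each unlabeled code sample with its predicted instruction,
    keeping only the pairs that pass review."""
    instruction_set = []
    for code in unlabeled_code:
        instruction = back_translate(code)     # back-translation model: code -> instruction
        if passes_review(instruction, code):   # audit step reduces manual labeling effort
            instruction_set.append((instruction, code))
    return instruction_set
```

Plugging in the real back-translation model for `back_translate` and an auditor callback for `passes_review` yields the instruction set used in the subsequent scoring step.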
S1004, scoring the seed data pairs in the instruction set by using the initial model to obtain a quality score for each seed data pair.
Optionally, considering that not all seed data pairs in the instruction set obtained by the above enhancement process are of high quality, using all of them for training is not necessarily beneficial. Therefore, high-quality data needs to be screened out of the instruction set to construct a high-quality training set.
S1005, screening the instruction set according to the quality scores of the seed data pairs in the instruction set to obtain a high-quality training set.
In this embodiment, the initial model is used to score each seed data pair in the instruction set to obtain its quality score; if the quality score of seed data pair i in the instruction set is greater than the average value, seed data pair i is taken as a high-quality training data pair. The instruction set can be screened in this manner to obtain a high-quality training set.
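The above-average screening rule is a one-liner; this minimal sketch assumes each seed data pair arrives already paired with its quality score.

```python
def screen_by_mean(scored_pairs):
    """Keep only the seed data pairs whose quality score exceeds the average."""
    mean = sum(score for _, score in scored_pairs) / len(scored_pairs)
    return [pair for pair, score in scored_pairs if score > mean]
```

The alternative screening rule described later, a fixed preset threshold instead of the mean, differs only in the comparison value.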
S1006, performing iterative training on the initial model by using the high-quality training set, updating weight parameters of the initial model after each iteration until the updated initial model meets preset convergence conditions, and taking the initial model meeting the convergence conditions as a workpiece generation model.
In one implementation manner, for example, the screened high-quality training set may be used to perform multiple rounds of iterative training (for example, 3 rounds) on the pre-trained initial model, updating the weight parameters of the initial model after each iteration until the updated initial model meets the preset convergence condition; the initial model meeting the convergence condition is taken as the workpiece generation model. This ensures that the finally trained workpiece generation model can handle complex design requirements in different fields and has relatively strong universality.
Alternatively, referring to fig. 11, the step S1004 includes:
S1101, inputting the prediction instruction corresponding to the code sample of each seed data pair in the instruction set into the initial model to obtain a predicted code for the code sample of each seed data pair in the instruction set.
S1102, determining, for each seed data pair in the instruction set, the similarity between the predicted code and the code sample, the similarity serving as the quality score of that seed data pair.
In one implementation manner, for example, the prediction instruction corresponding to code sample 1 of seed data pair i in the instruction set may be input into the initial model to obtain the predicted code output by the initial model. The similarity S between code sample 1 and the predicted code is then calculated, and this similarity S is used as the quality score of seed data pair i in the instruction set.
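The application does not fix a concrete similarity measure; a minimal sketch using Python's difflib sequence matching as a stand-in looks like this.

```python
import difflib

def quality_score(code_sample: str, predicted_code: str) -> float:
    """Similarity between the ground-truth code and the code the initial model
    regenerates from the predicted instruction; higher means a better pair."""
    return difflib.SequenceMatcher(None, code_sample, predicted_code).ratio()
```

A perfect round trip (instruction regenerates the original code exactly) scores 1.0, while unrelated code scores near 0, so the score orders seed data pairs for the screening step.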
Optionally, step S1005 includes:
Each seed data pair in the instruction set is traversed; if the quality score of a third seed data pair is smaller than a preset value, the third seed data pair is removed from the instruction set, until all seed data pairs in the instruction set have been traversed, thereby obtaining the high-quality training set.
The third seed data pair is any seed data pair in the instruction set.
In this embodiment, for example, each seed data pair in the instruction set is traversed, and if the quality score of a seed data pair j is smaller than the preset value, the seed data pair j is removed from the instruction set; that is, the seed data pairs with quality scores greater than or equal to the preset value are selected as the high-quality data set.
Fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application, where the terminal device may be an electronic device with a data processing function, such as a smart phone, a computer, or the like.
The terminal device includes: processor 1201, memory 1202.
The memory 1202 is used for storing a program, and the processor 1201 calls the program stored in the memory 1202 to execute the above-described method embodiment. The specific implementation manner and the technical effect are similar, and are not repeated here.
Optionally, the present invention also provides a program product, such as a computer readable storage medium, comprising a program for performing the above-described method embodiments when being executed by a processor.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform some of the steps of the methods according to the embodiments of the invention. The aforementioned storage medium includes: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Claims (11)
1. A workpiece three-dimensional model generation method based on a fine-tuned large model, applied to a workpiece three-dimensional model generation system, wherein the workpiece three-dimensional model generation system comprises: a terminal device running a modeling application program and a plug-in client of the modeling application program, and a cloud server; the method comprises the following steps:
the plug-in client obtains a generation instruction of a target workpiece input by a user on a control interface of the plug-in client, wherein the generation instruction is used for indicating characteristic information of the target workpiece;
the plug-in client, in response to a triggering operation by the user on a submit control in the control interface, sends request generation information to the cloud server, wherein the request generation information comprises the generation instruction;
the cloud server obtains, according to the request generation information, input parameters corresponding to the generation instruction, and inputs the input parameters to a workpiece generation model deployed on the cloud server; the workpiece generation model analyzes and identifies the input parameters corresponding to the generation instruction, generates target code executable by the modeling application program, and sends the target code to the plug-in client, wherein the workpiece generation model is obtained in advance by training on sample data from a plurality of fields;
the plug-in client transmits the target code to the modeling application program; the modeling application program runs the target code to generate a target three-dimensional model of the target workpiece, and renders and displays the target three-dimensional model;
the workpiece generation model is obtained by the following steps:
obtaining an initial code sample set, the initial code sample set comprising: a plurality of unlabeled code samples from a plurality of fields;
performing unsupervised training on a pre-selected basic large model by using the initial code sample set to obtain an initial model;
obtaining a fine tuning dataset comprising: a plurality of instruction samples in a plurality of fields and actual codes corresponding to the instruction samples;
and performing full-parameter fine-tuning on the initial model based on the fine-tuning dataset to obtain the workpiece generation model.
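The two-stage training recited above (unsupervised pretraining on unlabeled code, then full-parameter fine-tuning on instruction-code pairs) can be illustrated with a deliberately tiny stand-in model. The claims do not specify an architecture, so a character-bigram count table plays the role of the large model here, with every entry updated in both stages to mirror full-parameter (rather than adapter-based) tuning; all sample strings are made up for illustration.

```python
# Toy sketch of the claimed two-stage training pipeline. The "model" is a
# character-bigram count table; every entry ("parameter") is updated in both
# stages, mirroring full-parameter fine-tuning rather than a frozen subset.
from collections import defaultdict


class TinyCodeLM:
    def __init__(self):
        # counts[a][b] = how often character b follows character a
        self.counts = defaultdict(lambda: defaultdict(int))

    def train_on(self, text):
        for a, b in zip(text, text[1:]):
            self.counts[a][b] += 1


def unsupervised_pretrain(model, code_samples):
    # Stage 1 of claim 1: unlabeled code samples only.
    for code in code_samples:
        model.train_on(code)


def full_parameter_finetune(model, instruction_pairs):
    # Stage 2: instruction samples paired with their actual code; the whole
    # count table is updated, i.e. "full-parameter" at toy scale.
    for instruction, code in instruction_pairs:
        model.train_on(instruction + "\n" + code)


model = TinyCodeLM()
unsupervised_pretrain(model, ["cube(10);", "sphere(r=5);"])
full_parameter_finetune(model, [("make a cube", "cube(10);")])
```

Both stages write into the same table, which is the point the claim makes: fine-tuning adjusts all weights of the pretrained initial model, not an added adapter.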
2. The method as recited in claim 1, further comprising:
the plug-in client acquires, on the control interface, an update instruction for the target three-dimensional model input by the user, wherein the update instruction is used for indicating information to be updated of the target workpiece;
the plug-in client, in response to a triggering operation by the user on an update control in the control interface, generates request update information according to an identification of the generation instruction and the information to be updated, and sends the request update information to the cloud server, wherein the request update information comprises the identification of the generation instruction and the information to be updated;
The cloud server acquires the identification of the generation instruction and the information to be updated according to the request updating information;
The cloud server updates the input parameters corresponding to the generation instruction according to the identification of the generation instruction and the information to be updated to obtain updated parameters;
the cloud server inputs the updated parameters into the workpiece generation model to obtain updated code, and sends the updated code to the plug-in client;
the plug-in client transmits the updated code to the modeling application program; the modeling application program runs the updated code to generate an updated three-dimensional model of the target workpiece, and renders and displays the updated three-dimensional model.
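The server-side step of claim 2, merging the information to be updated into the stored input parameters looked up by the identification of the generation instruction, reduces to a keyed dictionary merge. A minimal sketch; the request identifier `req-001` and the parameter names below are illustrative, not taken from the patent:

```python
# Hypothetical server-side store of input parameters, keyed by the
# identification of the generation instruction (claim 2).
stored_params = {
    "req-001": {"shape": "flange", "diameter_mm": 80, "holes": 4},
}


def apply_update(instruction_id, to_update):
    # Merge the user's information-to-be-updated over the stored parameters,
    # keep the merged result, and return it for re-running the model.
    params = dict(stored_params[instruction_id])  # copy stored parameters
    params.update(to_update)                      # overwrite changed fields only
    stored_params[instruction_id] = params
    return params


updated = apply_update("req-001", {"holes": 6})
```

Only the changed fields travel in the update request; unchanged parameters are recovered from the server-side store, which matches the claim's split into "identification" plus "information to be updated".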
3. The method of claim 1, wherein performing unsupervised training on the pre-selected basic large model using the initial set of code samples to obtain an initial model, comprising:
performing rotary position encoding on the plurality of unlabeled code samples by adopting a pre-selected encoding function to obtain a position encoding matrix;
and carrying out multiple rounds of iterative training on the basic large model according to the position encoding matrix until the trained basic large model converges, so as to obtain the initial model.
4. The method according to claim 3, wherein the pre-selected encoding function is a neural tangent kernel (NTK) rotary position encoding.
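The claims do not define the NTK rotary position encoding themselves; a common reading is NTK-aware RoPE, in which the frequency base is rescaled so the encoding extrapolates to sequences longer than those seen in training. A minimal sketch, assuming the widely used base-scaling rule base' = base * scale**(dim / (dim - 2)):

```python
import math


def rope_inv_freq(dim, base=10000.0):
    # Standard rotary position encoding frequencies: one per pair of channels.
    return [base ** (-2 * i / dim) for i in range(dim // 2)]


def ntk_rope_inv_freq(dim, base=10000.0, scale=4.0):
    # "NTK-aware" variant: the base is rescaled so low frequencies stretch to
    # cover a context roughly `scale` times longer while high frequencies stay
    # close to the original. Exact constants vary between implementations;
    # this follows the commonly used rule base' = base * scale**(dim/(dim-2)).
    new_base = base * scale ** (dim / (dim - 2))
    return [new_base ** (-2 * i / dim) for i in range(dim // 2)]


def rotate(pos, inv_freq):
    # Position `pos` is encoded as one (cos, sin) rotation per channel pair.
    return [(math.cos(pos * f), math.sin(pos * f)) for f in inv_freq]
```

The highest frequency (index 0) is unchanged at 1.0 in both variants, while the lower frequencies shrink under NTK scaling, which is what lets the same head dimensions index longer code files.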
5. The method of claim 1, wherein performing full-parameter fine-tuning on the initial model based on the fine-tuning dataset to obtain the workpiece generation model comprises:
randomly selecting a first seed data pair and a second seed data pair from the fine-tuning dataset, and generating a back-translation model based on the first seed data pair, wherein the first seed data pair comprises instruction samples and the actual codes corresponding to the instruction samples, and the second seed data pair comprises a plurality of unlabeled code samples;
Inputting each code sample in the second seed data pair into the back translation model to obtain a prediction instruction corresponding to each code sample;
Obtaining an instruction set based on the second seed data pair and the prediction instructions corresponding to the code samples;
scoring each seed data pair in the instruction set by using the initial model to obtain a quality score for each seed data pair;
screening the instruction set according to the quality scores of the seed data pairs in the instruction set to obtain a high-quality training set;
and carrying out iterative training on the initial model by using the high-quality training set, updating the weight parameters of the initial model after each iteration until the updated initial model meets a preset convergence condition, and taking the initial model meeting the convergence condition as the workpiece generation model.
6. The method of claim 5, wherein scoring each seed data pair in the instruction set using the initial model to obtain a quality score for each seed data pair comprises:
inputting the prediction instruction corresponding to the code sample in each seed data pair in the instruction set into the initial model to obtain a predicted code for the code sample in each seed data pair;
and determining the similarity between the predicted code of the code sample in each seed data pair and the code sample itself, and taking the similarity as the quality score of that seed data pair in the instruction set.
7. The method of claim 5, wherein the screening the instruction set according to the quality scores of the seed data pairs in the instruction set to obtain a high-quality training set comprises:
traversing each seed data pair in the instruction set, and if the quality score of a third seed data pair is smaller than a preset value, removing the third seed data pair from the instruction set, until all seed data pairs in the instruction set have been traversed, so as to obtain the high-quality training set.
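Claims 6 and 7 together amount to score-then-threshold filtering of the back-translated instruction set. The claims fix neither the similarity metric nor the preset value, so the sketch below stands in difflib's sequence-matching ratio and a threshold of 0.8; the instruction/code strings are made up for illustration.

```python
import difflib


def quality_score(predicted_code, reference_code):
    # Claim 6 scores a seed data pair by how similar the code regenerated
    # from the predicted instruction is to the original code sample; the
    # claims do not fix a metric, so difflib's ratio (0.0-1.0) stands in here.
    return difflib.SequenceMatcher(None, predicted_code, reference_code).ratio()


def screen_instruction_set(scored_pairs, preset_value=0.8):
    # Claim 7: drop every seed data pair whose quality score falls below the
    # preset value; the survivors form the high-quality training set.
    return [pair for pair, score in scored_pairs if score >= preset_value]


pairs = [
    (("make a 10 mm cube", "cube(10);"), quality_score("cube(10);", "cube(10);")),
    (("make a sphere", "sphere(5);"), quality_score("cube(9);", "sphere(5);")),
]
training_set = screen_instruction_set(pairs)
```

A pair survives only when the initial model can reproduce the original code from the predicted instruction, so the filter discards instructions the back-translation model hallucinated.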
8. The method of claim 1, wherein the sending, by the plug-in client, request generation information to the cloud server in response to a triggering operation by a user on a submit control in the control interface comprises:
the plug-in client compresses the generation instruction to obtain a compressed instruction;
and the plug-in client encapsulates the compressed instruction into the request generation information.
9. The method of claim 8, wherein the cloud server obtaining, according to the request generation information, the input parameters corresponding to the generation instruction, includes:
The cloud server extracts the compressed instruction from the request generation information;
the cloud server decompresses the compressed instruction to obtain the generation instruction;
and the cloud server analyzes the generation instruction according to the data type of the generation instruction to obtain the input parameters corresponding to the generation instruction.
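Claims 8 and 9 describe a compress/encapsulate step on the client and an extract/decompress/parse step on the server, without naming a compression scheme or wire format. The sketch below assumes JSON plus zlib with a base64-encoded payload field; all of those are illustrative choices, not recited in the claims.

```python
import base64
import json
import zlib


def pack_request(generation_instruction):
    # Client side (claim 8): serialize and compress the generation
    # instruction, then encapsulate it in the request generation information.
    raw = json.dumps(generation_instruction).encode("utf-8")
    compressed = zlib.compress(raw)
    return {"payload": base64.b64encode(compressed).decode("ascii")}


def unpack_request(request_generation_info):
    # Server side (claim 9): extract the compressed instruction, decompress
    # it, and parse it back into the input parameters.
    compressed = base64.b64decode(request_generation_info["payload"])
    return json.loads(zlib.decompress(compressed).decode("utf-8"))


instr = {"workpiece": "gear", "teeth": 24, "module_mm": 2.0}
assert unpack_request(pack_request(instr)) == instr
```

The round trip being lossless is the property the two claims rely on: whatever the client encapsulates, the server recovers the identical generation instruction before parsing it by data type.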
10. A terminal device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the terminal device is running, the processor executing the machine-readable instructions to perform the steps of the method of any of claims 1-9.
11. A computer readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, performs the method according to any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410558221.4A CN118153129B (en) | 2024-05-08 | 2024-05-08 | Workpiece three-dimensional model generation method, device and medium based on fine tuning large model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410558221.4A CN118153129B (en) | 2024-05-08 | 2024-05-08 | Workpiece three-dimensional model generation method, device and medium based on fine tuning large model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118153129A true CN118153129A (en) | 2024-06-07 |
CN118153129B CN118153129B (en) | 2024-07-12 |
Family
ID=91285284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410558221.4A Active CN118153129B (en) | 2024-05-08 | 2024-05-08 | Workpiece three-dimensional model generation method, device and medium based on fine tuning large model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118153129B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118394321A (en) * | 2024-06-27 | 2024-07-26 | 中国航空工业集团公司金城南京机电液压工程研究中心 | Training sample generation method and device for modeling part three-dimensional solid model |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210342490A1 (en) * | 2020-05-04 | 2021-11-04 | Cerebri AI Inc. | Auditable secure reverse engineering proof machine learning pipeline and methods |
CN116048489A (en) * | 2023-01-10 | 2023-05-02 | 山东大学 | Zero code system for target detection and construction method |
CN117932713A (en) * | 2024-03-18 | 2024-04-26 | 中南民族大学 | Cloud native CAD software gesture interaction geometric modeling method, system, device and equipment |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210342490A1 (en) * | 2020-05-04 | 2021-11-04 | Cerebri AI Inc. | Auditable secure reverse engineering proof machine learning pipeline and methods |
CN116048489A (en) * | 2023-01-10 | 2023-05-02 | 山东大学 | Zero code system for target detection and construction method |
CN117932713A (en) * | 2024-03-18 | 2024-04-26 | 中南民族大学 | Cloud native CAD software gesture interaction geometric modeling method, system, device and equipment |
Non-Patent Citations (1)
Title |
---|
LU HONG, HUANG WENHUA: "Research and Application of DNC Network NC Machining and Post-Processing Technology", Machinery Manufacturing (机械制造), no. 09, 20 September 2005 (2005-09-20) *
Also Published As
Publication number | Publication date |
---|---|
CN118153129B (en) | 2024-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN118153129B (en) | Workpiece three-dimensional model generation method, device and medium based on fine tuning large model | |
CN110612538B (en) | Generating discrete potential representations of input data items | |
CN112418292B (en) | Image quality evaluation method, device, computer equipment and storage medium | |
WO2023184759A1 (en) | Method and apparatus for completing shape of three-dimensional object, and device and storage medium | |
CN117454495B (en) | CAD vector model generation method and device based on building sketch outline sequence | |
CN115601485B (en) | Data processing method of task processing model and virtual character animation generation method | |
CN114998583B (en) | Image processing method, image processing apparatus, device, and storage medium | |
CN115757725A (en) | Question and answer processing method and device, computer equipment and storage medium | |
CN116958325A (en) | Training method and device for image processing model, electronic equipment and storage medium | |
CN116975357A (en) | Video generation method, device, electronic equipment, storage medium and program product | |
CN117079651B (en) | Speech cross real-time enhancement implementation method based on large-scale language model | |
CN117274450A (en) | Animation image generation system and method based on artificial intelligence | |
CN113033337A (en) | TensorRT-based pedestrian re-identification method and device | |
CN117557708A (en) | Image generation method, device, storage medium and computer equipment | |
Li et al. | Efficient spatially sparse inference for conditional gans and diffusion models | |
CN115906863B (en) | Emotion analysis method, device, equipment and storage medium based on contrast learning | |
CN114333069B (en) | Object posture processing method, device, equipment and storage medium | |
CN111767395B (en) | Abstract generation method and system based on pictures | |
CN114596203A (en) | Method and apparatus for generating images and for training image generation models | |
CN111008276B (en) | Complete entity relationship extraction method and device | |
Ali et al. | Implementation of image processing system using handover technique with map reduce based on big data in the cloud environment. | |
WO2024174583A9 (en) | Model training method and apparatus, and device, storage medium and product | |
US20230316474A1 (en) | Enhancing detailed segments in latent code-based edited digital images | |
CN118155270B (en) | Model training method, face recognition method and related equipment | |
CN117786147B (en) | Method and device for displaying data in digital twin model visual field range |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||