CN117807962A - Method and device for writing consultation administrative texts, storage medium and electronic equipment - Google Patents

Method and device for writing consultation administrative texts, storage medium and electronic equipment

Info

Publication number
CN117807962A
CN117807962A (application number CN202410236984.7A)
Authority
CN
China
Prior art keywords
consultation
text
model
writing
administrative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410236984.7A
Other languages
Chinese (zh)
Inventor
董波
林苗
李亚玲
蔡京京
柏洁明
牛大明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202410236984.7A priority Critical patent/CN117807962A/en
Publication of CN117807962A publication Critical patent/CN117807962A/en
Pending legal-status Critical Current

Landscapes

  • Machine Translation (AREA)

Abstract

The specification discloses a method, a device, a storage medium and electronic equipment for writing a consultation administrative text. In the method for writing the consultation administrative text provided by the specification, a sample consultation report text and a pre-trained large language model are obtained; the sample consultation report text is used as a training sample, and the large language model is fine-tuned by a LoRA method to obtain a consultation administrative expert large model; a writing instruction is input to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates a consultation administrative text conforming to the writing target.

Description

Method and device for writing consultation administrative texts, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for writing a consultation administrative text, a storage medium, and an electronic device.
Background
With the continued development of artificial intelligence technology and large language models (Large Language Model, LLM), more and more users have begun to use large language models for AI-generated content (Artificial Intelligence Generated Content, AIGC) authoring of language tasks. Especially in the field of writing, an LLM can, by virtue of its strong language processing capability, help users draft, revise and polish, and produce finished texts, which improves writing efficiency, lowers the writing threshold, and enables even users who previously lacked writing ability to produce written texts.
However, because existing large language models are trained on general-purpose Internet corpora, their limitation is obvious: they can only generate generic articles and lack understanding and expression in specific fields, so the AIGC results are poor and can hardly provide effective help for users' work.
For example, consultation administrative writing is a subfield of official document writing, and its content includes development plans, investigation reports and the like. Besides the written style, expression and logic of general official documents, the format and requirements of a consultation report have certain particularities: the current situation, the existing problems and the countermeasure suggestions of the subject involved in the article need to be clearly described, the logic of the whole article needs to be consistent, the analysis of the problems needs to be in-depth, and the countermeasure suggestions need to be constructive. These are goals that current AIGC techniques based on a generic large language model cannot achieve.
Therefore, how to better perform AIGC authoring in the consultation administrative field using LLMs is a problem to be solved urgently.
Disclosure of Invention
The present disclosure provides a method, an apparatus, a storage medium, and an electronic device for writing a consultation administrative text, so as to at least partially solve the foregoing problems in the prior art.
The technical scheme adopted in the specification is as follows:
the specification provides a consultation administrative text writing method, which comprises the following steps:
acquiring a sample consultation report text and a pre-trained large language model;
taking the sample consultation report text as a training sample, and fine-tuning the large language model by a LoRA method to obtain a consultation administrative expert large model;
inputting a writing instruction to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates a consultation administrative text conforming to the writing target.
Optionally, the sample consultation report text at least includes policy regulations, plans and programs, policy suggestions, and academic articles on policy theory.
Optionally, before the sample consultation report text is used as a training sample and the large language model is fine-tuned by the LoRA method, the method further includes:
classifying the sample consultation report text to obtain a general consultation administrative corpus and a plurality of professional-field consultation administrative corpora, wherein the classification processing at least comprises data cleaning, text screening, structure mapping and semantic disambiguation;
taking the sample consultation report text as a training sample specifically comprises the following steps:
and taking the general consultation administrative corpus and the consultation administrative corpus in each professional field as training samples.
Optionally, inputting a writing instruction to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates the consultation administrative text conforming to the writing target, specifically includes:
inputting a writing instruction to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates a text outline conforming to the writing target, text paragraphs matching the text outline, and a full text formed from the text paragraphs.
Optionally, the method further comprises:
inputting consultation administrative text material into the consultation administrative expert large model, and inputting an extraction instruction into the consultation administrative expert large model, so that the consultation administrative expert large model extracts key information from the consultation administrative text material.
Optionally, the method further comprises:
inputting a target problem into the consultation administrative expert large model, and inputting an analysis instruction into the consultation administrative expert large model, so that the consultation administrative expert large model outputs consultation administrative discussion points corresponding to the target problem.
Optionally, the method further comprises:
inputting a consultation administrative text to be optimized into the consultation administrative expert large model, and inputting an optimization instruction into the consultation administrative expert large model, so that the consultation administrative expert large model optimizes the consultation administrative text to be optimized and outputs the optimized consultation administrative text.
The specification provides a device for writing a consultation administrative text, the device comprising:
the acquisition module is used for acquiring a sample consultation report text and a pre-trained large language model;
the training module is used for taking the sample consultation report text as a training sample, and fine-tuning the large language model by a LoRA method to obtain a consultation administrative expert large model;
and the writing module is used for inputting a writing instruction to the consultation administrative expert large model according to the writing target, so that the consultation administrative expert large model generates a consultation administrative text conforming to the writing target.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described consultation administrative text writing method.
The present specification provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the above-mentioned consultation administrative text writing method when executing the program.
The at least one technical scheme adopted in the specification can achieve the following beneficial effects:
In the consultation administrative text writing method provided by the specification, a sample consultation report text and a pre-trained large language model are obtained; the sample consultation report text is used as a training sample, and the large language model is fine-tuned by a LoRA method to obtain a consultation administrative expert large model; a writing instruction is input to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates a consultation administrative text conforming to the writing target.
When the consultation administrative text writing method provided by the specification generates a consultation administrative text through a large language model, sample consultation report texts collected from various channels can be used to perform directional training on the large language model, so as to obtain a consultation administrative expert large model with expertise in the consultation administrative field, and the consultation administrative text generated by the consultation administrative expert large model is obtained through instruction interaction with the model. By combining a large language model with corpora in the consultation administrative field, the method can simply and efficiently construct a consultation administrative expert large model for consultation administrative writing, effectively lower the threshold of consultation report writing, and improve writing efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate the exemplary embodiments of the specification and, together with their description, serve to explain the specification; they are not intended to limit the specification unduly. In the drawings:
fig. 1 is a schematic flow chart of a consultation administrative text writing method provided in the present specification;
fig. 2 is a schematic diagram of a consultation administrative text writing device provided in the present specification;
fig. 3 is a schematic diagram of the electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and the corresponding drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without creative effort shall fall within the scope of protection of the present application.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a consultation administrative text writing method provided in the present specification, which specifically includes the following steps:
S100: acquiring a sample consultation report text and a pre-trained large language model.
All steps in the consultation administrative text writing method provided in the present specification can be implemented by any electronic device with a computing function, such as a terminal or a server.
The main purpose of the method is to compose text content in the consultation administrative field by means of an artificial intelligence (Artificial Intelligence, AI) based large language model. As introduced in the background, most existing open-source large language models are trained on conventional general-purpose corpora and do not possess refined knowledge of any specific field. Therefore, in the method, a large language model that grasps expertise in the consultation administrative field is obtained by performing directional training on the large language model in that field. Based on this, in this step, the sample consultation report text that can be used for training and the large language model serving as the training object are acquired first.
The sample consultation report text may be obtained by means of data purchase, interfacing with open application programming interfaces (Application Programming Interface, API) on the network, internal collection, self-service import and the like, and the obtained sample consultation report text may include, but is not limited to, policy regulations, plans and programs, policy suggestions, and various academic articles on policy theory in government documents at the national, ministerial, provincial, departmental and municipal levels.
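Purely as an illustration of the collection channel described above, the following sketch pulls sample consultation report texts from an open application programming interface and stores them for later training; the endpoint URL, query parameters and response fields are hypothetical placeholders, since the specification does not name any concrete data source.

```python
# Illustrative collection sketch: the endpoint, query parameters and response
# fields below are hypothetical placeholders; the specification does not name
# any concrete data source or API.
import json

import requests

OPEN_API_URL = "https://example.gov/open-data/api/documents"  # hypothetical endpoint

def fetch_sample_reports(category: str, pages: int = 3) -> list[str]:
    """Download sample consultation report texts from a (hypothetical) open API."""
    texts = []
    for page in range(1, pages + 1):
        resp = requests.get(
            OPEN_API_URL,
            params={"category": category, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        texts.extend(item["text"] for item in resp.json().get("documents", []))
    return texts

if __name__ == "__main__":
    corpus = fetch_sample_reports("policy_suggestions")
    with open("sample_consultation_reports.jsonl", "w", encoding="utf-8") as f:
        for text in corpus:
            f.write(json.dumps({"text": text}, ensure_ascii=False) + "\n")
```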
The large language model serving as the training object is a pre-trained large language model; it may be a large language model trained by the implementer, or an open-source large language model obtained through various channels, which is not specifically limited in this specification.
S102: taking the sample consultation report text as a training sample, and fine-tuning the large language model by a LoRA method to obtain a consultation administrative expert large model.
In this step, the sample consultation report text obtained in step S100 may be used as a training sample to fine-tune the large language model obtained in step S100, so as to obtain the consultation administrative expert large model. The consultation administrative expert large model is a large language model that additionally grasps expertise of the consultation administrative field while retaining the capabilities of the original large language model. When fine-tuning the large language model, the model can be further adapted and trained on a preset natural language processing task, for example by the LoRA (Low-Rank Adaptation of Large Language Models) fine-tuning method, so as to realize fine-tuning of the large language model. The natural language processing task may be set according to specific requirements, for example entity prediction, text classification, dialogue question answering and other tasks, which are not specifically limited in this specification.
LoRA is a technique aimed at solving the problem of fine-tuning large language models. Because large language models such as GPT have a huge number of parameters, adjusting all of them in the ordinary way is costly to train; LoRA greatly reduces the number of trainable parameters and the memory requirement by freezing the weights of the pre-trained model and injecting a trainable layer (a rank-decomposition matrix) into each Transformer block. The fine-tuning quality achieved by the LoRA method is comparable to that of full-model fine-tuning, while being faster and requiring less computation. In the method, when the large language model is adapted to a natural language processing task, a good effect can be achieved by training only a small number of parameters.
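The following is a minimal sketch of LoRA fine-tuning as described above, assuming the open-source Hugging Face transformers, peft and datasets libraries; the base model name, corpus file and hyperparameters are placeholders, and the specification does not prescribe any particular framework or configuration. The idea is that each frozen weight matrix W receives a trainable low-rank update B·A, so only the small matrices B and A are trained.

```python
# Minimal LoRA fine-tuning sketch (illustrative only; the specification does
# not prescribe a framework). Assumes the Hugging Face transformers, peft and
# datasets libraries and a JSONL corpus of sample consultation report texts
# with a "text" field; model name and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "base-llm"                       # hypothetical pre-trained large language model
CORPUS = "sample_consultation_reports.jsonl"  # hypothetical training corpus

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Freeze the pre-trained weights and inject trainable rank-r decomposition
# matrices (the low-rank update B @ A) into the attention projections.
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed module names; they vary by model
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()        # only the LoRA matrices are trainable

dataset = load_dataset("json", data_files=CORPUS, split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="consultation-expert-lora",
        per_device_train_batch_size=2,
        num_train_epochs=3,
        learning_rate=1e-4,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the resulting adapter forms the consultation administrative expert large model
```

The adapter produced this way can be kept alongside the frozen base weights at inference time, which is one reason LoRA fine-tuning is comparatively cheap in both memory and training cost.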
Of course, the various acquired sample consultation report texts can also be directly input into the large language model to reinforce its knowledge. This approach makes the fine-tuning of the large language model simpler and more convenient, but its effect is relatively weaker than training by adapting a natural language processing task.
Further, since the contents of the sample consultation report texts collected through various channels in step S100 differ, the sample consultation report texts may preferably be classified before this step is performed. Specifically, before the sample consultation report text is used as a training sample and the large language model is fine-tuned by the LoRA method, the sample consultation report text is classified to obtain a general consultation administrative corpus and a plurality of professional-field consultation administrative corpora, wherein the classification processing at least comprises data cleaning, text screening, structure mapping and semantic disambiguation.
It is conceivable that the collected sample consultation report texts have a large base and uneven quality, and not all of them can serve as effective training samples. Therefore, in the classification processing, poor-quality sample consultation report texts can be filtered out through data cleaning, thereby reducing the amount of invalid data to be processed.
The sample consultation report texts can be divided into different corpora through the classification processing. The general consultation administrative corpus is a set of texts that are relatively general and undifferentiated within the consultation administrative field; a professional-field consultation administrative corpus is a corpus further subdivided by fine-grained topic under the consultation administrative field, such as topics with their own particularities in education, agriculture, economy, society and the like. Different professional-field consultation administrative corpora may use different expressions for the same technical term.
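As a purely illustrative sketch of the classification processing described above, the following routine cleans the collected texts and routes them into a general corpus and several professional-field corpora by simple keyword matching; the field names, keywords and cleaning thresholds are assumptions, and the specification does not fix the cleaning rules or the classification algorithm.

```python
# Illustrative corpus classification sketch (not the patented algorithm):
# rough data cleaning by normalization/length/deduplication rules, then
# keyword-based routing into a general corpus and hypothetical field corpora.
import re
from collections import defaultdict

# Hypothetical topic keywords; a real system could use a trained classifier.
FIELD_KEYWORDS = {
    "education": ["school", "curriculum", "teacher"],
    "agriculture": ["farmland", "crop", "rural"],
    "economy": ["GDP", "industry", "investment"],
}

def clean(texts):
    """Rough data cleaning: normalize whitespace, drop short or duplicate texts."""
    seen, cleaned = set(), []
    for t in texts:
        t = re.sub(r"\s+", " ", t).strip()
        if len(t) < 200 or t in seen:  # length threshold is an assumption
            continue
        seen.add(t)
        cleaned.append(t)
    return cleaned

def classify(texts):
    """Route each cleaned text into a professional-field corpus or the general corpus."""
    corpora = defaultdict(list)
    for t in clean(texts):
        field = next((f for f, kws in FIELD_KEYWORDS.items()
                      if any(k.lower() in t.lower() for k in kws)), "general")
        corpora[field].append(t)
    return corpora

# Usage: corpora = classify(raw_sample_reports)
# train_samples = [t for field_texts in corpora.values() for t in field_texts]
```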
At this time, the general consultation administrative corpus and each professional-field consultation administrative corpus obtained through the further classification processing can be used as training samples to train and fine-tune the large language model, so as to obtain the consultation administrative expert large model.
S104: inputting a writing instruction to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates a consultation administrative text conforming to the writing target.
In this step, the consultation administrative expert large model obtained through the training and fine-tuning in step S102 may be used to generate the consultation administrative text, and this process can be realized by inputting a writing instruction into the consultation administrative expert large model. The writing instruction is a sentence that expresses the task the consultation administrative expert large model is expected to perform, and it should contain a writing target so that the model can write in a targeted manner. In the method, the writing target may be the goal, given according to the user's intent, of the text that the consultation administrative expert large model is expected to generate. Because the consultation administrative expert large model has excellent language understanding capability, the writing instruction can be input into the model in the form of any ordinary sentence, according to the writing target and other specific requirements.
More preferably, when the consultation administrative expert large model is adopted to generate the consultation administrative text, the method can present a subdivided, complete writing flow to the user. Specifically, a writing instruction can be input to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates a text outline conforming to the writing target, text paragraphs matching the text outline, and a full text formed from the text paragraphs.
The outline describes the current development status, the existing problems and the countermeasure suggestions in a specific field, and has a very clear hierarchical structure; the paragraphs are text paragraphs optimally matched and recommended for each generated outline heading so as to conform to the writing logic; the full text is obtained by splicing and combining the outline items and paragraphs into a consultation administrative text whose logic is clear and consistent throughout and which has creativity and novelty.
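The three-stage flow of outline, paragraphs and full text can be sketched as follows, assuming the fine-tuned model is served through the Hugging Face text-generation pipeline; the prompt wording, the writing target and the model path are illustrative assumptions rather than prompts prescribed by the specification.

```python
# Illustrative three-stage generation sketch (outline -> paragraphs -> full text).
# The model path, prompt wording and `generate` helper are assumptions only.
from transformers import pipeline

writer = pipeline("text-generation", model="consultation-expert-lora")  # hypothetical path

def generate(instruction: str, max_new_tokens: int = 800) -> str:
    out = writer(instruction, max_new_tokens=max_new_tokens,
                 do_sample=True, return_full_text=False)
    return out[0]["generated_text"].strip()

writing_target = "development planning report on rural digital infrastructure"

# 1. Text outline conforming to the writing target.
outline = generate(
    f"Write a hierarchical outline for a consultation administrative report on: {writing_target}. "
    "Cover the current situation, existing problems, and countermeasure suggestions."
)

# 2. Text paragraphs matched to each outline heading.
paragraphs = [
    generate(f"Write the paragraph for the outline heading: {heading}")
    for heading in outline.splitlines() if heading.strip()
]

# 3. Full text spliced from the outline and paragraphs.
full_text = generate(
    "Combine the following outline and paragraphs into a coherent consultation "
    "administrative report:\n" + outline + "\n\n" + "\n\n".join(paragraphs)
)
print(full_text)
```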
In addition, besides generating the most basic consultation administrative text, the consultation administrative text writing method provided by the specification can also handle detailed problems that arise in the writing process and provide interactive solutions. For official-document staff with some writing experience, in addition to document generation capability, the document also needs to be improved and optimized so as to obtain the best consultation administrative text. Based on this, the consultation administrative expert large model adopted in the method additionally provides other functions required in the consultation administrative text writing process; these functions are realized by means of downstream tasks and are described below.
First, the downstream tasks may include an information extraction task: consultation administrative text material can be input into the consultation administrative expert large model, and an extraction instruction can be input into the consultation administrative expert large model, so that the consultation administrative expert large model extracts the key information in the consultation administrative text material.
Second, the downstream tasks may include a problem analysis task: a target problem can be input into the consultation administrative expert large model, and an analysis instruction can be input into the consultation administrative expert large model, so that the consultation administrative expert large model outputs the consultation administrative discussion points corresponding to the target problem.
Third, the downstream tasks may include an expansion-writing and polishing task: a consultation administrative text to be optimized can be input into the consultation administrative expert large model, and an optimization instruction can be input into the consultation administrative expert large model, so that the consultation administrative expert large model optimizes the consultation administrative text to be optimized and outputs the optimized consultation administrative text.
The text material refers to the situation where a user expressing a viewpoint needs specific case support, which the consultation administrative expert large model can extract from existing texts according to the user's input; the problem analysis addresses the situation where individual users, when writing the problem and countermeasure-suggestion parts, are limited in ability and cannot generate constructive, creative descriptions; the expansion writing and polishing refers to refining and elevating an existing text so that it meets official-document style requirements and reads with greater polish.
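The three downstream tasks can be driven by instruction templates of the following kind, reusing the hypothetical generate helper from the earlier sketch; the instruction wording is an assumption for illustration, since the specification leaves the concrete instructions to the user.

```python
# Illustrative instruction templates for the three downstream tasks
# (information extraction, problem analysis, expansion writing and polishing).
# They reuse the hypothetical `generate` helper from the previous sketch;
# the prompt wording is assumed, not prescribed by the specification.

def extract_key_information(material: str) -> str:
    """Information extraction task: pull key facts and cases out of text material."""
    return generate(
        "Extract the key information (facts, figures, representative cases) "
        "from the following consultation administrative material:\n" + material
    )

def analyze_problem(target_problem: str) -> str:
    """Problem analysis task: produce discussion points for a target problem."""
    return generate(
        "Analyze the following problem and list constructive consultation "
        "administrative discussion points and countermeasure suggestions:\n" + target_problem
    )

def polish_text(draft: str) -> str:
    """Expansion-writing and polishing task: refine a draft into official-document style."""
    return generate(
        "Expand and polish the following draft so that it meets official document "
        "style requirements while keeping its original meaning:\n" + draft
    )
```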
When the consultation administrative text writing method provided by the specification generates a consultation administrative text through a large language model, sample consultation report texts collected from various channels can be used to perform directional training on the large language model, so as to obtain a consultation administrative expert large model with expertise in the consultation administrative field, and the consultation administrative text generated by the consultation administrative expert large model is obtained through instruction interaction with the model. By combining a large language model with corpora in the consultation administrative field, the method can simply and efficiently construct a consultation administrative expert large model for consultation administrative writing, effectively lower the threshold of consultation report writing, and improve writing efficiency.
The above is the consultation administrative text writing method provided in the specification. Based on the same idea, the specification further provides a corresponding consultation administrative text writing device, as shown in fig. 2.
Fig. 2 is a schematic diagram of a consultation administrative text writing device provided in the present specification, which specifically includes:
the acquisition module 200 is used for acquiring a sample consultation report text and a pre-trained large language model;
the training module 202 is configured to use the sample consultation report text as a training sample and fine-tune the large language model by a LoRA method to obtain a consultation administrative expert large model;
and the writing module 204 is configured to input a writing instruction to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates a consultation administrative text conforming to the writing target.
Optionally, the sample consultation report text at least includes policy regulations, plans and programs, policy suggestions, and academic articles on policy theory.
Optionally, the apparatus further includes a classification module 206, specifically configured to classify the sample consultation report text to obtain a general consultation administrative corpus and a plurality of professional-field consultation administrative corpora, where the classification processing at least includes data cleaning, text screening, structure mapping and semantic disambiguation;
the training module 202 is specifically configured to use the general consultation administrative corpus and each professional-field consultation administrative corpus as training samples.
Optionally, the writing module 204 is specifically configured to input a writing instruction to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates a text outline conforming to the writing target, text paragraphs matching the text outline, and a full text formed from the text paragraphs.
Optionally, the apparatus further includes an extraction module 208, specifically configured to input consultation administrative text material into the consultation administrative expert large model and input an extraction instruction into the consultation administrative expert large model, so that the consultation administrative expert large model extracts the key information in the consultation administrative text material.
Optionally, the apparatus further includes an analysis module 210, specifically configured to input a target problem into the consultation administrative expert large model and input an analysis instruction into the consultation administrative expert large model, so that the consultation administrative expert large model outputs the consultation administrative discussion points corresponding to the target problem.
Optionally, the apparatus further includes an optimization module 212, specifically configured to input a consultation administrative text to be optimized into the consultation administrative expert large model and input an optimization instruction into the consultation administrative expert large model, so that the consultation administrative expert large model optimizes the consultation administrative text to be optimized and outputs the optimized consultation administrative text.
The present specification also provides a computer-readable storage medium storing a computer program, where the computer program is configured to execute the consultation administrative text writing method provided in fig. 1 above.
The present specification also provides a schematic structural diagram of the electronic device shown in fig. 3. At the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory and a non-volatile storage, as shown in fig. 3, and of course may also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it, so as to implement the consultation administrative text writing method described above with respect to fig. 1. Of course, in addition to the software implementation, this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or a logic device.
Improvements to a technology can be clearly distinguished as hardware improvements (for example, improvements to circuit structures such as diodes, transistors and switches) or software improvements (improvements to a method flow). However, with the development of technology, many improvements of current method flows can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (for example, a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code before compiling is also written in a specific programming language, called a hardware description language (Hardware Description Language, HDL), of which there is not just one but many kinds, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained by merely slightly programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor and a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller or an embedded microcontroller; examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, it is entirely possible to implement the same functionality by logically programming the method steps such that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or even the means for achieving the various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory, random access memory (RAM) and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A method for writing a consultation administrative text, characterized by comprising the following steps:
acquiring a sample consultation report text and a pre-trained large language model;
taking the sample consultation report text as a training sample, and fine-tuning the large language model by a LoRA method to obtain a consultation administrative expert large model;
inputting a writing instruction to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates a consultation administrative text conforming to the writing target.
2. The method of claim 1, wherein the sample consultation report text at least includes policy regulations, plans and programs, policy suggestions, and academic articles on policy theory.
3. The method of claim 1, wherein before the sample consultation report text is used as a training sample and the large language model is fine-tuned by the LoRA method, the method further comprises:
classifying the sample consultation report text to obtain a general consultation administrative corpus and a plurality of professional-field consultation administrative corpora, wherein the classification processing at least comprises data cleaning, text screening, structure mapping and semantic disambiguation;
taking the sample consultation report text as a training sample specifically comprises the following steps:
and taking the general consultation administrative corpus and the consultation administrative corpus in each professional field as training samples.
4. The method of claim 1, wherein inputting a writing instruction to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates a consultation administrative text conforming to the writing target, specifically comprises:
inputting a writing instruction to the consultation administrative expert large model according to a writing target, so that the consultation administrative expert large model generates a text outline conforming to the writing target, text paragraphs matching the text outline, and a full text formed from the text paragraphs.
5. The method of claim 1, wherein the method further comprises:
inputting consultation administrative text material into the consultation administrative expert large model, and inputting an extraction instruction into the consultation administrative expert large model, so that the consultation administrative expert large model extracts key information from the consultation administrative text material.
6. The method of claim 1, wherein the method further comprises:
inputting a target problem into the consultation administrative expert large model, and inputting an analysis instruction into the consultation administrative expert large model, so that the consultation administrative expert large model outputs consultation administrative discussion points corresponding to the target problem.
7. The method of claim 1, wherein the method further comprises:
inputting a consultation administrative text to be optimized into the consultation administrative expert large model, and inputting an optimization instruction into the consultation administrative expert large model, so that the consultation administrative expert large model optimizes the consultation administrative text to be optimized and outputs the optimized consultation administrative text.
8. A consultation administrative text writing device, characterized by comprising:
the acquisition module is used for acquiring a sample consultation report text and a pre-trained large language model;
the training module is used for taking the sample consultation report text as a training sample, and fine-tuning the large language model by a LoRA method to obtain a consultation administrative expert large model;
and the writing module is used for inputting a writing instruction to the consultation administrative expert large model according to the writing target, so that the consultation administrative expert large model generates a consultation administrative text conforming to the writing target.
9. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-7 when executing the program.
CN202410236984.7A 2024-03-01 2024-03-01 Method and device for writing consultation administrative texts, storage medium and electronic equipment Pending CN117807962A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410236984.7A CN117807962A (en) 2024-03-01 2024-03-01 Method and device for writing consultation administrative texts, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410236984.7A CN117807962A (en) 2024-03-01 2024-03-01 Method and device for writing consultation administrative texts, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN117807962A true CN117807962A (en) 2024-04-02

Family

ID=90433893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410236984.7A Pending CN117807962A (en) 2024-03-01 2024-03-01 Method and device for writing consultation administrative texts, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117807962A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070094183A1 (en) * 2005-07-21 2007-04-26 Microsoft Corporation Jargon-based modeling
CN113254574A (en) * 2021-03-15 2021-08-13 河北地质大学 Method, device and system for auxiliary generation of customs official documents
CN114492327A (en) * 2021-12-28 2022-05-13 中科曙光南京研究院有限公司 Intelligent writing method for official documents
CN116842152A (en) * 2023-06-09 2023-10-03 福建省科学技术信息研究所(福建省生产力促进中心) Science and technology policy question-answering method and device for fine-tuning language big model
CN117473072A (en) * 2023-12-28 2024-01-30 杭州同花顺数据开发有限公司 Financial research report generation method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zeng Wenlong et al., "Intelligent report transcription based on large models: event key-point extraction and report generation", 《网络安全与数据治理》 [Network Security and Data Governance], vol. 42, no. 12, 15 December 2023 (2023-12-15), pages 20-26 *
Long Zhiyong et al., 《大模型时代 ChatGPT开启通用人工智能浪潮》 [The Era of Large Models: ChatGPT Launches the Wave of Artificial General Intelligence], China Translation & Publishing House, 31 May 2023, pages 118-133 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination