CN115442324A - Message generation method, message generation device, message management device, and storage medium - Google Patents

Message generation method, message generation device, message management device, and storage medium

Info

Publication number
CN115442324A
Authority
CN
China
Prior art keywords
message
generator
code
topological graph
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110629209.4A
Other languages
Chinese (zh)
Other versions
CN115442324B (en)
Inventor
邢彪
张汉良
丁东
胡皓
陈嫦娇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Group Zhejiang Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202110629209.4A priority Critical patent/CN115442324B/en
Publication of CN115442324A publication Critical patent/CN115442324A/en
Application granted granted Critical
Publication of CN115442324B publication Critical patent/CN115442324B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H — Electricity
    • H04 — Electric communication technique
    • H04L — Transmission of digital information, e.g. telegraphic communication
    • H04L 51/00 — User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/06 — Message adaptation to terminal or network requirements
    • H04L 51/066 — Format adaptation, e.g. format conversion or compression
    • H04L 51/07 — User-to-user messaging characterised by the inclusion of specific contents
    • H04L 51/10 — Multimedia information

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a message generation method, which comprises the following steps: receiving message data to be sent; converting the message data to be sent into a first message structure topological graph; inputting the first message structure topological graph into a generator obtained by training, so as to generate a delivered message code in a preset format; and generating a delivered message based on the delivered message code. The invention also discloses a message generation device, a message management device, and a computer-readable storage medium. The method of the invention generates delivered messages with higher efficiency.

Description

Message generation method, message generation device, message management device, and storage medium
Technical Field
The present invention relates to the field of information processing, and in particular to a message generation method, a message generation device, a message management device, and a computer-readable storage medium.
Background
The message service provides users with sending and receiving of media content such as text, pictures, audio, video, location, and contacts, based on the native short-message entrance of the mobile terminal; the messages include point-to-point messages, group-sending messages, group-chat messages, point-to-application messages, and the like. Compared with the traditional short message and its single function, such a message service not only widens the range of messages that can be sent and received, supporting multimedia content such as text, audio and video, cards, and location, but also deepens the interactive experience: the user can complete services such as search, discovery, interaction, and payment directly within a message window, building a one-stop service window.
In the related art, a message generation method is disclosed in which an industry technician manually converts the message data a user wants to send into the corresponding message code, and the final delivered message is obtained from that message code.
However, obtaining the delivered message with this existing method is inefficient.
Disclosure of Invention
The main purpose of the present invention is to provide a message generation method, a message generation device, a message management device, and a computer-readable storage medium, aiming to solve the technical problem that existing message generation methods deliver messages inefficiently.
In order to achieve the above object, the present invention provides a message generating method, including the following steps:
receiving message data to be sent;
converting the message data to be sent into a first message structure topological graph;
inputting the first message structure topological graph into a generator obtained by training, so as to generate a delivered message code in a preset format;
and generating a delivered message based on the delivered message code.
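The four claimed steps can be sketched as a simple pipeline. Every name below is an illustrative stand-in, not a name from the patent, and the trivial placeholder functions exist only so the flow can be exercised end to end:

```python
# Minimal sketch of the four claimed steps, under assumed stand-in functions.

def generate_and_deliver(message_data, generator, convert, render):
    graph = convert(message_data)   # step 2: first message structure
                                    #         topological graph
    code = generator(graph)         # step 3: delivered message code in a
                                    #         preset (e.g. xml) format
    return render(code)             # step 4: delivered message

# Trivial stand-ins (placeholders, not the patent's models):
convert = lambda data: ("graph", sorted(data))
generator = lambda graph: "<msg>" + ",".join(graph[1]) + "</msg>"
render = lambda code: {"format": "xml", "body": code}

out = generate_and_deliver({"text", "image"}, generator, convert, render)
```

The real generator of step 3 is the trained neural network described in the optional claims below; the sketch only fixes the order of the data handoffs.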
Optionally, before the step of receiving message data to be sent, the method further includes:
acquiring a first training sample, wherein the first training sample comprises first historical message data to be sent and a first real message code corresponding to the first historical message data to be sent;
converting the first historical message data to be sent into a second message structure topological graph;
and training an initial generator by using the second message structure topological graph, the first real message code, and a discriminator obtained by training, so as to obtain the generator.
Optionally, before the step of training an initial generator by using the second message structure topological graph, the first real message code, and the discriminator obtained by training, to obtain the generator, the method further includes:
acquiring a second training sample, wherein the second training sample comprises second historical message data to be sent and a second real message code corresponding to the second historical message data to be sent;
converting the second historical message data to be sent into a third message structure topological graph;
and training an initial discriminator by utilizing the third message structure topological graph, the second real message code and the initial generator to obtain the discriminator.
Optionally, the step of training an initial discriminator by using the third message structure topological graph, the second real message code, and the initial generator to obtain the discriminator includes:
determining a first selected message structure topological graph in the third message structure topological graph, and determining a first selected message code corresponding to the first selected message structure topological graph in the second real message code;
obtaining a first selected random noise corresponding to the first selected message structure topology map;
generating a first resulting message code with the initial generator based on the first selected message structure topology map and the first selected random noise;
merging a first information pair set and a second information pair set to obtain a third information pair set, wherein the first selected message structure topological graph and the first selected message code form the information pairs in the first information pair set, and the first selected message structure topological graph and the first result message code form the information pairs in the second information pair set;
assigning a value to each information pair in the first information pair set, the second information pair set, and the third information pair set respectively;
obtaining a first target parameter based on a first target function and the assigned values of the information pairs in the first, second, and third information pair sets;
updating the initial discriminator by using the first target parameter to obtain a new discriminator;
and taking the new discriminator as the initial discriminator, and returning to execute the step of determining the first selected message structure topological graph from the third message structure topological graph until the first target parameter meets a first preset condition to obtain the discriminator.
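A toy numeric sketch of this discriminator update loop, assuming a standard GAN-style binary cross-entropy objective (the patent does not name its target function). The discriminator is reduced to a single logistic unit over assumed fixed-size embeddings of the (topology graph, message code) information pairs, so one parameter update is easy to follow:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed 8-dim embeddings of information pairs; real pairs get value 1,
# generator-produced pairs get value 0 (the "assignments" above).
real_pairs = rng.normal(0.5, 1.0, size=(16, 8))   # first information pair set
fake_pairs = rng.normal(-0.5, 1.0, size=(16, 8))  # second information pair set
x = np.vstack([real_pairs, fake_pairs])           # merged third set
y = np.concatenate([np.ones(16), np.zeros(16)])   # assigned values per pair

w, b, lr = np.zeros(8), 0.0, 0.5                  # initial discriminator
for step in range(200):
    p = sigmoid(x @ w + b)
    loss = -np.mean(y * np.log(p + 1e-9)          # stand-in first target
                    + (1 - y) * np.log(1 - p + 1e-9))
    grad_w = x.T @ (p - y) / len(y)
    grad_b = float(np.mean(p - y))
    w -= lr * grad_w                              # update -> new discriminator
    b -= lr * grad_b
    if loss < 0.1:                                # first preset condition
        break

final_loss = float(loss)
```

The "return and repeat" of the claim corresponds to the loop: each pass produces a new discriminator that replaces the initial one until the target parameter (here, the loss) satisfies the preset condition.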
Optionally, the step of training an initial generator by using the second message structure topological graph, the first real message code, and the discriminator obtained by training, to obtain the generator includes:
determining a second selected message structure topological graph in the second message structure topological graph, and determining a second selected message code corresponding to the second selected message structure topological graph in the first real message code;
obtaining a second selected random noise corresponding to the second selected message structure topology map;
generating a second resulting message code with the initial generator based on the second selected message structure topology map and the second selected random noise;
inputting the second result message code into the discriminator to obtain a judgment result;
obtaining a second target parameter based on a second target function and the judgment result;
updating the initial generator with the second target parameter to obtain a new generator;
and taking the new generator as the initial generator, and returning to execute the step of determining a second selected message structure topological graph from the second message structure topological graph until a second target parameter meets a second preset condition to obtain the generator.
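The generator-side loop can be sketched the same way, again under the assumption of a standard adversarial objective: the discriminator is held fixed and the generator is pushed toward outputs the discriminator judges as real. The linear generator with a bias term and the frozen one-unit discriminator are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_w = np.ones(4)                  # frozen discriminator weights (assumed)
noise = rng.normal(size=(32, 4))  # second selected random noise batch
g_w = np.zeros((4, 4))            # initial generator weights
g_b = np.zeros(4)                 # initial generator bias
lr = 0.2

for step in range(300):
    fake = noise @ g_w + g_b                 # second result message code
    score = sigmoid(fake @ d_w)              # judgment result from D
    loss = -np.mean(np.log(score + 1e-9))    # stand-in second target function
    resid = 1.0 - score                      # pull each sample toward "real"
    grad_w = -(noise.T @ (resid[:, None] * d_w[None, :])) / len(noise)
    grad_b = -float(np.mean(resid)) * d_w
    g_w -= lr * grad_w                       # update -> new generator
    g_b -= lr * grad_b
    if loss < 0.05:                          # second preset condition
        break

final_score = float(np.mean(sigmoid((noise @ g_w + g_b) @ d_w)))
```

As in the discriminator loop, the claim's "return and repeat" is the iteration that replaces the initial generator with the new generator until the second target parameter meets the second preset condition.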
Optionally, the step of generating a first result message code by using the initial generator based on the first selected message structure topology map and the first selected random noise includes:
inputting the first selected random noise into a noise encoder in the initial generator to obtain a first feature vector;
inputting a first selected message structure topology map into a topology encoder in the initial generator to obtain a second feature vector;
inputting the first feature vector and the second feature vector into a merging layer in the initial generator to obtain a merged vector;
inputting the merged vector into a sequence decoder in the initial generator to obtain an output message code sequence;
inputting the output message code sequence into a first output layer in the initial generator to obtain a first resulting message code.
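The five forward-pass steps can be wired together in a toy numeric sketch. The LSTM noise encoder, graph-convolution topology encoder, and LSTM decoder are replaced here by plain linear maps (an assumption purely for illustration), keeping only the encode → merge → decode → softmax data flow; the vocabulary and sequence length are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

noise = rng.normal(size=8)               # first selected random noise
graph_embedding = rng.normal(size=64)    # stand-in for the topology graph

noise_enc = rng.normal(size=(8, 32)) * 0.1   # linear stand-in encoders
topo_enc = rng.normal(size=(64, 32)) * 0.1

first_vec = noise @ noise_enc            # step 1: first feature vector
second_vec = graph_embedding @ topo_enc  # step 2: second feature vector
merged = np.concatenate([first_vec, second_vec])   # step 3: merging layer

seq_len, vocab_size = 6, 50              # assumed code length / vocabulary
decoder_w = rng.normal(size=(seq_len, 64, 128)) * 0.1  # step 4: decoder
output_w = rng.normal(size=(128, vocab_size)) * 0.1    # step 5: output layer

logits = np.einsum("d,tdk,kv->tv", merged, decoder_w, output_w)
probs = softmax(logits)                  # one token distribution per step
token_ids = probs.argmax(axis=1)         # greedy read-out of message code
```

Each row of `probs` is the softmax distribution the first output layer produces for one position of the result message code, matching the claim that the output layer's neuron count equals the output dimension.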
Optionally,
the noise encoder comprises three long short-term memory (LSTM) layers, each with 32 neurons and a relu activation function;
the topology encoder comprises a first graph convolution layer, a second graph convolution layer, and a third graph convolution layer; the first graph convolution layer has 256 convolution kernels and a relu activation function; the second graph convolution layer has 128 convolution kernels and a relu activation function; the third graph convolution layer has 64 convolution kernels and a lambda activation function;
the sequence decoder comprises three LSTM layers, each with 128 neurons and a relu activation function;
the activation function of the first output layer is softmax, and the number of fully-connected neurons of the first output layer equals the output dimension of the first result message code;
the initial discriminator comprises a plurality of fully-connected layers and a second output layer; each fully-connected layer has 64 neurons and a relu activation function, and the second output layer has one fully-connected neuron with a sigmoid activation function.
In addition, to achieve the above object, the present invention further provides a message generating apparatus, including:
the receiving module is used for receiving message data to be sent;
the first conversion module is used for converting the message data to be sent into a first message structure topological graph;
the generating module is used for inputting the first message structure topological graph into the generator obtained by training, so as to generate a delivered message code in a preset format;
and the second conversion module is used for generating the delivered message based on the delivered message code.
In addition, to achieve the above object, the present invention further provides a message management device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the message generation method described above.
Furthermore, to achieve the above object, the present invention also proposes a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the message generation method according to any one of the above.
The technical scheme of the invention provides a message generation method, applied to a message management device, comprising the following steps: receiving message data to be sent; converting the message data to be sent into a first message structure topological graph; inputting the first message structure topological graph into a generator obtained by training, so as to generate a delivered message code in a preset format; and generating a delivered message based on the delivered message code.
In existing message generation methods, an industry technician must manually convert the message data to be sent into message code, from which the final delivered message is obtained; obtaining the message code therefore takes a long time, and delivered messages are obtained inefficiently. In the present invention, the message management device directly uses the generator obtained by training to generate the delivered message code in the preset format from the first message structure topological graph corresponding to the message data to be sent, with no manual conversion by a technician, thereby reducing conversion time and improving the efficiency of obtaining delivered messages.
Drawings
To illustrate the embodiments or technical solutions of the present invention more clearly, the drawings used in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a message management device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a message generation method according to the present invention;
fig. 3 is a schematic structural diagram of message data to be transmitted according to the present invention;
FIG. 4 is a schematic diagram of the initial generator data processing flow of the present invention;
FIG. 5 is a schematic diagram of the initial generator and initial arbiter training process of the present invention;
fig. 6 is a block diagram showing the structure of a first embodiment of the message generation apparatus according to the present invention.
The implementation, functional features, and advantages of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The main solution of the embodiments of the present application is as follows: a message generation method is provided, applied to a message management device, comprising the following steps: receiving message data to be sent; converting the message data to be sent into a first message structure topological graph; inputting the first message structure topological graph into a generator obtained by training, so as to generate a delivered message code in a preset format; and generating a delivered message based on the delivered message code.
In existing message generation methods, an industry technician must manually convert the message data to be sent into message code, from which the final delivered message is obtained; obtaining the message code therefore takes a long time, and delivered messages are obtained inefficiently. In the present invention, the message management device directly uses the generator obtained by training to generate the delivered message code in the preset format from the first message structure topological graph corresponding to the message data to be sent, with no manual conversion by a technician, thereby reducing conversion time and improving the efficiency of obtaining delivered messages.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a message management device in a hardware operating environment according to an embodiment of the present invention.
In general, the message management device may be a message management server, also called a message platform, and includes: at least one processor 301, a memory 302 and a computer program stored on said memory and executable on said processor, said computer program being configured to implement the steps of the message generation method as described before.
The processor 301 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 301 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor: the main processor processes data in the awake state and is also called a Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 301 may integrate a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. The processor 301 may also include an AI (Artificial Intelligence) processor for handling operations related to the message generation method, so that the model used by the method can be trained autonomously, improving efficiency and accuracy.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 302 is used to store at least one instruction for execution by processor 301 to implement the message generation methods provided by method embodiments herein.
In some embodiments, the terminal may further include: a communication interface 303 and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. Various peripheral devices may be connected to communication interface 303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, a display screen 305, and a power source 306.
The communication interface 303 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 301 and the memory 302. In some embodiments, the processor 301, memory 302, and communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 may be implemented on a single chip or circuit board, which is not limited by the embodiment.
The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 304 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 304 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, the display screen 305 also has the ability to capture touch signals on or above the surface of the display screen 305. The touch signal may be input to the processor 301 as a control signal for processing. At this point, the display screen 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 305 may be one, the front panel of the electronic device; in other embodiments, the display screens 305 may be at least two, respectively disposed on different surfaces of the electronic device or in a folded design; in still other embodiments, the display screen 305 may be a flexible display screen disposed on a curved surface or a folded surface of the electronic device. Even further, the display screen 305 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 305 may be made of LCD (liquid crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The power supply 306 is used to power various components in the electronic device. The power source 306 may be alternating current, direct current, disposable or rechargeable. When power source 306 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology. Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the message management apparatus and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the message generation method described above, so a detailed description is omitted here, as are the corresponding beneficial effects. For technical details not disclosed in the storage-medium embodiments of the present application, refer to the description of the method embodiments. As an example, the program instructions may be deployed for execution on one message management device, or on multiple message management devices located at one site or distributed across multiple sites and interconnected by a communication network.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The computer-readable storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Based on the above hardware structure, an embodiment of the message generation method of the present invention is provided.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of a message generating method of the present invention, applied to a message management device, where the method includes the following steps:
step S11: message data to be transmitted is received.
It should be noted that the execution subject of the method of the present invention is a message management device. The message management device may be a message management server, also called a message platform; it is installed with a computer program, and when it executes the computer program, the steps of the message generation method of the present invention are implemented.
In the present invention, the mobile terminal that sends the message data to be sent is the first mobile terminal, and the mobile terminal that receives the converted information (i.e., the delivered message of the present invention) is the second mobile terminal. The mobile terminals exchange information through the message management device: the first mobile terminal sends information, the message management device processes it, and the message management device sends the processed information to the second mobile terminal.
It can be understood that in the present invention the transmission mode of the information is similar to the existing transmission mode: the message data to be sent by the first mobile terminal may include information such as the phone number of the second mobile terminal, through which the message management device can send the delivered message corresponding to the message data to be sent to the second mobile terminal.
Specifically, in the present invention, the message data to be sent may be 5G messages, in the form of multimedia content such as text, audio, video, cards, and location, or in other forms, which the present invention does not limit; in addition, the message data to be sent may be point-to-point messages, group-sending messages, group-chat messages, point-to-application messages, and the like.
Step S12: and converting the message data to be sent into a first message structure topological graph.
Step S13: inputting the first message structure topological graph into the generator obtained by training, so as to generate a delivered message code in a preset format.
It should be noted that the message data to be sent cannot be used directly by the generator obtained by training; it must first be converted into a message structure topological graph (the first message structure topological graph), which is input to the generator in the form of an adjacency matrix and a feature matrix.
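As an illustration of this adjacency-matrix and feature-matrix form, here is one hedged way such a conversion could look, assuming a star-shaped graph (a root message node linked to one node per content item) and one-hot node features; neither detail is specified in this text:

```python
import numpy as np

# Assumed node-type vocabulary; purely illustrative.
ITEM_TYPES = ["root", "text", "image", "audio", "card", "button"]

def to_matrices(item_types):
    """Build (adjacency, features) for a star-shaped message graph."""
    nodes = ["root"] + list(item_types)
    n = len(nodes)
    adjacency = np.zeros((n, n))
    for i in range(1, n):
        adjacency[0, i] = adjacency[i, 0] = 1.0   # root <-> item edges
    features = np.zeros((n, len(ITEM_TYPES)))     # one-hot node features
    for i, t in enumerate(nodes):
        features[i, ITEM_TYPES.index(t)] = 1.0
    return adjacency, features

adj, feat = to_matrices(["text", "image", "button"])
```

Whatever the real graph layout, the pair (adjacency, features) is the input shape the topology encoder's graph convolution layers expect.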
In the present invention, the generator must first be constructed: the structure of the initial generator (described below) is determined, and the constructed initial generator is then trained with a training sample (the first training sample) to obtain the trained generator, so as to implement step S13. It can be understood that the preset format in the present invention may be the xml format, that is, the delivered message code is xml code; what the generator produces is message code, rather than a text or picture message itself.
Specifically, the initial generator includes a noise encoder, a topology encoder, a merging layer, a sequence decoder, and a first output layer. The noise encoder comprises three long short-term memory (LSTM) layers, each with 32 neurons and a relu activation function; the topology encoder comprises a first, a second, and a third graph convolution layer, with 256, 128, and 64 convolution kernels respectively, the first two using the relu activation function and the third a lambda activation function; the sequence decoder comprises three LSTM layers, each with 128 neurons and a relu activation function; the activation function of the first output layer is softmax, and its number of fully-connected neurons equals the output dimension of the first result message code. This is the structure of the constructed initial generator.
The training mode of the generator in the invention is as follows: acquiring a first training sample, wherein the first training sample comprises first historical message data to be sent and a first real message code corresponding to the first historical message data to be sent; converting the first historical message data to be sent into a second message structure topological graph; and training an initial generator by using the second message structure topological graph, the first real message code, and the discriminator obtained by training, so as to obtain the generator.
It should be noted that, in the present invention, a discriminator (the initial discriminator) is constructed first, and the constructed initial discriminator is then trained with a training sample (the second training sample) to obtain the discriminator, so that the step of training the initial generator by using the second message structure topological graph, the first real message code, and the discriminator obtained by training can then be carried out to obtain the generator.
The training mode of the discriminator is as follows: acquiring a second training sample, wherein the second training sample comprises second historical to-be-sent message data and a second real message code corresponding to the second historical to-be-sent message data; converting the second historical message data to be sent into a third message structure topological graph; and training an initial discriminator by utilizing the third message structure topological graph, the second real message code and the initial generator to obtain the discriminator.
It can be understood that, in the training process, before training the initial generator and the initial discriminator, the corresponding historical message data to be sent needs to be converted into a message structure topological graph.
Specifically, the step of training an initial discriminator by using the third message structure topological graph, the second real message code, and the initial generator to obtain the discriminator includes: determining a first selected message structure topological graph in the third message structure topological graph, and determining a first selected message code corresponding to the first selected message structure topological graph in the second real message code; obtaining a first selected random noise corresponding to the first selected message structure topological graph; generating a first result message code with the initial generator based on the first selected message structure topological graph and the first selected random noise; merging a first information pair set and a second information pair set to obtain a third information pair set, wherein the first selected message structure topological graph and the first selected message code form the information pairs in the first information pair set, and the first selected message structure topological graph and the first result message code form the information pairs in the second information pair set; assigning a value to each information pair in the first information pair set, each information pair in the second information pair set, and each information pair in the third information pair set respectively; obtaining a first target parameter based on a first target function and the assignments of the information pairs in the first, second, and third information pair sets; updating the initial discriminator by using the first target parameter to obtain a new discriminator; and taking the new discriminator as the initial discriminator, and returning to the step of determining a first selected message structure topological graph from the third message structure topological graph until the first target parameter meets a first preset condition, so as to obtain the discriminator.
Firstly, the samples used for training the initial discriminator, namely the second training samples, are collected. The messages included in the second training samples are the second historical message data to be sent (the messages before conversion), and the message codes included are the second real message codes; each piece of second historical message data to be sent corresponds to one second real message code, and the second training samples include a large amount of second historical message data to be sent and a correspondingly large number of second real message codes. The second training samples may be historical to-be-sent message data collected in the message management device together with the real message codes corresponding to that data.
It is understood that the process of training the initial discriminator is to divide the second training samples into a plurality of batches, and perform a plurality of training processes using the second training samples of the plurality of batches to obtain the discriminator. It is understood that, in the second training sample, the first selected message structure topology and the first selected message code are training samples of one batch. Because one message data in the second historical message data to be sent corresponds to one second real message code, the third message structure topological graph corresponding to the message also corresponds to the real message code. It will be appreciated that the first selected message structure topology has a one-to-one correspondence with the first selected message code.
The message data to be sent (referring to the message to be sent, the first historical message data to be sent, and the second historical message data to be sent) often includes headers (which may involve first-level, second-level, third-level headers, and the like) and a body text. It can be understood that in the message structure topological graph (referring to the first, second, and third message structure topological graphs and the first and second selected message structure topological graphs in the present invention), the headers and the body texts are two types of heterogeneous nodes: header nodes and text nodes. The relationships between the nodes are the edges of the graph.
Referring to fig. 3, fig. 3 is a schematic structural diagram of message data to be sent according to the present invention, and in fig. 3, one message data to be sent includes one primary header, three secondary headers, two tertiary headers, and a text, where two secondary headers correspond to the text respectively, one secondary header corresponds to two tertiary headers, and two tertiary headers correspond to the text respectively, and then, a topological graph of a message structure corresponding to the message data to be sent includes 10 nodes and 9 edges.
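The node and edge count in the fig. 3 example can be verified with a short sketch. The node names are hypothetical; each header node is linked to its parent header or to its own body-text node.

```python
# Illustrative check of the fig. 3 example: one primary header, three
# secondary headers, two tertiary headers, and body-text leaves.
edges = [
    ("h1", "h2a"), ("h1", "h2b"), ("h1", "h2c"),   # primary -> 3 secondary
    ("h2a", "text_a"), ("h2b", "text_b"),          # 2 secondary -> own text
    ("h2c", "h3a"), ("h2c", "h3b"),                # 1 secondary -> 2 tertiary
    ("h3a", "text_c"), ("h3b", "text_d"),          # 2 tertiary -> own text
]
nodes = {n for edge in edges for n in edge}        # every endpoint is a node
```

Counting endpoints gives 10 distinct nodes (1 + 3 + 2 headers plus 4 text leaves) and 9 edges, matching the totals stated above.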
In a specific application, the message structure topology is represented by G = (V, E), where V is a set of nodes and E is a set of edges. And then converting the message structure topological graph into an adjacency matrix and a feature matrix, and inputting the adjacency matrix and the feature matrix into a generator (which can be a generator obtained by training or an initial generator). The adjacency matrix represents the logical relationship between nodes, and has the following form:
$$E = \begin{bmatrix} e_{11} & e_{12} & \cdots & e_{1N} \\ e_{21} & e_{22} & \cdots & e_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ e_{N1} & e_{N2} & \cdots & e_{NN} \end{bmatrix}$$
wherein $e_{ij}$ represents the logical relationship between the ith node and the jth node, which may include parallel, primary-secondary, and whole-part relationships, and N represents the total number of nodes. The feature matrix represents the feature sequence of each header node and each text node. The header features include the header level and the header content, where the header level may be first-level, second-level, third-level, and the like; the feature of a text node is its text content. The form of the feature matrix is as follows:
$$B = \begin{bmatrix} B_{11} & B_{12} & \cdots & B_{1M} \\ B_{21} & B_{22} & \cdots & B_{2M} \\ \vdots & \vdots & \ddots & \vdots \\ B_{N1} & B_{N2} & \cdots & B_{NM} \end{bmatrix}$$
wherein $B_{ij}$ represents the jth element in the feature sequence describing the ith node, and the feature sequence of each node is padded to a total length of M. The feature sequence of each node can be obtained through an established feature information dictionary; for feature sequences with different padded total lengths, the corresponding feature information dictionaries differ. Based on the content of each node, the feature information dictionary can be used to directly look up the feature sequence corresponding to that node.
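As a minimal illustration of this conversion, with a toy feature information dictionary and edge codes that are assumptions rather than the patent's actual dictionaries, the adjacency matrix and the zero-padded feature matrix might be built as follows:

```python
import numpy as np

# Toy message structure: two headers and one body node.
nodes = ["title_1", "title_2", "body"]
edges = {(0, 1): 1, (0, 2): 2}   # assumed codes: 1 = primary-secondary, 2 = whole-part
M = 4                            # padded feature-sequence length

# Assumed feature information dictionary mapping feature tokens to integers.
feature_dict = {"title": 1, "level1": 2, "level2": 3, "body": 4}
features = {
    "title_1": ["title", "level1"],
    "title_2": ["title", "level2"],
    "body":    ["body"],
}

N = len(nodes)
A = np.zeros((N, N), dtype=int)
for (i, j), rel in edges.items():
    A[i, j] = A[j, i] = rel      # symmetric logical relation e_ij

B = np.zeros((N, M), dtype=int)
for i, name in enumerate(nodes):
    seq = [feature_dict[tok] for tok in features[name]]
    B[i, :len(seq)] = seq        # zero-padded to total length M
```

The pair (A, B) is then what the encoder branch of the generator consumes.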
It can be understood that, in the present invention, the real message codes (referring to the first real message code and the second real message code) corresponding to the historical message data to be sent (referring to the first and second historical message data to be sent) come from real issued messages that actually exist, and each real issued message needs to be converted into its real message code (for example, xml code). The maximum length of a real message code can be set as K (a natural number), so that the historical real message codes have shape N x K, where N is the total number of nodes. This can be implemented by converting the content of the real issued message corresponding to the historical message data to be sent into the real message code by using a code conversion dictionary, where the size of the code conversion dictionary corresponds to the maximum length K.
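A minimal sketch of such a code conversion dictionary, assuming a character-level mapping (the granularity of the actual dictionary is not specified here), could look like this:

```python
# Each character of the real xml message code is mapped to an integer,
# and the sequence is zero-padded to the maximum length K.
xml_codes = ["<msg>hi</msg>", "<msg/>"]
vocab = {ch: i + 1 for i, ch in enumerate(sorted(set("".join(xml_codes))))}
K = max(len(code) for code in xml_codes)   # longest real message code

def encode(code, K, vocab):
    seq = [vocab[ch] for ch in code]
    return seq + [0] * (K - len(seq))      # 0 reserved for padding

encoded = [encode(code, K, vocab) for code in xml_codes]
```

Decoding for the issued message is the inverse lookup over the same dictionary, dropping the padding zeros.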
Wherein said step of generating a first resulting message code with said initial generator based on said first selected message structure topology map and said first selected random noise comprises: inputting the first selected random noise to a noise encoder in the initial generator to obtain a first feature vector; inputting a first selected message structure topology map into a topology encoder in the initial generator to obtain a second feature vector; inputting the first feature vector and the second feature vector into a merging layer in the initial generator to obtain a merged vector; inputting the merged vector into a sequence decoder in the initial generator to obtain an output message code sequence; inputting the output message code sequence into a first output layer in the initial generator to obtain a first resulting message code.
Referring to FIG. 4, FIG. 4 is a schematic diagram of the data processing flow of the initial generator of the present invention; in fig. 4, the generation of the first result message code is taken as an example. The first selected message structure topological graph is the message structure topology c, and the first selected random noise is the noise u. The upper branch of the encoder (three LSTM layers) is the noise encoder and the lower branch (three graph convolution layers) is the topology encoder, where the feature vector U is the first feature vector and the feature vector Z is the second feature vector. The generator comprises a sequence decoder (three LSTM layers), and the output 5G message-format xml code is the first result message code, the code format being xml.
In specific application, the third message structure topological graph is input in the form of a feature matrix and an adjacency matrix, through the input layer of the initial generator; the feature matrix and the adjacency matrix are then processed by three graph convolution layers, each of which can be expressed as follows:
$$H^{(l+1)} = f(H^{(l)}, A)$$
wherein $H^{(0)}$ is the data input to the input layer, A is the adjacency matrix, and l is the index of the graph convolution layer (for example, l = 2 indicates the second graph convolution layer). Different models are determined by selecting different mapping functions f and parameters; in the present application, the mapping function f is as follows:
$$f(H^{(l)}, A) = \sigma\left(D^{-1/2} A D^{-1/2} H^{(l)} W^{(l)}\right)$$
where σ is the relu activation function, A is the adjacency matrix, D is the diagonal degree matrix of A, and $W^{(l)}$ denotes the parameters of the lth graph convolution layer.
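The propagation rule above can be sketched in a few lines of numpy (self-loops and bias terms are omitted for brevity, and the toy weights are placeholders):

```python
import numpy as np

# One graph-convolution step: H_next = relu(D^{-1/2} A D^{-1/2} H W),
# following the mapping function f described above.
def gcn_layer(H, A, W):
    deg = A.sum(axis=1)                                  # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return np.maximum(D_inv_sqrt @ A @ D_inv_sqrt @ H @ W, 0.0)  # relu

A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])      # toy 3-node adjacency matrix
H0 = np.eye(3)                    # one-hot node features as input layer data
W0 = np.ones((3, 2))              # placeholder layer parameters
H1 = gcn_layer(H0, A, W0)         # output of the first graph convolution layer
```

Stacking three such layers with 256, 128, and 64 output channels reproduces the topology encoder branch.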
That is, the feature matrix and the adjacency matrix are input to the topology encoder, which outputs the second feature vector, representing the latent space vector of the first selected message structure topological graph.
The first selected random noise is the random noise corresponding to the first selected message structure topological graph; the random noise may be sampled from a normal distribution, and each first selected message structure topological graph corresponds to one random noise.
The first feature vector and the second feature vector from the two branches are then combined, and the sequence decoder extracts features from the combined vector to generate a message code sequence, namely the output message code sequence. The output message code sequence is then input to the first output layer in the initial generator to obtain the first result message code. The first output layer may be a Dense fully-connected layer.
Thus, two kinds of information pairs are obtained: the first information pairs, each formed by a first selected message structure topological graph and its first selected message code, which together form the first information pair set; and the second information pairs, each formed by a first selected message structure topological graph and its first result message code, which together form the second information pair set. Here, the generated message code is the first result message code for the corresponding first selected message structure topological graph. Each set includes a plurality of information pairs; for example, if there are E first selected message structure topological graphs and E first selected message codes, the first information pair set contains E pairs and so does the second. The first and second information pair sets are then mixed, and E information pairs are drawn from the mixture to form the third information pair set.
The value of each information pair in the first information pair set is 1 (true), indicating that the real message code is the most accurate result; the value of each information pair in the second information pair set and of each information pair in the third information pair set is 0 (false), indicating that the message code generated by the generator at this stage is fake and of low accuracy.
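As an illustration of this assignment scheme (the pair contents are placeholder strings), the labelled training batch for the discriminator can be assembled as follows:

```python
import random

# Real (topology, code) pairs are labelled 1; generated pairs and the
# mixed third set are labelled 0, as described above.
real_pairs = [("topo_1", "real_code_1"), ("topo_2", "real_code_2")]
fake_pairs = [("topo_1", "gen_code_1"), ("topo_2", "gen_code_2")]
mixed = random.sample(real_pairs + fake_pairs, len(real_pairs))  # third set

samples = ([(p, 1) for p in real_pairs]      # first set  -> value 1 (true)
           + [(p, 0) for p in fake_pairs]    # second set -> value 0 (false)
           + [(p, 0) for p in mixed])        # third set  -> value 0 (false)
```

The discriminator is then trained to reproduce these assigned values from the pairs alone.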
Specifically, the initial discriminator includes a plurality of fully-connected layers and a second output layer (the second output layer may be a Dense fully-connected layer), where each fully-connected layer includes 64 neurons with a relu activation function, and the second output layer includes one fully-connected neuron with a sigmoid activation function. After the sigmoid outputs a result, it is fed into a cross-entropy loss function to obtain the target parameters (the first target parameter and the second target parameter).
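A minimal numpy sketch of this discriminator follows: two of the 64-neuron relu fully-connected layers, a single sigmoid output neuron, and a binary cross-entropy loss. The input pair encodings, the exact layer count, and the random weights are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, W, b, act):
    """Fully-connected layer with relu or sigmoid activation."""
    z = x @ W + b
    return np.maximum(z, 0.0) if act == "relu" else 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=(4, 16))                  # 4 encoded (topology, code) pairs
W1, b1 = rng.normal(size=(16, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(64, 64)) * 0.1, np.zeros(64)
W3, b3 = rng.normal(size=(64, 1)) * 0.1, np.zeros(1)

h = dense(dense(x, W1, b1, "relu"), W2, b2, "relu")
p = dense(h, W3, b3, "sigmoid")               # probability the pair is real

y = np.array([[1.], [1.], [0.], [0.]])        # assigned values from the sets
bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))  # cross-entropy loss
```

The scalar `bce` plays the role of the target parameter that drives the parameter update described below.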
Wherein, the first objective function in the present application is as follows:
$$L_D = -\frac{1}{E}\sum_{i=1}^{E}\left[\log D(c_i, x_i) + \log\left(1 - D(c_i, \hat{x}_i)\right) + \log\left(1 - D(c_i, \bar{x}_i)\right)\right]$$

wherein $L_D$ is the loss function value of the initial discriminator, i.e. the first target parameter; E is the number of information pairs in each set (the three information pair sets contain the same number of pairs); $D(c_i, x_i)$ is the discriminator's output for each information pair in the first information pair set, $D(c_i, \hat{x}_i)$ its output for each information pair in the second information pair set, and $D(c_i, \bar{x}_i)$ its output for each information pair in the third information pair set, each compared against the assignment given above. The parameters of the initial discriminator are updated by using the following formula:
$$\theta_{d1} = \theta_d - \eta_1 \nabla_{\theta_d} L_D$$
wherein $\theta_d$ is a parameter of the initial discriminator, $\theta_{d1}$ is the corresponding parameter of the new discriminator (the initial discriminator after the parameter update), and $\eta_1$ is the learning rate of the initial discriminator.
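The update rule above is plain gradient descent; a toy example with a quadratic loss L = ‖θ‖² (the loss function here is an assumption for illustration only):

```python
import numpy as np

# theta_new = theta - eta * dL/dtheta, matching the update formula above.
def update(theta, grad, eta):
    return theta - eta * grad

theta_d = np.array([2.0, -1.0])     # current discriminator parameters
grad = 2.0 * theta_d                # gradient of the toy loss L = ||theta||^2
theta_d1 = update(theta_d, grad, eta=0.1)   # eta plays the role of eta_1
```

The same one-line rule, with learning rate η₂, is used later for the generator parameters θ_g.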
The first preset condition is that the first target parameter converges: for the initial discriminator trained on one batch of second training samples, the value of the first target parameter changes very little compared with the initial discriminators trained on the previous batches, i.e. the first target parameter is relatively stable over the last several training iterations. The initial discriminator trained on the last batch of second training samples is then taken as the discriminator.
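The convergence test can be sketched as a simple change-below-tolerance check on the target parameter across batches; the loss sequence below is simulated, and the tolerance value is an assumption.

```python
# Stop training once the target parameter (loss) changes by less than
# a tolerance between consecutive batches, as in the preset condition.
def train_until_converged(losses, tol=1e-3):
    """Return the index of the first batch whose loss change is below tol."""
    prev = None
    for step, loss in enumerate(losses):
        if prev is not None and abs(loss - prev) < tol:
            return step          # target parameter has converged
        prev = loss
    return len(losses) - 1       # fall back to the last batch

simulated = [1.0, 0.5, 0.25, 0.2499, 0.2498]
stop = train_until_converged(simulated)
```

The same check, applied to the second target parameter, implements the second preset condition for the generator.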
Further, specifically, the step of training an initial generator by using the second message structure topological graph, the first real message code, and the discriminator obtained by training to obtain the generator includes: determining a second selected message structure topological graph in the second message structure topological graph, and determining a second selected message code corresponding to the second selected message structure topological graph in the first real message code; obtaining a second selected random noise corresponding to the second selected message structure topological graph; generating a second result message code with the initial generator based on the second selected message structure topological graph and the second selected random noise; inputting the second result message code into the discriminator to obtain a judgment result; obtaining a second target parameter based on a second target function and the judgment result; updating the initial generator with the second target parameter to obtain a new generator; and taking the new generator as the initial generator, and returning to the step of determining a second selected message structure topological graph from the second message structure topological graph until the second target parameter meets a second preset condition, so as to obtain the generator.
It is understood that the steps before generating the second result message code mirror those in the training process of the discriminator above and are not repeated here. The training sample used for the discriminator and its intermediate data may also be reused directly for the generator; for example, the first result message code obtained during discriminator training may be used as the second result message code, so that the steps before generating the result message code are omitted and the already-generated data is used directly for training.
After the discriminator is obtained, it is used to judge, that is, to assign a value to each pair formed with the second result message code. If the value assigned to the message code generated by the initial generator (the second result message code) is 0, the accuracy of the initial generator is low; if the value is 1, the accuracy is high, although early in training the value is usually 0. Specifically, the second objective function is as follows:
$$L_G = \frac{1}{F}\sum_{i=1}^{F}\log\left(1 - D\left(c_i, G(c_i, z_i)\right)\right)$$

wherein $L_G$ is the second target parameter; F is the number of training samples in a batch, i.e. the number of second selected message structure topological graphs, each of which corresponds to one second selected message code; $G(c_i, z_i)$ is any one second result message code; and $D(c_i, G(c_i, z_i))$ is the value the discriminator assigns to $G(c_i, z_i)$. The parameters of the initial generator are updated by using the following formula:
$$\theta_{g1} = \theta_g - \eta_2 \nabla_{\theta_g} L_G$$
wherein $\theta_g$ is a parameter of the initial generator, $\theta_{g1}$ is the corresponding parameter of the new generator (the initial generator after the parameter update), and $\eta_2$ is the learning rate of the initial generator.
The second preset condition is that the second target parameter converges: for the initial generator trained on one batch of first training samples, the value of the second target parameter changes very little compared with the initial generators trained on the previous batches, i.e. the second target parameter is relatively stable over the last several training iterations. The initial generator trained on the last batch of first training samples is then taken as the generator.
It can be understood that the step of inputting the first message structure topological graph into the generator to obtain the issued message code is similar to the step, in the training process, of obtaining the first result message code from the first selected message structure topological graph and the first selected random noise, and is not repeated here.
Referring to fig. 5, fig. 5 is a schematic diagram of the training process of the initial generator and the initial discriminator according to the present invention. In fig. 5, the graph-conditional generative network is the network formed by the initial generator and the initial discriminator, where the graph condition is the message structure topological graph added to the inputs of the initial generator and the initial discriminator.
Training process of the initial discriminator: based on the message structure topological graph c corresponding to the message to be sent in the training sample, the initial generator generates a result message code x; the result message code and the message structure topological graph are then used to train the initial discriminator to obtain the trained discriminator. The specific training process is described above and is not repeated here.
Training process of the initial generator: based on the message structure topological graph c corresponding to the message to be sent in the training sample, the initial generator generates a result message code x; the trained discriminator then judges the result message code (outputting 1 for true and 0 for false), and the output judgment result D(c, x) is fed back to update the parameters of the initial generator, thereby completing the training of the initial generator.
Step S14: and generating an issued message based on the issued message code.
After the issued message code is obtained, it is converted into the specific issued message to be sent.
The invention provides a message generation method, which is applied to a message management device and comprises the following steps: receiving message data to be sent; converting the message data to be sent into a first message structure topological graph; inputting the first message structure topological graph into a generator obtained by training to generate an issued message code in a preset format; and generating an issued message based on the issued message code.
In the existing message generation method, an industry technician is required to manually convert the message data to be sent to obtain a message code, and the final issued message is obtained based on that message code, so the message code takes a long time to obtain and the issued message is obtained inefficiently. In the invention, the message management device directly uses the trained generator to generate the issued message code in the preset format based on the first message structure topological graph corresponding to the message data to be sent, without requiring a technician to convert the message data manually, thereby reducing conversion time and improving the efficiency of obtaining the issued message.
Referring to fig. 6, fig. 6 is a block diagram of a first embodiment of a message generating apparatus according to the present invention. The apparatus is applied to a message management device and, based on the same inventive concept as the foregoing embodiment, includes:
a receiving module 10, configured to receive message data to be sent;
a first conversion module 20, configured to convert the message data to be sent into a first message structure topology map;
a generating module 30, configured to input the first message structure topological graph into a generator obtained through training, so as to generate an issued message code in a preset format;
and a second conversion module 40, configured to generate a delivered message based on the delivered message code.
It should be noted that, since the steps executed by the apparatus of this embodiment are the same as the steps of the foregoing method embodiment, the specific implementation and the achievable technical effects thereof can refer to the foregoing embodiment, and are not described herein again.
The above description is only an alternative embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, which are within the spirit of the present invention, are included in the scope of the present invention.

Claims (10)

1. A method of message generation, the method comprising the steps of:
receiving message data to be sent;
converting the message data to be sent into a first message structure topological graph;
inputting the first message structure topological graph into a generator obtained by training, so as to generate an issued message code in a preset format;
and generating an issued message based on the issued message code.
2. The method of claim 1, wherein prior to the step of receiving message data to be transmitted, the method further comprises:
acquiring a first training sample, wherein the first training sample comprises first historical message data to be sent and a first real message code corresponding to the first historical message data to be sent;
converting the first historical to-be-sent message data into a second message structure topological graph;
and training an initial generator by utilizing the second message structure topological graph, the first real message code and the discriminator obtained by training, so as to obtain the generator.
3. The method of claim 2, wherein prior to the step of training an initial generator to obtain the generator using the second message structure topology, the first real message code, and a discriminator obtained by training, the method further comprises:
acquiring a second training sample, wherein the second training sample comprises second historical message data to be sent and a second real message code corresponding to the second historical message data to be sent;
converting the second historical message data to be sent into a third message structure topological graph;
and training an initial discriminator by utilizing the third message structure topological graph, the second real message code and the initial generator to obtain the discriminator.
4. The method of claim 3, wherein the step of training an initial discriminator using the third message structure topology, the second real message code, and the initial generator to obtain the discriminator comprises:
determining a first selected message structure topological graph in the third message structure topological graph, and determining a first selected message code corresponding to the first selected message structure topological graph in the second real message code;
obtaining a first selected random noise corresponding to the first selected message structure topology map;
generating a first resulting message code with the initial generator based on the first selected message structure topology map and the first selected random noise;
merging a first information pair set and a second information pair set to obtain a third information pair set, wherein the first selected message structure topological graph and the first selected message code form the information pairs in the first information pair set, and the first selected message structure topological graph and the first result message code form the information pairs in the second information pair set;
assigning values to each information pair in the first information pair set, each information pair in the second information pair set and each information pair in the third information pair set respectively;
obtaining a first target parameter based on a first target function, the assignment of each information pair in the first information pair set, the assignment of each information pair in the second information pair set, and the assignment of each information pair in the third information pair set;
updating the initial discriminator by using the first target parameter to obtain a new discriminator;
and taking the new discriminator as the initial discriminator, and returning to execute the step of determining a first selected message structure topological graph from the third message structure topological graph until a first target parameter meets a first preset condition to obtain the discriminator.
5. The method of claim 4, wherein the step of training an initial generator to obtain the generator using the second message structure topology, the first real message code, and a discriminator obtained by training comprises:
determining a second selected message structure topological graph in the second message structure topological graph, and determining a second selected message code corresponding to the second selected message structure topological graph in the first real message code;
obtaining a second selected random noise corresponding to the second selected message structure topology map;
generating a second resulting message code with the initial generator based on the second selected message structure topology map and the second selected random noise;
inputting the second result message code into the discriminator to obtain a judgment result;
obtaining a second target parameter based on a second target function and the judgment result;
updating the initial generator with the second target parameter to obtain a new generator;
and taking the new generator as the initial generator, and returning to execute the step of determining a second selected message structure topological graph from the second message structure topological graph until a second target parameter meets a second preset condition to obtain the generator.
6. The method of claim 5, wherein said step of generating a first resulting message code with said initial generator based on said first selected message structure topology map and said first selected random noise comprises:
inputting the first selected random noise into a noise encoder in the initial generator to obtain a first feature vector;
inputting a first selected message structure topology map into a topology encoder in the initial generator to obtain a second feature vector;
inputting the first feature vector and the second feature vector into a merging layer in the initial generator to obtain a merged vector;
inputting the merged vector into a sequence decoder in the initial generator to obtain an output message code sequence;
inputting the output message code sequence into a first output layer in the initial generator to obtain a first resulting message code.
7. The method of claim 6, wherein:
the noise encoder comprises three long short-term memory (LSTM) layers, each LSTM layer has 32 neurons, and the activation function is relu;
the topology encoder comprises a first graph convolution layer, a second graph convolution layer and a third graph convolution layer; the number of convolution kernels of the first graph convolution layer is 256, and its activation function is relu; the number of convolution kernels of the second graph convolution layer is 128, and its activation function is relu; the number of convolution kernels of the third graph convolution layer is 64, and its activation function is lambda;
the sequence decoder comprises three long short-term memory layers, the number of neurons of each layer is set to 128, and the activation function is set to relu;
the activation function of the first output layer is softmax, and the number of fully-connected neurons in the first output layer is the same as the output dimension of the first resulting message code;
the initial discriminator comprises a plurality of fully-connected layers and a second output layer, wherein each fully-connected layer comprises 64 neurons and uses the relu activation function, and the second output layer comprises one fully-connected neuron with a sigmoid activation function.
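The layer specification in claim 7 can be captured as a plain configuration structure. The field names below are hypothetical (the claim does not name a serialization format), and "lambda" for the third graph-convolution layer is kept verbatim from the claim:

```python
# Claim 7's generator and discriminator widths as plain data.
# Field names ("type", "units", "kernels", ...) are hypothetical.
generator_spec = {
    "noise_encoder": [
        {"type": "lstm", "units": 32, "activation": "relu"}
        for _ in range(3)
    ],
    "topology_encoder": [
        {"type": "graph_conv", "kernels": 256, "activation": "relu"},
        {"type": "graph_conv", "kernels": 128, "activation": "relu"},
        {"type": "graph_conv", "kernels": 64,  "activation": "lambda"},
    ],
    "sequence_decoder": [
        {"type": "lstm", "units": 128, "activation": "relu"}
        for _ in range(3)
    ],
    # Width equals the output dimension of the resulting message code.
    "output_layer": {"type": "dense", "activation": "softmax"},
}

discriminator_spec = {
    "hidden": {"type": "dense", "units": 64, "activation": "relu"},
    "output": {"type": "dense", "units": 1, "activation": "sigmoid"},
}
```

Note how the encoder widths taper (256, 128, 64), a common pattern for compressing a graph into a fixed-size feature vector before merging with the noise features.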
8. An apparatus for message generation, the apparatus comprising:
a receiving module, configured to receive message data to be sent;
a first conversion module, configured to convert the message data to be sent into a first message structure topological graph;
a generating module, configured to input the first message structure topological graph into a trained generator to generate a delivery message code in a preset format;
and a second conversion module, configured to generate the delivery message based on the delivery message code.
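The four modules of claim 8 form a simple pipeline: receive, convert to a topology graph, generate a code, convert back to a message. A hypothetical sketch wiring them together (the class and method names, and the toy chain-graph conversion, are illustrative only):

```python
# Hypothetical sketch of the four modules of claim 8 wired in sequence.
class MessageGenerationApparatus:
    def __init__(self, generator):
        self.generator = generator         # trained generator (claim 5)

    def receive(self, message_data):       # receiving module
        return message_data

    def to_topology(self, message_data):   # first conversion module
        # Toy conversion: one node per field, chain edges between fields.
        fields = list(message_data)
        edges = list(zip(fields, fields[1:]))
        return {"nodes": fields, "edges": edges}

    def generate_code(self, topology):     # generating module
        return self.generator(topology)

    def to_message(self, code):            # second conversion module
        return "".join(code)

    def run(self, message_data):
        topo = self.to_topology(self.receive(message_data))
        return self.to_message(self.generate_code(topo))

# Toy "generator" that emits one character per node of the topology graph.
msg = MessageGenerationApparatus(
    lambda t: [n[0] for n in t["nodes"]]
).run(["head", "body", "tail"])
# msg == "hbt"
```

The apparatus mirrors the method claims: the first conversion module corresponds to the topology-graph construction, and the generating module wraps the trained GAN generator.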
9. A message management device, characterized in that the message management device comprises: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the message generation method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the message generation method according to any one of claims 1 to 7.
CN202110629209.4A 2021-06-04 2021-06-04 Message generation method, device, message management equipment and storage medium Active CN115442324B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110629209.4A CN115442324B (en) 2021-06-04 2021-06-04 Message generation method, device, message management equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115442324A true CN115442324A (en) 2022-12-06
CN115442324B CN115442324B (en) 2023-08-18

Family

ID=84271706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110629209.4A Active CN115442324B (en) 2021-06-04 2021-06-04 Message generation method, device, message management equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115442324B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7689531B1 (en) * 2005-09-28 2010-03-30 Trend Micro Incorporated Automatic charset detection using support vector machines with charset grouping
US20170287465A1 (en) * 2016-03-31 2017-10-05 Microsoft Technology Licensing, Llc Speech Recognition and Text-to-Speech Learning System
CN108648135A (en) * 2018-06-01 2018-10-12 深圳大学 Hide model training and application method, device and computer readable storage medium
US20190155905A1 (en) * 2017-11-17 2019-05-23 Digital Genius Limited Template generation for a conversational agent
CN109885667A (en) * 2019-01-24 2019-06-14 平安科技(深圳)有限公司 Document creation method, device, computer equipment and medium
CN111865752A (en) * 2019-04-23 2020-10-30 北京嘀嘀无限科技发展有限公司 Text processing device, method, electronic device and computer readable storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHONG MENG: "Adversarial Speaker Adaptation", ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) *
LI KAIWEI, MA LI: "Emotional Dialogue Response Generation Based on Generative Adversarial Networks" (基于生成对抗网络的情感对话回复生成), Computer Engineering and Applications (计算机工程与应用) *

Also Published As

Publication number Publication date
CN115442324B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN110598213A (en) Keyword extraction method, device, equipment and storage medium
US20180213077A1 (en) Method and apparatus for controlling smart device, and computer storage medium
CN110069715A (en) A kind of method of information recommendation model training, the method and device of information recommendation
US20220374776A1 (en) Method and system for federated learning, electronic device, and computer readable medium
CN113792851B (en) Font generation model training method, font library building method, font generation model training device and font library building equipment
CN111222647A (en) Federal learning system optimization method, device, equipment and storage medium
US20230186607A1 (en) Multi-task identification method, training method, electronic device, and storage medium
CN111753498B (en) Text processing method, device, equipment and storage medium
CN113902010A (en) Training method of classification model, image classification method, device, equipment and medium
CN114548416A (en) Data model training method and device
CN113392197A (en) Question-answer reasoning method and device, storage medium and electronic equipment
CN114004905B (en) Method, device, equipment and storage medium for generating character style pictogram
CN111814044B (en) Recommendation method, recommendation device, terminal equipment and storage medium
CN113723607A (en) Training method, device and equipment of space-time data processing model and storage medium
CN115442324B (en) Message generation method, device, message management equipment and storage medium
CN109754319B (en) Credit score determination system, method, terminal and server
CN116521832A (en) Dialogue interaction method, device and system, electronic equipment and storage medium
CN112200198B (en) Target data feature extraction method, device and storage medium
CN114996578A (en) Model training method, target object selection method, device and electronic equipment
CN114528893A (en) Machine learning model training method, electronic device and storage medium
CN110442633A (en) Structural data generation method and device, storage medium and electronic equipment
EP4318375A1 (en) Graph data processing method and apparatus, computer device, storage medium and computer program product
CN115994668B (en) Intelligent community resource management system
CN114239608B (en) Translation method, model training method, device, electronic equipment and storage medium
CN113051379B (en) Knowledge point recommendation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant