CN116030296A - Social platform data mining method and system for graphic data collaboration - Google Patents

Social platform data mining method and system for graphic data collaboration

Info

Publication number
CN116030296A
CN116030296A
Authority
CN
China
Prior art keywords
data
text
image
understanding
semantic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211440379.9A
Other languages
Chinese (zh)
Inventor
张寒冬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Dongyu Intelligent Technology Co ltd
Original Assignee
Jiangxi Dongyu Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Dongyu Intelligent Technology Co ltd filed Critical Jiangxi Dongyu Intelligent Technology Co ltd
Priority to CN202211440379.9A priority Critical patent/CN116030296A/en
Publication of CN116030296A publication Critical patent/CN116030296A/en
Pending legal-status Critical Current


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a social platform data mining method and system based on graphic data collaboration, which combine the image data and text data in user social data to construct an emotion pattern recognition scheme based on social platform data mining. Specifically, text data and image data in the user social data published on a social platform are first crawled by crawler software; the text data are then passed through a semantic encoder and a text understanding model to obtain a text understanding image; the text understanding image is associated with the image data at the data level and passed through a high-performance convolutional neural network model to obtain a classification feature map; finally, the classification feature map is passed through a multi-label classifier to obtain a classification result. In this way, the recognition results of the text data and the image data can be obtained accurately, improving the accuracy of emotion pattern recognition based on social platform data mining.

Description

Social platform data mining method and system for graphic data collaboration
Technical Field
The application relates to the field of intelligent data processing, and in particular relates to a social platform data mining method and system based on graphic data collaboration.
Background
With the rapid development of mobile internet technology and social media platforms, the number of network users continues to grow, and every day a large number of users publish their moods, states, views, evaluations and the like on social media platforms, producing data in many forms and modalities, such as text, images and videos.
Accurately identifying emotionally valuable user information from the social media data generated by users makes it possible to analyze hot events on the network effectively and to monitor the emotional tendencies that netizens express through social platforms. However, with the development of internet technology and the popularization of social media, users no longer express emotions and evaluate things through text alone; people increasingly prefer to express personal views, comment on events and convey emotions on social media through combinations of text and pictures. Images in social media carry richer emotional information than text, but considering only the emotion of an image ignores the contextual background information, making emotion recognition inaccurate.
Accordingly, an emotion pattern analysis scheme based on social platform data is desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. The embodiment of the application provides a social platform data mining method and system based on graphic data collaboration, which combine the image data and text data in user social data to construct an emotion pattern recognition scheme based on social platform data mining. Specifically, text data and image data in the user social data published on a social platform are first crawled by crawler software; the text data are then passed through a semantic encoder and a text understanding model to obtain a text understanding image; the text understanding image is associated with the image data at the data level and passed through a high-performance convolutional neural network model to obtain a classification feature map; finally, the classification feature map is passed through a multi-label classifier to obtain a classification result. In this way, the recognition results of the text data and the image data can be obtained accurately, improving the accuracy of emotion pattern recognition based on social platform data mining.
According to one aspect of the application, there is provided a social platform data mining method based on graphic data collaboration, which includes:
acquiring user social data, wherein the user social data comprises image data and text data;
after word segmentation is carried out on text data in the social data of the user, each word in the text data is converted into a word embedding vector through a word embedding layer so as to obtain a sequence of the word embedding vector;
interpolation is carried out on each word embedding vector in the sequence of word embedding vectors so as to obtain a sequence of word embedding enhancement vectors;
the word embedding enhancement vector sequence passes through a semantic encoder to obtain a text semantic understanding feature vector;
the text semantic understanding feature vector passes through a text understanding model to obtain a text understanding image;
performing sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image;
combining the optimized text understanding image with the image data in the user social data to obtain multi-channel image data;
passing the multi-channel image data through a first convolutional neural network using an efficient attention mechanism to obtain a classification feature map; and
passing the classification feature map through a multi-label classifier to obtain a classification result, wherein the classification result is an emotion type label to which the user social data belongs.
In the social platform data mining method based on graphic data collaboration, the semantic encoder is a context encoder based on a converter (transformer); wherein passing the sequence of word embedding enhancement vectors through the semantic encoder to obtain a text semantic understanding feature vector comprises: performing global context semantic encoding on the sequence of word embedding enhancement vectors using the converter-based context encoder to obtain a plurality of semantic feature vectors; and concatenating the plurality of semantic feature vectors to obtain the text semantic understanding feature vector.
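As a rough sketch of this encoding step, the following pure-Python snippet gives each word embedding enhancement vector a global-context encoding via scaled dot-product self-attention and then concatenates the per-word results into one feature vector. The identity Q/K/V projections and the single attention layer are simplifying assumptions; the patent does not specify the converter's internal configuration.

```python
import math

def self_attention_encode(vectors):
    """Global context semantic encoding sketch: each word attends to every
    word in the sequence (scaled dot-product attention with identity
    projections), then the context-aware vectors are concatenated."""
    d = len(vectors[0])
    scale = math.sqrt(d)
    encoded = []
    for q in vectors:
        # attention scores of this word against every word in the sequence
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in vectors]
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # context-aware semantic vector: attention-weighted sum of all vectors
        encoded.append([sum(w * v[i] for w, v in zip(weights, vectors))
                        for i in range(d)])
    # concatenate the per-word semantic vectors into the final feature vector
    return [x for vec in encoded for x in vec]
```

With two one-hot word vectors, each encoded position remains a convex mixture of the inputs, weighted toward the word itself.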
In the social platform data mining method based on graphic data collaboration, the semantic encoder is a two-way long-short-term memory neural network model.
In the social platform data mining method based on graphic data collaboration, performing sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image includes: calculating the difference between one and each feature value of the text understanding image to obtain a first difference value; calculating the difference between one and the average value of all feature values of the text understanding image to obtain a second difference value; dividing each feature value of the text understanding image by the average value of all feature values and taking the base-two logarithm to obtain a first logarithmic value; dividing the first difference value by the second difference value and taking the base-two logarithm to obtain a second logarithmic value; and summing the feature value multiplied by the first logarithmic value and the first difference value multiplied by the second logarithmic value to obtain the feature value of the optimized text understanding image.
In the social platform data mining method based on graphic data collaboration, the performing sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image includes: performing sparsity implicit limiting factor correction on the text understanding image with the following formula to obtain the optimized text understanding image; the formula is:
m'_{i,j} = m_{i,j} · log₂(m_{i,j}/m̄) + (1 − m_{i,j}) · log₂((1 − m_{i,j})/(1 − m̄))
wherein m_{i,j} is the feature value of the text understanding image, m̄ is the average of all feature values of the text understanding image, and m'_{i,j} is the feature value of the optimized text understanding image.
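The correction described above can be applied element-wise as in the snippet below, which follows the KL-divergence-style form: each feature value contributes m·log₂(m/m̄) plus (1 − m)·log₂((1 − m)/(1 − m̄)). The eps guard for the logarithms is an added assumption, since the patent does not specify numerical safeguards.

```python
import math

def sparsity_correction(m, eps=1e-6):
    """Sparsity implicit limiting factor correction sketch for a 2-D
    feature map given as a list of rows. Feature values are assumed to
    lie in (0, 1); eps guards the logarithms against zero arguments."""
    values = [v for row in m for v in row]
    mu = sum(values) / len(values)  # average of all feature values
    out = []
    for row in m:
        out_row = []
        for v in row:
            first_log = math.log2(max(v, eps) / max(mu, eps))
            second_log = math.log2(max(1 - v, eps) / max(1 - mu, eps))
            out_row.append(v * first_log + (1 - v) * second_log)
        out.append(out_row)
    return out
```

A uniform feature map equals its own mean, so both logarithms vanish and the corrected map is all zeros, which is a quick sanity check of the formula.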
In the social platform data mining method based on graphic data collaboration, passing the multi-channel image data through a first convolutional neural network using an efficient attention mechanism to obtain the classification feature map includes: passing the multi-channel image data through the multiple convolutional layers of the first convolutional neural network to output a high-dimensional feature map from the last of the multiple convolutional layers; performing global mean pooling on each feature matrix of the high-dimensional feature map to obtain a channel feature vector; performing one-dimensional convolutional encoding on the channel feature vector to obtain an inter-channel correlation feature vector; inputting the inter-channel correlation feature vector into a Sigmoid activation function to obtain a probabilistic inter-channel correlation feature vector; and weighting each feature matrix of the high-dimensional feature map by the feature value at each position of the probabilistic inter-channel correlation feature vector to obtain the classification feature map.
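A minimal sketch of this channel-attention procedure follows: per-channel global mean pooling, a one-dimensional convolution across the channel dimension, a Sigmoid, then per-channel reweighting. The fixed 3-tap kernel is a placeholder assumption; in the described network the one-dimensional convolution weights would be learned.

```python
import math

def channel_attention(feature_maps, kernel=None):
    """Efficient channel-attention sketch. feature_maps is a list of
    per-channel 2-D matrices (lists of rows). Returns the reweighted
    feature maps and the per-channel attention weights."""
    if kernel is None:
        kernel = [0.25, 0.5, 0.25]  # assumed 3-tap kernel (not trained)
    # 1. global mean pooling of each channel's feature matrix
    pooled = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
              for fm in feature_maps]
    # 2. one-dimensional convolution across channels (zero padding)
    padded = [0.0] + pooled + [0.0]
    conv = [sum(kernel[j] * padded[i + j] for j in range(3))
            for i in range(len(pooled))]
    # 3. Sigmoid gives the probabilistic inter-channel correlation vector
    weights = [1.0 / (1.0 + math.exp(-c)) for c in conv]
    # 4. weight each channel's feature matrix by its attention value
    reweighted = [[[w * v for v in row] for row in fm]
                  for fm, w in zip(feature_maps, weights)]
    return reweighted, weights
```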
In the social platform data mining method based on graphic data collaboration, the passing the multi-channel image data through the multi-layer convolution layer of the first convolution neural network to output a high-dimensional feature map by the last convolution layer of the multi-layer convolution layer includes: each convolution layer in the multi-layer convolution layers of the first convolution neural network respectively carries out convolution processing, pooling processing and nonlinear activation processing on input data in a forward transmission process so as to output the high-dimensional feature map by the last convolution layer in the multi-layer convolution layers.
In the social platform data mining method based on graphic data collaboration, passing the classification feature map through a multi-label classifier to obtain a classification result comprises: processing the classification feature map with the multi-label classifier according to the following formula to obtain the classification result; wherein the formula is: softmax{(W_n, B_n) : ⋯ : (W_1, B_1) | Project(F)}, where Project(F) represents projecting the classification feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n represent the bias matrices of the fully connected layers of each layer.
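The classifier head can be sketched as below: flatten (project) the classification feature map into a vector, apply the chain of fully connected layers (W_1, B_1) through (W_n, B_n), then take the softmax. The flattening order and the illustrative weights are assumptions, not trained parameters.

```python
import math

def classify(feature_map, layers):
    """Multi-label classifier sketch. feature_map is a list of channel
    matrices; layers is a list of (W, B) pairs, where W is a list of
    weight rows and B a list of biases. Returns class probabilities."""
    # Project(F): flatten the classification feature map into a vector
    x = [v for channel in feature_map for row in channel for v in row]
    for W, B in layers:
        # fully connected layer: x <- W x + B
        x = [sum(w * xi for w, xi in zip(w_row, x)) + b
             for w_row, b in zip(W, B)]
    # softmax over the final logits
    exps = [math.exp(v) for v in x]
    total = sum(exps)
    return [e / total for e in exps]
```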
According to another aspect of the present application, there is provided a social platform data mining system based on graphic data collaboration, comprising:
The social data acquisition module is used for acquiring user social data, wherein the user social data comprises image data and text data;
the text data embedding module is used for word segmentation of text data in the user social data and then converting each word in the text data into a word embedding vector through the word embedding layer so as to obtain a sequence of the word embedding vector;
the interpolation module is used for interpolating each word embedding vector in the sequence of word embedding vectors to obtain a sequence of word embedding enhancement vectors;
the semantic coding module is used for embedding the word into the sequence of the enhancement vector and obtaining a text semantic understanding feature vector through a semantic encoder;
the text understanding module is used for enabling the text semantic understanding feature vector to pass through a text understanding model to obtain a text understanding image;
the text understanding optimization module is used for carrying out sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image;
the multi-channel synthesis module is used for merging the optimized text understanding image with the image data in the user social data to obtain multi-channel image data;
a convolutional encoding module for passing the multi-channel image data through a first convolutional neural network using an efficient attention mechanism to obtain a classification feature map; and
the emotion label generation module, used for passing the classification feature map through a multi-label classifier to obtain a classification result, wherein the classification result is an emotion type label to which the user social data belongs.
Compared with the prior art, the social platform data mining method and system based on graphic data collaboration provided by the application combine the image data and text data in user social data to construct an emotion pattern recognition scheme based on social platform data mining. Specifically, text data and image data in the user social data published on a social platform are first crawled by crawler software; the text data are then passed through a semantic encoder and a text understanding model to obtain a text understanding image; the text understanding image is associated with the image data at the data level and passed through a high-performance convolutional neural network model to obtain a classification feature map; finally, the classification feature map is passed through a multi-label classifier to obtain a classification result. In this way, the recognition results of the text data and the image data can be obtained accurately, improving the accuracy of emotion pattern recognition based on social platform data mining.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate the application and not constitute a limitation to the application. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 illustrates an application scenario diagram of a social platform data mining method based on graph-text data collaboration according to an embodiment of the application.
FIG. 2 illustrates a flow chart of a social platform data mining method based on graph-text data collaboration in accordance with an embodiment of the present application.
Fig. 3 illustrates an architectural diagram of a social platform data mining method based on graph-text data collaboration according to an embodiment of the present application.
FIG. 4 illustrates a flow chart of performing sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image in the social platform data mining method based on graph-text data collaboration according to an embodiment of the application.
FIG. 5 illustrates a flow chart of a method of social platform data mining based on graph-text data collaboration, where multi-channel image data is passed through a first convolutional neural network using an efficient attention mechanism to derive a classification feature map, according to an embodiment of the present application.
FIG. 6 illustrates a block diagram of a social platform data mining system based on graphic data collaboration according to an embodiment of the application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Scene overview
Accordingly, in the technical scheme of the application, an emotion pattern recognition scheme based on social platform data mining is attempted to be constructed by combining image data and text data in user social data.
Specifically, user social data published by a user on a social platform is first crawled through crawler software. As described above, with the development of internet technology and the popularization of social media, the expression of emotion and the evaluation of things by users in social media are not limited to text, and people increasingly like to express personal views, comment on an event and express emotion on social media in the form of data such as text and pictures. That is, the user social data includes image data and text data.
For the text data, a semantic encoder may be used to extract the high-dimensional latent semantic information in the text. However, text data in user social data may be too concise, that is, it may lack context information, so a semantic encoder alone cannot accurately capture its semantics, which affects the accuracy of the final emotion topic analysis. Therefore, in the technical scheme of the application, after the text data in the user social data are converted into word embedding vectors, data augmentation is performed on the word embedding vector of each word by interpolation. It should be understood that this data processing essentially adds context information to the text data in the user social data so as to improve the accuracy of its semantic understanding.
Specifically, word segmentation is first performed on the text data in the user social data, and each word in the text data is then converted into a word embedding vector through a word embedding layer to obtain a sequence of word embedding vectors. Next, each word embedding vector in the sequence of word embedding vectors is interpolated to obtain a sequence of word embedding enhancement vectors. The sequence of word embedding enhancement vectors is then passed through a semantic encoder to obtain a text semantic understanding feature vector. In the technical solution of the present application, the semantic encoder may be implemented as a converter (transformer)-based context encoder, a bidirectional long short-term memory model, a long short-term memory model, an RNN neural network, etc.
In particular, in embodiments of the present application, it is considered that the picture data and text data a user publishes on a social platform are often related; for example, the text data may describe objects in the image data, or express the emotion conveyed by the image data. Therefore, in the technical scheme of the application, the text semantic understanding feature vector is passed through a text understanding model to obtain a text understanding image. In one specific example, the text understanding model is a generative adversarial network generator model.
The text understanding image is then combined with the image data in the user social data to obtain multi-channel image data; that is, the understanding image obtained from the text and the image data in the user social data are associated at the data level, and a classification feature map is obtained through a convolutional neural network model with excellent performance in the field of image feature extraction. The classification feature map is then passed through a multi-label classifier to obtain a classification result, wherein the classification result is an emotion type label to which the user social data belongs.
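The data-level association step can be sketched as a simple channel stack. The channels-first layout and the requirement of matching spatial sizes are assumptions; the patent only states that the two images are merged into multi-channel image data.

```python
def merge_channels(text_image, rgb_image):
    """Data-level association sketch: stack the text understanding image
    as an extra channel on top of the RGB channels of the social image,
    giving 4-channel input for the convolutional network.

    text_image: 2-D matrix (list of rows).
    rgb_image: list of three 2-D channel matrices [R, G, B]."""
    # both images are assumed to share the same spatial size
    assert len(text_image) == len(rgb_image[0]), "spatial sizes must match"
    return rgb_image + [text_image]  # channels-first: [R, G, B, text]
```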
In particular, in the technical solution of the present application, since the text understanding image is produced by a generative model whose source data were augmented through interpolation, its representation is inherently biased toward local semantics. It is therefore desirable to further optimize the ability of the text understanding image to express the global text semantics corresponding to it.
Thus, sparsity implicit limiting factor correction is performed on the text understanding image, expressed as:
m'_{i,j} = m_{i,j} · log₂(m_{i,j}/m̄) + (1 − m_{i,j}) · log₂((1 − m_{i,j})/(1 − m̄))
wherein m_{i,j} is the feature value of the text understanding image and m̄ is the average of all feature values of the text understanding image.
Here, the sparsity implicit limiting factor correction applies a sparsity constraint to the implicit expression of the features through a KL-divergence-like form, sparsely limiting the parameter space of the model so as to raise the average activation of the units that infer the expected features during training, thereby improving the swarm optimization capability of the model and the ability of the text understanding image to express global text semantics.
Based on the above, the application provides a social platform data mining method based on graphic data collaboration, which comprises the following steps: acquiring user social data, wherein the user social data comprises image data and text data; after word segmentation is carried out on text data in the social data of the user, each word in the text data is converted into a word embedding vector through a word embedding layer so as to obtain a sequence of the word embedding vector; interpolation is carried out on each word embedding vector in the sequence of word embedding vectors so as to obtain a sequence of word embedding enhancement vectors; the word embedding enhancement vector sequence passes through a semantic encoder to obtain a text semantic understanding feature vector; the text semantic understanding feature vector passes through a text understanding model to obtain a text understanding image; performing sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image; combining the optimized text understanding image with the image data in the user social data to obtain multi-channel image data; passing the multi-channel image data through a first convolutional neural network using an efficient attention mechanism to obtain a classification feature map; and passing the classification feature map through a multi-label classifier to obtain a classification result, wherein the classification result is an emotion type label to which the user social data belong.
Fig. 1 illustrates an application scenario diagram of a social platform data mining method based on graph-text data collaboration according to an embodiment of the application. As shown in fig. 1, in this application scenario, user social data (e.g., M as illustrated in fig. 1) published by a user on a social platform, including text data (e.g., a as illustrated in fig. 1) and image data (e.g., B as illustrated in fig. 1), is first crawled by crawler software; inputting the text data and the image data into a server (for example, S as illustrated in fig. 1) deployed with a graph-text data-based collaborative algorithm, wherein the server processes the text data and the image data with the graph-text data-based collaborative algorithm to output and obtain a classification result, wherein the classification result is an emotion type label to which the user social data belongs.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary method
FIG. 2 illustrates a flow chart of a social platform data mining method based on graph-text data collaboration in accordance with an embodiment of the present application. As shown in fig. 2, a social platform data mining method based on graphic data collaboration according to an embodiment of the present application includes: s110, acquiring user social data, wherein the user social data comprises image data and text data; s120, after word segmentation is carried out on text data in the social data of the user, each word in the text data is converted into a word embedding vector through a word embedding layer so as to obtain a sequence of the word embedding vector; s130, interpolating each word embedding vector in the sequence of word embedding vectors to obtain a sequence of word embedding enhancement vectors; s140, embedding the word into the sequence of the enhancement vectors through a semantic encoder to obtain text semantic understanding feature vectors; s150, passing the text semantic understanding feature vector through a text understanding model to obtain a text understanding image; s160, sparse implicit limiting factor correction is carried out on the text understanding image so as to obtain an optimized text understanding image; s170, merging the optimized text understanding image with the image data in the user social data to obtain multi-channel image data; s180, the multi-channel image data is passed through a first convolution neural network using an efficient attention mechanism to obtain a classification characteristic diagram; and S190, passing the classification feature map through a multi-label classifier to obtain a classification result, wherein the classification result is an emotion type label to which the user social data belongs.
Fig. 3 illustrates an architecture diagram of a social platform data mining method based on graph-text data collaboration according to an embodiment of the application. As shown in fig. 3, in the network architecture of the social platform data mining method based on graphic data collaboration, first, user social data is obtained, wherein the user social data includes image data and text data; then, after word segmentation is carried out on text data in the social data of the user, each word in the text data is converted into a word embedding vector through a word embedding layer so as to obtain a sequence of the word embedding vector; then, interpolating each word embedding vector in the sequence of word embedding vectors to obtain a sequence of word embedding enhancement vectors; then, the word embedding enhancement vector sequence passes through a semantic encoder to obtain text semantic understanding feature vectors; then, the text semantic understanding feature vector passes through a text understanding model to obtain a text understanding image; then, sparsity implicit limiting factor correction is performed on the text understanding image to obtain an optimized text understanding image; then, merging the optimized text understanding image with the image data in the user social data to obtain multi-channel image data; then, the multi-channel image data is passed through a first convolution neural network using an efficient attention mechanism to obtain a classification characteristic map; and then, the classification feature map is passed through a multi-label classifier to obtain a classification result, wherein the classification result is an emotion type label to which the user social data belongs.
In step S110, user social data including image data and text data is acquired. For example, user social data published by a user at a social platform may be crawled by crawler software to obtain image data and text data in the user social data simultaneously. As described above, in the embodiment of the present application, considering that the image data and the text data in the data published by the social platform often have a correlation with each other, in the present application, the image data and the text data in the social data of the user are combined to construct the emotion pattern recognition scheme based on the data mining of the social platform, so that the accuracy of emotion pattern recognition based on the data mining of the social platform is higher.
In step S120, after word segmentation is performed on the text data in the social data of the user, each word in the text data is converted into a word embedding vector through a word embedding layer, so as to obtain a sequence of word embedding vectors. That is, in one specific example, first, the text data is word-segmented to obtain a plurality of words; next, each word of the plurality of words is input into the word embedding layer to convert each word into a word embedding vector to obtain a sequence of word embedding vectors. It will be appreciated that by constructing the current text data in the form of vectors, subsequent computer processing is facilitated.
In particular, the word segmentation (Word Segmentation) refers to a process of segmenting a Chinese character sequence into individual words, that is, recombining consecutive word sequences into word sequences according to a certain specification. Specifically, in a specific example of the present application, an understanding-based word segmentation method may be selected, where the understanding-based word segmentation method achieves the effect of recognizing words by letting a computer simulate understanding of sentences by a person. In another specific example of the present application, a word segmentation method based on statistics may also be selected, where the word segmentation method based on statistics is to learn a word segmentation rule (called training) by using a statistical machine learning model on the premise of giving a large number of segmented texts, so as to achieve segmentation of unknown texts.
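As an illustration of dictionary-based segmentation, a forward maximum-matching segmenter can be sketched as below. This is a simple stand-in, not the understanding-based or statistical segmenter the method describes; a production system would use a trained segmenter.

```python
def max_match_segment(text, vocab, max_len=4):
    """Forward maximum-matching word segmentation sketch: at each
    position, greedily take the longest dictionary word (up to max_len
    characters), falling back to a single character."""
    words, i = [], 0
    while i < len(text):
        for size in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + size]
            if size == 1 or candidate in vocab:
                words.append(candidate)
                i += size
                break
    return words
```

For example, with a vocabulary containing 社交, 平台 and 数据, the string 社交平台数据 is recombined into those three words.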
In step S130, each word embedding vector in the sequence of word embedding vectors is interpolated to obtain a sequence of word embedding enhancement vectors. The present application considers the problem that text data in user social data is often too terse, that is, the text data lacks context information, so that when semantic information is extracted by a semantic encoder, the semantics of the text data cannot be accurately understood, which affects the accuracy of the final emotion topic analysis. Therefore, in the technical scheme of the present application, after the text data in the user social data is converted into word embedding vectors, data augmentation is performed on the word embedding vector of each word by interpolation. It should be understood that this data processing manner can add context information to the text data in the user social data so as to improve the semantic understanding accuracy of the text data.
It can be understood that the greater the amount of text data, the higher the semantic understanding accuracy of the text data. In the present application, interpolation is adopted: the hidden states of two sentences are interpolated to generate a new sentence that contains the meanings of both original sentences, so that the word embedding vector of each word is augmented and context information is added to the text data in the user social data to improve the semantic understanding accuracy of the text data.
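The interpolation-based augmentation can be sketched as a convex combination of two sentence hidden states; the mixing coefficient and the toy states are illustrative:

```python
import numpy as np

def interpolate_states(h_a, h_b, lam=0.5):
    """Linear interpolation of two sentence hidden states; the new state
    mixes the semantics of both, augmenting the scarce original text."""
    return lam * np.asarray(h_a) + (1.0 - lam) * np.asarray(h_b)

h1 = np.array([1.0, 0.0, 2.0])   # hidden state of sentence A
h2 = np.array([0.0, 2.0, 0.0])   # hidden state of sentence B
h_new = interpolate_states(h1, h2, lam=0.5)
```

Sampling different values of `lam` yields a family of augmented states between the two originals.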
In step S140, the sequence of word embedding enhancement vectors is passed through a semantic encoder to obtain a text semantic understanding feature vector. In the embodiment of the present application, the semantic encoder can perform global-based semantic encoding on the sequence of word embedding enhancement vectors to obtain the text semantic understanding feature vector, so that the obtained text semantic understanding feature vector carries global text association information. In particular, in the technical scheme of the present application, the semantic encoder may be implemented as a converter (Transformer)-based context encoder, a bidirectional long short-term memory model, a long short-term memory model, an RNN neural network, and the like.
Specifically, in an example of the present application, the semantic encoder is a converter-based context encoder, and passing the sequence of word embedding enhancement vectors through the semantic encoder to obtain the text semantic understanding feature vector includes: first, performing global-based context semantic coding on the sequence of word embedding enhancement vectors using the converter-based context encoder to obtain a plurality of semantic feature vectors; and then cascading the plurality of semantic feature vectors to obtain the text semantic understanding feature vector.
Wherein the context encoder performs global-based context semantic encoding on the sequence of word embedding enhancement vectors using a Transformer-based Bert model. Specifically, based on the intrinsic mask structure of the Transformer, the Bert model performs global context encoding on each word embedding enhancement vector in the sequence, with the whole sequence of word embedding enhancement vectors as the semantic context, to obtain the plurality of semantic feature vectors. Each of the plurality of semantic feature vectors corresponds to one of the word embedding enhancement vectors.
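The global context encoding rests on self-attention. The sketch below shows only the scaled dot-product core in numpy, without the learned query/key/value projections, multiple heads or masking of a real Bert model:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence X of shape
    (seq_len, dim): every position attends to the whole sequence, which is
    what lets each output vector carry global context."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                       # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ X                                  # context-mixed vectors

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 8))        # 5 word embedding enhancement vectors
context_vectors = self_attention(X)    # one semantic feature vector per word
```

Each output row mixes information from all five input positions, which is the "whole sequence as semantic context" behaviour described above.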
Specifically, in another example of the present application, the semantic encoder is a bidirectional long short-term memory neural network model, and the sequence of word embedding enhancement vectors is input into the bidirectional long short-term memory neural network model to obtain the text semantic understanding feature vector. Each of the plurality of semantic feature vectors making up the text semantic understanding feature vector corresponds to a respective word embedding enhancement vector.
Those skilled in the art will appreciate that Long Short-Term Memory (LSTM) is a recurrent neural network. By adding an input gate, an output gate and a forgetting gate, the weights of the network update themselves, and the weight scale at different moments can change dynamically even with fixed model parameters, so the problems of gradient vanishing and gradient explosion can be avoided. The bidirectional long short-term memory model combines a forward LSTM and a backward LSTM: the forward LSTM learns the information preceding the current word and the backward LSTM learns the information following the current word, so that the semantic feature vector obtained through the bidirectional long short-term memory model learns the context information of the text.
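A single LSTM step with its three gates can be sketched as follows; the weights are random stand-ins, and a bidirectional model would run one such cell left-to-right, another right-to-left, and concatenate the hidden states:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: input, forget and output gates decide how much of the
    new candidate and the old cell state to keep (dim = hidden size)."""
    dim = h_prev.shape[0]
    z = W @ x + U @ h_prev + b            # stacked pre-activations, (4*dim,)
    i = sigmoid(z[0:dim])                 # input gate
    f = sigmoid(z[dim:2 * dim])           # forget gate
    o = sigmoid(z[2 * dim:3 * dim])       # output gate
    g = np.tanh(z[3 * dim:4 * dim])       # candidate cell state
    c = f * c_prev + i * g                # updated cell state
    h = o * np.tanh(c)                    # updated hidden state
    return h, c

rng = np.random.default_rng(2)
dim, xdim = 4, 3
W = rng.standard_normal((4 * dim, xdim))
U = rng.standard_normal((4 * dim, dim))
b = np.zeros(4 * dim)
h, c = lstm_step(rng.standard_normal(xdim), np.zeros(dim), np.zeros(dim), W, U, b)
```

Iterating this step over the sequence in both directions and concatenating the forward and backward `h` at each position gives the bidirectional encoding described above.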
In step S150, the text semantic understanding feature vector is passed through a text understanding model to obtain a text understanding image. In particular, in embodiments of the present application, it is considered that the picture data and the text data published by a user on the social platform are often related; for example, the text data may describe objects in the image data, or the text data may express the emotion conveyed by the image data. Therefore, in the technical scheme of the present application, the text semantic understanding feature vector is passed through a text understanding model to obtain a text understanding image.
That is, after the text data in the user social data is segmented, each word in the text data is converted into a word embedding vector through the word embedding layer to obtain a sequence of word embedding vectors; each word embedding vector in the sequence is then interpolated to obtain a sequence of word embedding enhancement vectors, and the sequence of word embedding enhancement vectors is passed through the semantic encoder to obtain the text semantic understanding feature vector from which the text understanding image is generated.
Specifically, one or more text semantic understanding feature vectors are acquired first, and two-dimensional stitching is performed on the text semantic understanding feature vectors to obtain a text semantic understanding feature matrix. The text understanding image is then generated by a text understanding model, such as an adversarial generator model. Notably, the image generated by the adversarial generator model can reconstruct finer detail, that is, it improves global consistency in regions that are challenging for convolutions, such as larger uniform regions.
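The two-dimensional stitching amounts to stacking the feature vectors row-wise into a matrix; the vector count and dimension below are arbitrary stand-ins:

```python
import numpy as np

# Hypothetical text semantic understanding feature vectors (here 4 vectors
# of dimension 16); two-dimensional stitching simply stacks them row-wise
# into the text semantic understanding feature matrix.
rng = np.random.default_rng(3)
feature_vectors = [rng.standard_normal(16) for _ in range(4)]
feature_matrix = np.stack(feature_vectors, axis=0)   # shape (4, 16)
```

The resulting matrix is what the generator model consumes to produce the text understanding image.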
In step S160, sparsity implicit limiting factor correction is performed on the text understanding image to obtain an optimized text understanding image. In particular, in the technical scheme of the present application, because the source data of the generative model that produces the text understanding image is augmented by data interpolation, which is inexpensive but intrinsically focused on local semantics, it is desirable to further optimize the ability of the text understanding image to express the global text semantics corresponding to it.
Thus, sparsity implicit limiting factor correction is performed on the text understanding image, expressed as:

m'_{i,j} = m_{i,j}·log₂(m_{i,j}/μ) + (1 - m_{i,j})·log₂((1 - m_{i,j})/(1 - μ))

wherein m_{i,j} is a feature value of the text understanding image, μ is the average of all feature values of the text understanding image, and m'_{i,j} is the corresponding feature value of the optimized text understanding image.
FIG. 4 illustrates a flow chart of performing sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image in the social platform data mining method based on graph-text data collaboration according to an embodiment of the application. As shown in fig. 4, performing sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image comprises: S210, calculating the difference between one and each feature value of the text understanding image to obtain a first difference value; S220, calculating the difference between one and the average of all feature values of the text understanding image to obtain a second difference value; S230, dividing each feature value of the text understanding image by the average of all feature values of the text understanding image and taking the base-two logarithm to obtain a first logarithmic value; S240, dividing the first difference value by the second difference value and taking the base-two logarithm to obtain a second logarithmic value; and S250, calculating the sum of the feature value of the text understanding image multiplied by the first logarithmic value and the first difference value multiplied by the second logarithmic value to obtain the feature value of the optimized text understanding image.
Here, the sparsity implicit limiting factor correction applies a sparsity constraint on the implicit expression of the feature through a KL-divergence-like form, so as to sparsely limit the parameter space of the model. This improves the average activation of the units of the model parameters that infer the desired characteristics during training, thereby improving the swarm optimization capability of the model and the ability of the text understanding image to express global text semantics.
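A sketch of the correction under the assumption that the feature values have been normalised into (0, 1) (the formula follows the step description of the claims; the clipping constant is an implementation detail added here to keep the logarithms finite):

```python
import numpy as np

def sparsity_correction(M, eps=1e-6):
    """KL-divergence-style sparsity correction on the text understanding
    image: m' = m*log2(m/mu) + (1-m)*log2((1-m)/(1-mu)), where mu is the
    mean of all feature values. Assumes values normalised into (0, 1)."""
    M = np.clip(M, eps, 1.0 - eps)   # keep both logarithms finite
    mu = M.mean()
    return M * np.log2(M / mu) + (1.0 - M) * np.log2((1.0 - M) / (1.0 - mu))

M = np.array([[0.2, 0.8],
              [0.4, 0.6]])           # toy text understanding image
M_opt = sparsity_correction(M)
```

Each corrected value is a per-element KL-like divergence between the feature value and the image mean, so values far from the mean are emphasised while typical values are suppressed toward zero.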
In step S170, the optimized text understanding image is combined with the image data in the user social data to obtain multi-channel image data. As described above, in the technical scheme of the present application, considering that the image data and the text data published on the social platform are often correlated with each other, the text understanding image is combined with the image data in the user social data to obtain multi-channel image data; that is, the understanding image obtained based on text understanding is associated at the data level with the image data in the user social data, and the classification feature map is then obtained through a convolutional neural network model, which performs excellently in the field of image feature extraction.
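The data-level association reduces to concatenation along the channel axis; the channel counts and spatial size below are assumptions:

```python
import numpy as np

# Hypothetical sizes: a 3-channel 32x32 social image plus a single-channel
# 32x32 text understanding image, concatenated along the channel axis.
rgb_image = np.zeros((3, 32, 32))
text_understanding_image = np.ones((1, 32, 32))
multi_channel = np.concatenate([rgb_image, text_understanding_image], axis=0)
```

The resulting four-channel array is the multi-channel image data fed to the first convolutional neural network.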
In step S180, the multi-channel image data is passed through a first convolutional neural network using an efficient attention mechanism to obtain a classification feature map. In an embodiment of the present application, the multi-channel image data is encoded using a first convolutional neural network to obtain a classification feature map. Because the multi-channel image is constructed in the application, in order to obtain the dependency relationship between the images of different channels, in the technical scheme of the application, the first convolutional neural network further integrates an efficient attention mechanism to obtain the implicit association characteristic between the images of different channels.
In order to enhance the expression capability of the obtained classification feature map, in the embodiment of the present application, a high-efficiency attention mechanism is integrated in the first convolutional neural network, so that in the process of encoding the multi-channel image data through the first convolutional neural network, the dependency relationship between images of different channels is more focused, so as to obtain the classification feature map with higher accuracy.
It should be understood that the efficient attention mechanism is an important method for improving target detection performance; it resolves the side effect that the dimensionality reduction in the traditional attention mechanism has on subsequent prediction, and aims to capture the dependency relationship among channels and enhance the expression capability of the features. The efficient attention module works as follows: after the input feature map χ undergoes global average pooling of all channels without dimensionality reduction, the module learns through a weight-shared one-dimensional convolution, taking each channel and its k neighbors into account to capture cross-channel interaction during learning. Here k denotes the kernel size of the one-dimensional convolution; its value is determined adaptively from the proportional relation between the coverage of cross-channel information interaction (that is, the kernel size k of the one-dimensional convolution) and the channel dimension C, with γ=2 and b=1.
The formula is:

k = ψ(C) = | log₂(C)/γ + b/γ |_odd

wherein |·|_odd denotes taking the nearest odd number, γ=2, b=1, and C is the channel dimension.
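A sketch of the adaptive kernel-size rule, following the ECA-Net formulation the description refers to:

```python
import math

def eca_kernel_size(C, gamma=2, b=1):
    """Adaptive 1-D convolution kernel size of the efficient channel
    attention module: k = |log2(C)/gamma + b/gamma| rounded to the
    nearest odd number (as in ECA-Net)."""
    t = int(abs(math.log2(C) / gamma + b / gamma))
    return t if t % 2 == 1 else t + 1

k64 = eca_kernel_size(64)     # channel dimension 64
k256 = eca_kernel_size(256)   # channel dimension 256
```

Larger channel dimensions thus receive a wider cross-channel interaction range, without any extra learned parameters for choosing k.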
FIG. 5 illustrates a flow chart of passing the multi-channel image data through a first convolutional neural network using an efficient attention mechanism to obtain the classification feature map in the social platform data mining method based on graph-text data collaboration according to an embodiment of the present application. As shown in fig. 5, passing the multi-channel image data through a first convolutional neural network using an efficient attention mechanism to obtain the classification feature map comprises: S310, passing the multi-channel image data through multiple convolution layers of the first convolutional neural network so that a high-dimensional feature map is output by the last of the convolution layers; S320, performing global mean pooling on each feature matrix of the high-dimensional feature map to obtain a channel feature vector; S330, performing one-dimensional convolution encoding on the channel feature vector to obtain an inter-channel association feature vector; S340, inputting the inter-channel association feature vector into a Sigmoid activation function to obtain a probabilistic inter-channel association feature vector; and S350, weighting each feature matrix of the high-dimensional feature map with the feature value of each position in the probabilistic inter-channel association feature vector as a weight to obtain the classification feature map.
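Steps S320 to S350 can be sketched in numpy as below; the one-dimensional convolution kernel is fixed to a uniform averaging kernel purely for illustration, whereas in the network it is learned:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feature_map, k=3):
    """S320-S350 in numpy: global mean pooling per channel, a 1-D
    convolution across neighbouring channels, sigmoid, then reweighting
    each channel's feature matrix. The kernel here is a fixed 1/k
    averaging kernel for illustration; in the network it is learned."""
    channel_vec = feature_map.mean(axis=(1, 2))             # S320: (C,)
    kernel = np.full(k, 1.0 / k)
    assoc = np.convolve(channel_vec, kernel, mode="same")   # S330: (C,)
    weights = sigmoid(assoc)                                # S340: (C,)
    return feature_map * weights[:, None, None]             # S350

rng = np.random.default_rng(4)
fmap = rng.standard_normal((8, 6, 6))    # high-dimensional feature map, C=8
weighted = channel_attention(fmap)       # classification feature map
```

Because only a length-k 1-D kernel is involved, the attention adds almost no parameters while still modelling local cross-channel dependencies.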
It will be appreciated that each neuron node in a neural network receives the output values of the neurons of the previous layer as the input values of the current layer and passes them on to the next layer; the input-layer neuron nodes pass the input attribute values directly to the next layer (a hidden layer or the output layer). In a multi-layer neural network, there is a functional relationship between the output of an upper node and the input of a lower node, and this function is called an activation function (also called an excitation function). Sigmoid is a commonly used nonlinear activation function that maps the whole set of real numbers onto the (0, 1) interval, normalizing the data in a nonlinear way.
In particular, passing the multi-channel image data through a plurality of convolutional layers of the first convolutional neural network to output a high-dimensional feature map by a last convolutional layer of the plurality of convolutional layers, comprising: each convolution layer in the multi-layer convolution layers of the first convolution neural network respectively carries out convolution processing, pooling processing and nonlinear activation processing on input data in a forward transmission process so as to output the high-dimensional feature map by the last convolution layer in the multi-layer convolution layers.
In step S190, the classification feature map is passed through a multi-label classifier to obtain a classification result, where the classification result is the emotion category label to which the user social data belongs. For example, the emotion category labels include, but are not limited to: like, dislike, touched, indifferent, and so on.
In an embodiment of the present application, the classifying feature map is processed by using the multi-label classifier according to the following formula to obtain the classifying result;
wherein the formula is: softmax{(W_n, B_n) : … : (W_1, B_1) | Project(F)}, where Project(F) denotes projecting the classification feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n are the bias matrices of the fully connected layers of each layer.
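A sketch of the classifier: the feature map is flattened (Project(F)), passed through a chain of fully connected layers (W_i, B_i), and normalised by softmax; all sizes and weights below are illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(feature_map, layers):
    """Project(F) flattens the classification feature map into a vector,
    the fully connected layers (W_i, B_i) are applied in turn, and softmax
    turns the final scores into emotion-label probabilities."""
    v = feature_map.reshape(-1)          # Project(F)
    for W, B in layers:
        v = W @ v + B                    # one fully connected layer
    return softmax(v)

rng = np.random.default_rng(5)
fmap = rng.standard_normal((2, 4, 4))    # toy classification feature map
layers = [(rng.standard_normal((8, 32)), np.zeros(8)),
          (rng.standard_normal((4, 8)), np.zeros(4))]   # 4 emotion labels
probs = classify(fmap, layers)
```

The index of the largest probability selects the predicted emotion category label.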
In summary, the social platform data mining method based on graph-text data collaboration according to an embodiment of the present application has been illustrated. It combines the image data and the text data in the user social data to construct an emotion pattern recognition scheme based on social platform data mining. Specifically, the text data and the image data in the user social data published on the social platform are first crawled by crawler software; then feature extraction is performed on the text data through the semantic encoder and the text understanding model to obtain a text understanding image; the text understanding image and the image data are associated at the data level and the classification feature map is obtained through a convolutional neural network model with excellent performance; finally, the classification feature map is passed through a multi-label classifier to obtain the classification result. In this way, the recognition results of the text data and the image data can be obtained accurately, and the accuracy of emotion pattern recognition based on social platform data mining is improved.
Exemplary System
FIG. 6 illustrates a block diagram of a social platform data mining system based on teletext data collaboration, according to an embodiment of the application.
As shown in fig. 6, a social platform data mining system 100 based on graph-text data collaboration according to an embodiment of the present application includes: a social data obtaining module 110, configured to obtain user social data, where the user social data includes image data and text data; a text data embedding module 120, configured to segment the text data in the user social data and then convert each word in the text data into a word embedding vector through a word embedding layer to obtain a sequence of word embedding vectors; an interpolation module 130, configured to interpolate each word embedding vector in the sequence of word embedding vectors to obtain a sequence of word embedding enhancement vectors; a semantic coding module 140, configured to pass the sequence of word embedding enhancement vectors through a semantic encoder to obtain a text semantic understanding feature vector; a text understanding module 150, configured to pass the text semantic understanding feature vector through a text understanding model to obtain a text understanding image; a text understanding optimization module 160, configured to perform sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image; a multi-channel synthesis module 170, configured to combine the optimized text understanding image with the image data in the user social data to obtain multi-channel image data; a convolutional encoding module 180, configured to pass the multi-channel image data through a first convolutional neural network using an efficient attention mechanism to obtain a classification feature map; and an emotion tag generation module 190, configured to pass the classification feature map through a multi-label classifier to obtain a classification result, where the classification result is the emotion category label to which the user social data belongs.
In one example, in the social platform data mining system 100 based on the above-described teletext data collaboration, the semantic encoder is a context encoder based on a converter; the semantic coding module 140 includes: a context semantic coding unit, configured to perform global-based context semantic coding on the sequence of word embedded enhancement vectors using the context encoder based on the converter to obtain a plurality of semantic feature vectors; and the cascading unit is used for cascading the semantic feature vectors to obtain the text semantic understanding feature vector.
In one example, in the social platform data mining system 100 based on graph-text data collaboration described above, the semantic encoder is a bidirectional long short-term memory neural network model.
In one example, in the social platform data mining system 100 based on the above graph-text data collaboration, the text understanding optimization module 160 includes: a first difference calculating unit for calculating the difference between one and each feature value of the text understanding image to obtain a first difference; a second difference calculating unit for calculating the difference between one and the average of all feature values of the text understanding image to obtain a second difference; a first logarithmic value calculating unit for dividing each feature value of the text understanding image by the average of all feature values of the text understanding image and taking the base-two logarithm to obtain a first logarithmic value; a second logarithmic value calculating unit for dividing the first difference by the second difference and taking the base-two logarithm to obtain a second logarithmic value; and an addition value calculating unit for calculating the sum of the feature value of the text understanding image multiplied by the first logarithmic value and the first difference multiplied by the second logarithmic value to obtain the feature value of the optimized text understanding image.
In one example, in the social platform data mining system 100 based on the above-described teletext data collaboration, the text understanding optimization module 160 is further configured to perform sparse implicit limiting factor correction on the text understanding image to obtain the optimized text understanding image according to the following formula; the formula is:
m'_{i,j} = m_{i,j}·log₂(m_{i,j}/μ) + (1 - m_{i,j})·log₂((1 - m_{i,j})/(1 - μ))

wherein m_{i,j} denotes the feature value at position (i, j) of the two-dimensional matrix M of the text understanding image, μ denotes the average of all feature values of M, and m'_{i,j} denotes the corresponding feature value of the two-dimensional matrix M' of the optimized text understanding image.
In one example, in the social platform data mining system 100 based on the above graph-text data collaboration, the convolutional encoding module 180 includes: a convolution layer unit configured to pass the multi-channel image data through the multiple convolution layers of the first convolutional neural network so that a high-dimensional feature map is output by the last of the convolution layers; a pooling unit for performing global average pooling on each feature matrix of the high-dimensional feature map to obtain a channel feature vector; a one-dimensional convolution encoding unit for performing one-dimensional convolution encoding on the channel feature vector to obtain an inter-channel association feature vector; an activating unit for inputting the inter-channel association feature vector into a Sigmoid activation function to obtain a probabilistic inter-channel association feature vector; and a weighting unit for weighting each feature matrix of the high-dimensional feature map with the feature value of each position in the probabilistic inter-channel association feature vector as a weight to obtain the classification feature map.
In one example, in the social platform data mining system 100 based on the above graph-text data collaboration, the convolution layer unit is configured so that each of the multiple convolution layers of the first convolutional neural network performs convolution processing, pooling processing and nonlinear activation processing on the input data in the forward pass, so that the high-dimensional feature map is output by the last of the multiple convolution layers.
In one example, in the social platform data mining system 100 based on the collaborative teletext, the emotion tag generation module 190 includes: processing the classification feature map using the multi-label classifier in the following formula to obtain the classification result; wherein, the formula is:
softmax{(W_n, B_n) : … : (W_1, B_1) | Project(F)}, where Project(F) denotes projecting the classification feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n are the bias matrices of the fully connected layers of each layer.
Here, it will be appreciated by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described collaborative based social platform data mining system 100 have been described in detail in the above description of the collaborative based on teletext data mining method with reference to fig. 1 to 5, and thus, repetitive descriptions thereof will be omitted.
As described above, the social platform data mining system 100 based on the graph data collaboration according to the embodiment of the present application may be implemented in various terminal devices, for example, a server or the like for a social platform based on the graph data collaboration. In one example, the social platform data mining system 100 based on the collaborative teletext data according to an embodiment of the application may be integrated into a terminal device as one software module and/or hardware module. For example, the social platform data mining system 100 based on the collaborative teletext data may be a software module in the operating system of the terminal device or may be an application developed for the terminal device; of course, the social platform data mining system 100 based on the collaborative teletext data may also be one of a plurality of hardware modules of the terminal device.
Alternatively, in another example, the social platform data mining system 100 and the terminal device may be separate devices, and the social platform data mining system 100 may be connected to the terminal device through a wired and/or wireless network and transmit interaction information according to an agreed data format.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, equipment and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, equipment and systems may be connected, arranged and configured in any manner. Words such as "including", "comprising", "having" and the like are open words meaning "including but not limited to" and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as but not limited to".
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent to the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. The social platform data mining method based on graphic data collaboration is characterized by comprising the following steps of:
Acquiring user social data, wherein the user social data comprises image data and text data;
after word segmentation is carried out on text data in the social data of the user, each word in the text data is converted into a word embedding vector through a word embedding layer so as to obtain a sequence of the word embedding vector;
interpolation is carried out on each word embedding vector in the sequence of word embedding vectors so as to obtain a sequence of word embedding enhancement vectors;
the word embedding enhancement vector sequence passes through a semantic encoder to obtain a text semantic understanding feature vector;
the text semantic understanding feature vector passes through a text understanding model to obtain a text understanding image;
performing sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image;
combining the optimized text understanding image with the image data in the user social data to obtain multi-channel image data;
passing the multi-channel image data through a first convolutional neural network using an efficient attention mechanism to obtain a classification feature map; and
and the classification feature map passes through a multi-label classifier to obtain a classification result, wherein the classification result is an emotion type label to which the user social data belong.
2. The social platform data mining method based on graph-text data collaboration according to claim 1, wherein the semantic encoder is a converter-based context encoder;
wherein passing the sequence of word embedding enhancement vectors through the semantic encoder to obtain a text semantic understanding feature vector comprises:
performing global-based context semantic coding on the sequence of word-embedded enhancement vectors using the converter-based context encoder to obtain a plurality of semantic feature vectors; and
and cascading the plurality of semantic feature vectors to obtain the text semantic understanding feature vector.
3. The social platform data mining method based on graph-text data collaboration according to claim 2, wherein the semantic encoder is a bidirectional long short-term memory neural network model.
4. The social platform data mining method based on graphic data collaboration according to claim 3, wherein performing sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image comprises:
calculating the difference between one and the feature value of the text understanding image to obtain a first difference value;
calculating the difference between one and the average of all feature values of the text understanding image to obtain a second difference value;
calculating the base-2 logarithm of the feature value of the text understanding image divided by the average of all feature values of the text understanding image to obtain a first logarithmic value;
calculating the base-2 logarithm of the first difference value divided by the second difference value to obtain a second logarithmic value; and
calculating the sum of the feature value of the text understanding image multiplied by the first logarithmic value and the first difference value multiplied by the second logarithmic value to obtain the feature value of the optimized text understanding image.
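Assuming "the first value" denotes the constant 1 (which makes these steps agree with the closed-form expression given in claim 5), the per-feature computation can be checked numerically. The sample values are arbitrary and assumed to lie in (0, 1) so that both logarithms are defined.

```python
# Numeric sketch of the claim-4 correction for a single feature value.
import math

m = 0.8        # feature value of the text understanding image (assumed in (0,1))
mean = 0.5     # average of all feature values (assumed in (0,1))

first_diff = 1.0 - m                      # first difference value
second_diff = 1.0 - mean                  # second difference value
first_log = math.log2(m / mean)           # base-2 log of value / average
second_log = math.log2(first_diff / second_diff)

optimized = m * first_log + first_diff * second_log
print(round(optimized, 4))
```

The expression has the shape of a per-element binary relative entropy between the feature value and the global average, which is consistent with the "sparsity limiting" reading: values close to the average are pushed toward zero, while outlying values are amplified.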
5. The social platform data mining method based on graphic data collaboration according to claim 3, wherein performing sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image comprises:
performing sparsity implicit limiting factor correction on the text understanding image with the following formula to obtain the optimized text understanding image:

$$m'_{i,j} = m_{i,j}\log_2\!\left(\frac{m_{i,j}}{\bar{m}}\right) + \left(1 - m_{i,j}\right)\log_2\!\left(\frac{1 - m_{i,j}}{1 - \bar{m}}\right)$$

wherein $m_{i,j}$ is the feature value of the text understanding image, $\bar{m}$ is the average of all feature values of the text understanding image, and $m'_{i,j}$ is the feature value of the optimized text understanding image.
6. The social platform data mining method based on graphic data collaboration according to claim 5, wherein passing the multi-channel image data through a first convolutional neural network using an efficient attention mechanism to obtain a classification feature map comprises:
passing the multi-channel image data through multiple convolutional layers of the first convolutional neural network to output a high-dimensional feature map from a last convolutional layer of the multiple convolutional layers;
performing global average pooling on each feature matrix of the high-dimensional feature map to obtain a channel feature vector;
performing one-dimensional convolutional encoding on the channel feature vector to obtain an inter-channel correlation feature vector;
inputting the inter-channel correlation feature vector into a Sigmoid activation function to obtain a probabilistic inter-channel correlation feature vector; and
weighting each feature matrix of the high-dimensional feature map by the feature value at the corresponding position of the probabilistic inter-channel correlation feature vector to obtain the classification feature map.
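The efficient-attention branch above is similar in spirit to efficient channel attention (ECA): pool each channel to a scalar, run a 1-D convolution across channels, squash with a sigmoid, and reweight. The kernel below is fixed and illustrative only; in the claimed network it would be learned.

```python
# Minimal sketch of the claim-6 channel-attention branch.
import numpy as np

fmap = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)  # C x H x W feature map

channel_vec = fmap.mean(axis=(1, 2))           # global average pooling -> (C,)
kernel = np.array([0.25, 0.5, 0.25])           # illustrative 1-D conv kernel
padded = np.pad(channel_vec, 1, mode="edge")
corr = np.convolve(padded, kernel, mode="valid")   # inter-channel correlation
weights = 1.0 / (1.0 + np.exp(-corr))          # Sigmoid -> probabilistic weights

attended = fmap * weights[:, None, None]       # weight each channel's matrix
print(attended.shape)                          # (2, 4, 4)
```

The 1-D convolution is what makes the mechanism "efficient": the number of attention parameters is the kernel size, independent of the channel count, unlike a fully connected squeeze-excitation bottleneck.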
7. The method of claim 6, wherein passing the multi-channel image data through the multiple convolutional layers of the first convolutional neural network to output a high-dimensional feature map from the last of the multiple convolutional layers comprises:
each convolutional layer of the first convolutional neural network performing convolution processing, pooling processing and nonlinear activation processing on its input data during forward propagation, so that the last of the multiple convolutional layers outputs the high-dimensional feature map.
8. The social platform data mining method based on graphic data collaboration according to claim 7, wherein passing the classification feature map through a multi-label classifier to obtain a classification result comprises:
processing the classification feature map with the multi-label classifier according to the following formula to obtain the classification result:

$$\mathrm{softmax}\{(W_n, B_n) : \cdots : (W_1, B_1) \mid \mathrm{Project}(F)\}$$

wherein $\mathrm{Project}(F)$ denotes projecting the classification feature map into a vector, $W_1$ to $W_n$ are the weight matrices of the fully connected layers of each layer, and $B_1$ to $B_n$ are the bias matrices of the fully connected layers of each layer.
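The classifier of claim 8 can be sketched as a flatten followed by a stack of fully connected layers and a softmax. All sizes and weights below are illustrative stand-ins, and nonlinearities between layers are omitted for brevity.

```python
# Sketch of the claim-8 classifier: Project(F), then (W_k, B_k) layers, then softmax.
import numpy as np

rng = np.random.default_rng(2)
fmap = rng.standard_normal((2, 3, 3))          # classification feature map F

x = fmap.reshape(-1)                           # Project(F): flatten to a vector
layers = [(rng.standard_normal((8, 18)), rng.standard_normal(8)),
          (rng.standard_normal((4, 8)), rng.standard_normal(4))]
for W, B in layers:                            # (W_1, B_1) ... (W_n, B_n)
    x = W @ x + B

probs = np.exp(x - x.max())
probs /= probs.sum()                           # softmax over emotion-type labels
print(probs.shape)                             # (4,)
```

Note that a softmax yields one mutually exclusive label distribution; a multi-label setup in the strict sense would typically use per-label sigmoids instead, so the softmax here follows the claim's formula rather than general practice.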
9. A social platform data mining system based on graphic data collaboration, characterized by comprising:
a social data acquisition module for acquiring user social data, wherein the user social data comprises image data and text data;
a text data embedding module for performing word segmentation on the text data in the user social data and then converting each word in the text data into a word embedding vector through a word embedding layer to obtain a sequence of word embedding vectors;
an interpolation module for interpolating each word embedding vector in the sequence of word embedding vectors to obtain a sequence of word embedding enhancement vectors;
a semantic encoding module for passing the sequence of word embedding enhancement vectors through a semantic encoder to obtain a text semantic understanding feature vector;
a text understanding module for passing the text semantic understanding feature vector through a text understanding model to obtain a text understanding image;
a text understanding optimization module for performing sparsity implicit limiting factor correction on the text understanding image to obtain an optimized text understanding image;
a multi-channel synthesis module for merging the optimized text understanding image with the image data in the user social data to obtain multi-channel image data;
a convolutional encoding module for passing the multi-channel image data through a first convolutional neural network using an efficient attention mechanism to obtain a classification feature map; and
an emotion label generation module for passing the classification feature map through a multi-label classifier to obtain a classification result, wherein the classification result is an emotion type label to which the user social data belongs.
10. The social platform data mining system based on graphic data collaboration according to claim 9, wherein the text understanding optimization module is further configured to: perform sparsity implicit limiting factor correction on the text understanding image with the following formula to obtain the optimized text understanding image:

$$m'_{i,j} = m_{i,j}\log_2\!\left(\frac{m_{i,j}}{\bar{m}}\right) + \left(1 - m_{i,j}\right)\log_2\!\left(\frac{1 - m_{i,j}}{1 - \bar{m}}\right)$$

wherein $m_{i,j}$ is the feature value of the text understanding image, $\bar{m}$ is the average of all feature values of the text understanding image, and $m'_{i,j}$ is the feature value of the optimized text understanding image.
CN202211440379.9A 2022-11-17 2022-11-17 Social platform data mining method and system for graphic data collaboration Pending CN116030296A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211440379.9A CN116030296A (en) 2022-11-17 2022-11-17 Social platform data mining method and system for graphic data collaboration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211440379.9A CN116030296A (en) 2022-11-17 2022-11-17 Social platform data mining method and system for graphic data collaboration

Publications (1)

Publication Number Publication Date
CN116030296A true CN116030296A (en) 2023-04-28

Family

ID=86076672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211440379.9A Pending CN116030296A (en) 2022-11-17 2022-11-17 Social platform data mining method and system for graphic data collaboration

Country Status (1)

Country Link
CN (1) CN116030296A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116777572A (en) * 2023-08-25 2023-09-19 乐麦信息技术(杭州)有限公司 Electronic commerce transaction management system and method based on big data
CN117114627A (en) * 2023-10-18 2023-11-24 日照市自然资源和规划局 land resource management system
CN118035456A (en) * 2024-04-11 2024-05-14 江西微博科技有限公司 Electronic material data sharing management system based on big data


Similar Documents

Publication Publication Date Title
CN108875807B (en) Image description method based on multiple attention and multiple scales
CN109086658B (en) Sensor data generation method and system based on generation countermeasure network
CN116030296A (en) Social platform data mining method and system for graphic data collaboration
Zhi et al. Action unit analysis enhanced facial expression recognition by deep neural network evolution
CN112818861A (en) Emotion classification method and system based on multi-mode context semantic features
CN111738169A (en) Handwriting formula recognition method based on end-to-end network model
CN113159023A (en) Scene text recognition method based on explicit supervision mechanism
CN114973222B (en) Scene text recognition method based on explicit supervision attention mechanism
CN116095089B (en) Remote sensing satellite data processing method and system
CN111814453A (en) Fine-grained emotion analysis method based on BiLSTM-TextCNN
CN109766918A (en) Conspicuousness object detecting method based on the fusion of multi-level contextual information
CN111767697A (en) Text processing method and device, computer equipment and storage medium
CN116187349A (en) Visual question-answering method based on scene graph relation information enhancement
Long et al. Trainable subspaces for low rank tensor completion: Model and analysis
CN114780767A (en) Large-scale image retrieval method and system based on deep convolutional neural network
CN113240033B (en) Visual relation detection method and device based on scene graph high-order semantic structure
Li et al. Image decomposition with multilabel context: Algorithms and applications
CN111445545B (en) Text transfer mapping method and device, storage medium and electronic equipment
Peng et al. Recognizing micro-expression in video clip with adaptive key-frame mining
CN112560440A (en) Deep learning-based syntax dependence method for aspect-level emotion analysis
CN111339734A (en) Method for generating image based on text
Wei et al. Spatiotemporal features and local relationship learning for facial action unit intensity regression
CN116129251A (en) Intelligent manufacturing method and system for office desk and chair
CN113449517B (en) Entity relationship extraction method based on BERT gated multi-window attention network model
CN114782995A (en) Human interaction behavior detection method based on self-attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination