CN117331460A - Digital exhibition hall content optimization method and device based on multidimensional interaction data analysis

Info

Publication number: CN117331460A
Application number: CN202311259824.6A
Authority: CN (China)
Prior art keywords: information, data, content, text, exhibition hall
Legal status: Pending (assumed, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 刘曦 (Liu Xi), 王磊 (Wang Lei), 谢文 (Xie Wen)
Current Assignee: Wuhan Northern Lights Digital Technology Co., Ltd.
Original Assignee: Wuhan Northern Lights Digital Technology Co., Ltd.
Application filed by: Wuhan Northern Lights Digital Technology Co., Ltd.
Priority application: CN202311259824.6A
Publication: CN117331460A

Classifications

    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI], based on specific properties of the displayed interaction object or a metaphor-based environment
    • G06F 40/30: Handling natural language data; semantic analysis
    • G06N 3/044: Neural networks; recurrent networks, e.g. Hopfield networks
    • G06V 10/454: Local feature extraction; integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/52: Scale-space analysis, e.g. wavelet analysis
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06V 20/35: Categorising the entire scene, e.g. birthday party or wedding scene

Abstract

The invention provides a digital exhibition hall content optimization method and device based on multidimensional interaction data analysis, relating to the technical field of digital exhibition halls. The method comprises: obtaining first information and second information, wherein the first information comprises digital exhibition hall content data; performing data integration processing according to the first information and the second information, and performing data association on the integrated data to obtain third information; extracting text and image information according to the third information to obtain fourth information; performing emotion analysis processing according to the fourth information to obtain fifth information; performing model construction according to the second information and a preset deep learning mathematical model to obtain sixth information; and constructing a content optimization mathematical model based on the sixth information, and taking the fifth information as an input value of the content optimization mathematical model to obtain seventh information, wherein the seventh information comprises optimized digital exhibition hall content. Through emotion analysis, topic modeling, deep learning and other techniques, the invention optimizes the digital exhibition content and thereby improves audience satisfaction.

Description

Digital exhibition hall content optimization method and device based on multidimensional interaction data analysis
Technical Field
The invention relates to the technical field of digital exhibition halls, in particular to a digital exhibition hall content optimization method and device based on multidimensional interaction data analysis.
Background
In the current digital age, the fields of culture, art, science and technology are converging and developing rapidly, and the digital exhibition hall, as a key component of this field, provides audiences with rich experiences and educational opportunities. Digital exhibition halls are widely used in museums, science centers, exhibition venues and educational institutions, and have become key places for promoting cultural inheritance, knowledge popularization and audience interaction. However, although the digital exhibition hall plays an important role in spreading culture and knowledge, it still faces a series of significant problems. Current digital exhibition halls typically employ static content presentation, which limits viewer interaction and leaves the personalized experience lacking: viewers must face pre-designed fixed content and cannot customize the experience according to personal interests and needs. Furthermore, the prior art has limited capabilities in data analysis and content optimization, and fails to fully mine the viewers' interaction data to provide a more attractive and personalized digital presentation.
Based on the defects of the prior art, a digital exhibition hall content optimization method and device based on multidimensional interaction data analysis are needed.
Disclosure of Invention
The invention aims to provide a digital exhibition hall content optimization method and device based on multidimensional interaction data analysis, so as to solve the problems. In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
in a first aspect, the present application provides a method for optimizing digital exhibition hall content based on multidimensional interactive data analysis, including:
acquiring first information and second information, wherein the first information comprises digital exhibition hall content data, and the second information comprises audience interaction data;
performing data integration processing according to the first information and the second information, and performing data association on the integrated data to obtain third information;
extracting text and image information according to the third information to obtain fourth information, wherein the fourth information comprises key content points;
carrying out emotion analysis processing according to the fourth information to obtain fifth information, wherein the fifth information comprises content points to be optimized;
performing model construction according to the second information and a preset deep learning mathematical model to obtain sixth information, wherein the sixth information comprises an interest mode and a behavior mode of a spectator;
and constructing a content optimization mathematical model based on the sixth information, and taking the fifth information as an input value of the content optimization mathematical model to obtain seventh information, wherein the seventh information comprises optimized digital exhibition hall content.
In a second aspect, the present application further provides a digital exhibition hall content optimization device based on multidimensional interactive data analysis, including:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring first information and second information, the first information comprises digital exhibition hall content data, and the second information comprises audience interaction data;
the integration module is used for carrying out data integration processing according to the first information and the second information and carrying out data association on the integrated data to obtain third information;
the extraction module is used for extracting text and image information according to the third information to obtain fourth information, wherein the fourth information comprises key content points;
the analysis module is used for carrying out emotion analysis processing according to the fourth information to obtain fifth information, wherein the fifth information comprises content points needing to be optimized;
the construction module is used for carrying out model construction according to the second information and a preset deep learning mathematical model to obtain sixth information, wherein the sixth information comprises an interest mode and a behavior mode of a spectator;
and the optimizing module is used for constructing a content optimizing mathematical model based on the sixth information, and taking the fifth information as an input value of the content optimizing mathematical model to obtain seventh information, wherein the seventh information comprises optimized digital exhibition hall content.
The beneficial effects of the invention are as follows:
the invention can accurately predict the behaviors and interests of audiences by using the deep learning model, is beneficial to better understand the demands of the audiences in the digital exhibition hall and provides personalized recommendation; through emotion analysis, topic modeling, deep learning and other technologies, the method can optimize the digital display content, thereby improving the satisfaction of audience.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a digital exhibition hall content optimization method based on multidimensional interactive data analysis according to an embodiment of the invention;
fig. 2 is a schematic structural diagram of a digital exhibition hall content optimizing device based on multidimensional interactive data analysis according to an embodiment of the present invention.
The marks in the figure: 1. an acquisition module; 2. an integration module; 21. a first integration unit; 22. a first mining unit; 23. a first construction unit; 24. a first mapping unit; 3. an extraction module; 31. a first processing unit; 32. a first identification unit; 33. a first fusion unit; 34. a second modeling unit; 4. an analysis module; 41. a first clustering unit; 42. a first analysis unit; 43. a first extraction unit; 44. a second fusion unit; 45. a first screening unit; 5. a construction module; 51. a second extraction unit; 52. a second construction unit; 53. a first prediction unit; 54. a second mapping unit; 55. a second mining unit; 6. an optimization module; 61. a second modeling unit; 62. a second processing unit; 63. a first optimizing unit.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Example 1:
the embodiment provides a digital exhibition hall content optimization method based on multidimensional interaction data analysis.
Referring to fig. 1, the method is shown to include steps S100, S200, S300, S400, S500, and S600.
Step S100, acquiring first information and second information, wherein the first information comprises digital exhibition hall content data, and the second information comprises audience interaction data.
The digital exhibition hall content data comprises the content of digital exhibition, education or entertainment activities, covering various forms of media such as text, images, video and sound. The audience interaction data includes the behavioral and feedback data of the audience in the digital exhibition hall. These data record how the viewer interacted with the digital exhibition content, such as clicking, browsing, viewing and commenting. The viewer interaction data also includes the viewer's personal information, interest tags, historical behavior and the like, which helps to better understand the viewer's needs and preferences.
And step S200, performing data integration processing according to the first information and the second information, and performing data association on the integrated data to obtain third information.
Wherein the digitized exhibition hall content data and the audience interaction data come from different sources and have different formats and structures. The task of data integration is to combine them into a unified data set for subsequent processing. Once the data integration is complete, the next task is to establish an association between the data, which may be accomplished by some association rules or model. Therefore, the data from two different sources can be integrated into one data set through data integration, so that the data processing flow is simplified, and the problem of data inconsistency is reduced. Meanwhile, through data association, connection between the audience and the digital exhibition hall content can be established, so that the interests and behaviors of the audience can be known, and a foundation is provided for subsequent content optimization and personalized recommendation.
It should be noted that step S200 includes step S210, step S220, step S230, and step S240.
And step S210, carrying out data integration and format standardization processing according to the first information to obtain the exhibition hall content data set.
In this step, the data may include text descriptions, images, video and other forms, which may exist in different formats, structures and naming schemes and therefore require integration. For example, an artwork may have a text description, author information and related pictures stored in different database tables or files. During data integration, this information must be consolidated into one data set to ensure consistency and matching of fields. Meanwhile, data from different sources may use different formats and standards, such as date formats, units and naming conventions. In this step, data format standardization is completed through operations such as unifying dates into a specific format and converting measurements into the same units, which ensures the consistency and usability of the data, eliminates mismatches and inconsistencies among data from different sources, and provides a clean and consistent data basis for subsequent data analysis and mining.
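As an illustration of this integration and standardization step, the following is a minimal sketch using pandas; the source files, column names and unit conversion are hypothetical assumptions, not part of the patent.

```python
import pandas as pd

# Load content records from two hypothetical sources with differing schemas.
descriptions = pd.read_csv("descriptions.csv")  # e.g. columns: item_id, title, desc, created
media_meta = pd.read_csv("media_meta.csv")      # e.g. columns: item_id, media_type, size_kb, date

# Standardize the date fields into one format before merging.
descriptions["created"] = pd.to_datetime(descriptions["created"]).dt.strftime("%Y-%m-%d")
media_meta["date"] = pd.to_datetime(media_meta["date"]).dt.strftime("%Y-%m-%d")

# Unify units: suppose one source stores sizes in KB, the target schema uses MB.
media_meta["size_mb"] = media_meta["size_kb"] / 1024.0

# Merge into a single exhibition hall content data set keyed on item_id.
content_dataset = descriptions.merge(
    media_meta[["item_id", "media_type", "size_mb", "date"]],
    on="item_id", how="outer",
)
```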
And S220, carrying out association rule mining according to the second information, and obtaining an event sequence by considering the interaction relation, the tour time stamp interval and the personal attribute of the audience, wherein the event sequence comprises an interaction mode and an activity sequence.
This step mainly uses association rule mining techniques to analyze the interactive data of the audience, including their interactive behavior in the digital exhibition hall, time stamps (time records) and personal attributes. Association rule mining techniques aim to discover relationships and laws between different interactions to reveal common patterns of behavior of viewers when browsing digital exhibition halls, such as viewing related comments or sharing to social media after viewing a certain class of exhibits.
Thus, the mined association rules can be integrated into an event sequence by considering the individual attributes of the viewer and the time stamp interval. The sequence of events includes the order and pattern of the viewer's interactions in the digital display, helping us understand the behavior habits and interests of the viewer in the digital display.
And step S230, performing model construction processing according to the audience identifier, the exhibition hall content data set and the event sequence in the second information to obtain a data association model.
Wherein the viewer identifier is a unique identifier for each viewer, an identification associated with its mobile application or digitized booth account. The identifier is used to track the activity and personal information of the viewer.
Further, a Bayesian network is introduced in this step to construct the data association model. Bayesian networks represent the dependencies between variables in a probabilistic manner, allowing reasoning in an uncertain environment. Since the interests and behavior of the audience typically carry some uncertainty, a Bayesian network can handle this situation well.
Thus, the data correlation model can be used to identify what exhibition content the audience interacted with, as well as the mode and frequency of interaction. This helps us to better understand the relationship between the audience and the exhibition content.
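As a hedged illustration of such a Bayesian-network data-association model, the following sketch uses the third-party pgmpy library on an invented toy dataset; the variables and dependency structure are assumptions for demonstration only, not the patent's actual model.

```python
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Hypothetical discrete interaction records: viewer attribute, content, interaction.
data = pd.DataFrame({
    "age_group":    ["young", "young", "senior", "senior", "young"],
    "content_type": ["video", "image", "text", "video", "video"],
    "interacted":   [1, 1, 0, 0, 1],
})

# Dependency structure: viewer attribute influences content choice and interaction.
model = BayesianNetwork([("age_group", "content_type"),
                         ("content_type", "interacted")])
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Query the probability that a young viewer interacts, reasoning under uncertainty.
infer = VariableElimination(model)
print(infer.query(variables=["interacted"], evidence={"age_group": "young"}))
```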
And step S240, performing association mapping processing according to the data association model to obtain third information, wherein the third information comprises audience personal information, audience interaction data and data-based exhibition hall content data associated with the audience personal information and the audience interaction data.
It will be appreciated that the data correlation model takes into account the multidimensional information of the viewer's attributes, behaviour, content preferences etc. and matches the viewer with their associated digitized exhibition content data of interest through complex relational network analysis. Through the association mapping process, effective contact between the audience and the digital exhibition hall content can be established, so that personal information and interaction behavior of the audience can be mapped to specific content, and therefore more accurate content recommendation, personalized experience and deeper understanding of audience requirements are achieved.
And step S300, extracting text and image information according to the third information to obtain fourth information, wherein the fourth information comprises key content points.
In this step, the text data is first analyzed by natural language processing technology, and the keywords and topic information are extracted therefrom, so as to facilitate understanding of the key concepts and topics in the exhibition hall content, and further provide information with higher relevance to the audience.
Second, using computer vision techniques, the image and video data is processed to identify key objects and scenes to enable the digital display to capture the viewer's visual interest in the display and display space. Finally, by combining the subjects and keywords in the text information with the objects and scenes in the image information, the content gist of the digital exhibition hall is more comprehensively reflected.
Further, the step S300 includes a step S310, a step S320, a step S330, and a step S340.
And step S310, performing natural language processing according to the text data in the third information, and obtaining text information through keyword extraction and topic analysis, wherein the text information comprises keywords and topic information.
It will be appreciated that, first, natural language processing techniques analyze and process the text data of a digitized exhibition hall, including the operations of word segmentation, part-of-speech tagging, syntactic analysis, etc. of the text, in order to convert the text data into a form that can be understood and processed by a computer.
Further, by analyzing factors such as word frequency, relevance, importance, etc. in the text, the system can automatically determine which words have significance in the text and are then regarded as keywords. The keywords described above are typically associated with the core concepts and topics of the text content, and thus may provide key insights about the text content. Meanwhile, by using the topic modeling technology, the system can identify the hidden topic structure in the text so as to deeply understand the text content and classify the text content into different topic categories, thereby improving the understanding and analysis capability of the text content, being beneficial to better meeting the interests and demands of audiences and improving the individuation degree and user experience of the digital exhibition hall.
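A minimal sketch of the keyword-extraction side of this step, using scikit-learn's TF-IDF (one plausible word-frequency/importance baseline; the patent does not name a specific algorithm) on invented exhibit descriptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Bronze ritual vessel from the Shang dynasty, cast with animal motifs",
    "Interactive projection recreating a Song dynasty market street",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# Report the top-3 weighted terms per document as candidate keywords.
for row in tfidf.toarray():
    top = sorted(zip(terms, row), key=lambda t: -t[1])[:3]
    print([term for term, _ in top])
```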
Step S320, performing object detection processing according to the image and video data of the third information, and performing scene recognition processing on the detected object to obtain image-video information, wherein the image-video information comprises key objects and scene information.
It will be appreciated that object detection is intended to identify and locate a particular object or object in an image or video. Preferably, the process uses convolutional neural networks to extract key features of the image in order to effectively detect and tag objects in the digital display.
Where scene recognition refers to the recognition of the overall environment and scene in an image or video, such as the subject of an exhibition, place or atmosphere, etc., which can be achieved by analyzing the content and background of the image.
Furthermore, the computer vision model learns to recognize scene information related to the digital exhibition hall content so as to better understand the environment the audience is in, and the results of object detection and scene recognition are integrated to obtain the image-video information. This improves the understanding and analysis of the multimedia content, better meets the audience's expectations, and increases the attraction and interactivity of the digital exhibition hall.
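As one plausible realization of the object-detection step (the patent specifies only a convolutional neural network), the following sketch uses a pretrained torchvision Faster R-CNN; the image path and confidence cutoff are placeholders.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = convert_image_dtype(read_image("exhibit_photo.jpg"), torch.float)
with torch.no_grad():
    detections = model([img])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep confident detections as the key objects for scene-level analysis.
keep = detections["scores"] > 0.8
print(detections["labels"][keep], detections["boxes"][keep])
```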
And step S330, constructing a multidimensional data fusion model based on a preset multimodal neural network, and carrying out fusion processing on text information and image-video information according to the multidimensional data fusion model to obtain comprehensive representation.
In the multi-modal neural network, each data mode is processed through an independent neural network branch, and then features of different modes are fused to obtain richer information representation.
Further, the step of constructing the multi-dimensional data fusion model based on the preset multi-modal neural network comprises the following steps:
step S331, defining an improved network structure, wherein a recurrent neural network is used for capturing timing information and semantic associations in text for the text data. And taking the difference of different exhibits or scenes into consideration for the image-video data, introducing a multi-scale convolutional neural network to extract visual features.
The recurrent neural network can consider not only the meaning of each word, but also the sequence and the context relation between the words, so that semantic content and emotion information in text data can be better understood. Meanwhile, the multi-scale convolutional neural network can pay attention to the characteristics of different sizes at the same time, is beneficial to capturing various characteristics in image-video data, and accurately identifies visual differences of different exhibits or scenes.
In step S332, the attention mechanism is introduced into the fusion layer to adaptively adjust the weights of the text and the image-video features according to the interactive data and the personal attribute of the audience, so as to better reflect the interests of the audience.
In this embodiment, text and image-video information may be automatically given different importance according to the viewer's behavior and attributes through an attention mechanism.
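A hedged PyTorch sketch of such an attention-gated fusion layer follows: scalar weights over the text and image-video features are computed from the features together with a viewer-profile vector. All dimensions and the gating form are illustrative assumptions, not the patent's actual architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, text_dim=128, image_dim=128, profile_dim=16):
        super().__init__()
        # Score each modality from its features plus the viewer profile.
        self.score = nn.Linear(text_dim + image_dim + profile_dim, 2)

    def forward(self, text_feat, image_feat, profile):
        logits = self.score(torch.cat([text_feat, image_feat, profile], dim=-1))
        weights = torch.softmax(logits, dim=-1)          # (batch, 2) modality weights
        fused = (weights[:, :1] * text_feat
                 + weights[:, 1:2] * image_feat)         # adaptively weighted sum
        return fused, weights

fusion = AttentionFusion()
t, v, p = torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 16)
fused, w = fusion(t, v, p)
print(fused.shape, w[0])  # torch.Size([4, 128]) and per-modality weights
```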
Step S333, constructing a multi-mode loss function, and simultaneously considering the prediction tasks of texts and images-videos so as to learn the relation among various data more comprehensively and improve the overall performance of the model.
The purpose of the multi-modal loss function in this step is to improve the performance and generalization ability of the model by co-training multiple tasks. Specifically, the multi-modal loss function is expressed as:

$$L_{multimodal} = \alpha \cdot L_{text} + \beta \cdot L_{image\text{-}video}$$

where $L_{multimodal}$ is the multi-modal loss function; $L_{text}$ is the loss function of the text task; $L_{image\text{-}video}$ is the loss function of the image-video task; and $\alpha$ and $\beta$ are weight coefficients that balance the two tasks and can be adjusted according to actual requirements.
Because in practice certain keywords or topics may be more important, this embodiment introduces a treatment of the class imbalance problem for the text task $L_{text}$. Specifically, a weighted cross-entropy loss function is introduced, adjusting the loss weights of different categories according to their importance, so that classification errors on more critical keywords or topics carry a larger penalty weight and the content can be optimized more effectively. The loss function of the improved text task is as follows:

$$L_{text} = -\frac{1}{N}\sum_{i=1}^{N} w_i \, y_i \log \hat{y}_i$$

where $L_{text}$ is the loss function of the text task; $N$ is the number of samples; $i$ is the index of a sample; $w_i$ is the loss weight of the $i$-th sample; $y_i$ is the actual label of the $i$-th sample, representing its true class; and $\hat{y}_i$ is the model's prediction output for the $i$-th sample, representing the predicted class of that sample.
In the image-video task, a structural similarity loss is introduced to better measure the similarity between images or videos, which helps capture the detail and structure information of the image or video content and improves model performance. Specifically, the loss function of the image-video task is as follows:

$$L_{image\text{-}video} = \frac{1}{N}\sum_{i=1}^{N}\Big[\lambda_1\,(y_i - \hat{y}_i)^2 + \lambda_2\,\big(1 - \mathrm{SSIM}(y_i,\hat{y}_i)\big)\Big]$$

where $L_{image\text{-}video}$ is the loss function of the image-video task; $\lambda_1$ and $\lambda_2$ are the weights of the two loss terms, balancing the contributions of the mean-square-error loss and the structural-similarity loss; $N$ is the number of samples; $i$ is the index of a sample; $y_i$ is the actual label of the $i$-th sample; $\hat{y}_i$ is the model's prediction output for the $i$-th sample; and $\mathrm{SSIM}$ is the structural similarity index, used to measure the similarity between images or videos.
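As a concrete illustration of the two task losses and their weighted combination, the following is a minimal PyTorch sketch; the SSIM term is left as a simple stub because the patent does not give its exact form, and all weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ssim(a, b):
    # Placeholder structural-similarity term; a full SSIM would compare local
    # means, variances and covariances of the two image batches.
    return F.cosine_similarity(a.flatten(1), b.flatten(1)).clamp(0, 1)

def text_loss(logits, labels, sample_weights):
    # Weighted cross-entropy: more critical keywords/topics get larger w_i.
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (sample_weights * per_sample).mean()

def image_video_loss(pred, target, lam1=1.0, lam2=0.5):
    mse = F.mse_loss(pred, target)                  # pixel-level error
    structural = (1.0 - ssim(pred, target)).mean()  # structure-level error
    return lam1 * mse + lam2 * structural

def multimodal_loss(logits, labels, w, pred, target, alpha=0.6, beta=0.4):
    # Weighted combination of the two task losses, as in the formula above.
    return alpha * text_loss(logits, labels, w) + beta * image_video_loss(pred, target)
```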
Step S334, the learning rate and regularization parameters of the network are adjusted through cross-validation.
In deep learning, the selection of appropriate learning rate and regularization parameters is critical because it directly affects the performance and generalization ability of the model.
The learning rate determines the update step size of the model parameters in each iteration, while the regularization parameters are used to control the complexity of the model to prevent overfitting.
Preferably, this step performs network tuning through a cross-validation strategy based on modality-correlation modeling: a K-fold cross-validation strategy ensures that each cross-validation fold contains data from all modalities (text, images and video). This helps to more fully evaluate the interactions between the multimodal data and thus optimize the performance of the multimodal neural network.
Further, by using data of different modalities alternately for the validation set and the training set, the model can better understand and utilize information relationships between different modalities during training and validation. Meanwhile, the strategy can be also used for super-parameter tuning, so that the multi-modal neural network can obtain good performance under the condition of different modal data. Finally, the accuracy and efficiency of the digital exhibition hall content optimization method can be improved through the cross-validation strategy of the modal correlation modeling, so that the method is more suitable for complex application scenes of multi-modal data.
Step S335, using cross-validation technique to evaluate the performance of the multi-modal neural network, performing different training and evaluation according to the data of different exhibition halls and audience groups.
It can be understood that through multiple cross-validation experiments, performance indexes of the model under different audience and exhibition hall situations, such as accuracy, recall rate, F1 score and the like, can be obtained, so that the performance of the model under different data situations can be accurately known, and corresponding improvement and adjustment can be made, so that the digital exhibition hall content optimization method can achieve good effects under various application situations.
And step S340, modeling the topics in the comprehensive representation based on a preset latent Dirichlet allocation (LDA) mathematical model, and obtaining the fourth information by identifying and screening the potential topic structures in the comprehensive representation.
Latent Dirichlet allocation is a probabilistic model for extracting topic information from text data. In this embodiment, however, the conventional LDA is improved: a bi-directional association mechanism is established by introducing image-to-text associations, so that the model applies not only to text but also to the multi-modal data in the comprehensive representation.
Therefore, after the LDA processing in this application, text and images can influence each other, so that the topic information of the content is better captured. For example, when the text describes a scene and content in a picture, the improved LDA mathematical model can simultaneously identify the image features related to the text, thereby understanding the subject matter of the content more fully.
Through this optimization, the application of the LDA mathematical model in the digital exhibition hall becomes more flexible: it can not only handle topic modeling for text, but also integrate multi-modal data into the topic modeling, thereby providing more comprehensive and accurate topic information.
Further, the application selects the topics which are most relevant and meaningful to the digital exhibition hall content from all the extracted topics according to application requirements. The screening process described above may be based on different criteria such as the weight of the topic, relevance to the viewer's interests, etc. The resulting fourth information includes topic structures that are considered to be most relevant and valuable in the composite representation, and extraction of the topic structures facilitates a deeper understanding of the content characteristics of the digitized exhibition hall content, providing key information for subsequent content optimization and interest pattern construction.
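For reference, the following sketch shows conventional, text-only LDA topic extraction and top-word inspection with scikit-learn on invented captions; the patent's improved bi-directional image-text variant is not publicly specified, so only the standard baseline is illustrated.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

captions = [
    "bronze vessel ritual dynasty museum",
    "projection interactive market street light",
    "bronze mirror dynasty inscription",
    "light installation immersive projection",
]

counts = CountVectorizer().fit(captions)
X = counts.transform(captions)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Show each topic's top words; screening would then keep only the topics
# most relevant to the exhibition content.
terms = counts.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-3:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```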
And step S400, carrying out emotion analysis processing according to the fourth information to obtain fifth information, wherein the fifth information comprises content points needing to be optimized.
Emotion analysis is a natural language processing technique that reveals the emotional tone contained in text content, such as positive, negative or neutral. In a digital exhibition hall, targeted improvements or optimizations can be made by analyzing the emotions the audience exhibits when interacting with, reading or viewing different content, e.g., excitement, satisfaction or frustration, so as to identify which content points are welcomed by the audience.
The step S400 includes a step S410, a step S420, a step S430, a step S440, and a step S450.
And step S410, performing K-means clustering processing according to the fourth information, grouping the content points in the fourth information into different clusters, and obtaining a clustering result by distributing the content points with similar characteristics into the nearest clusters.
It can be appreciated that the content points in the digital exhibition hall can be organized according to the similarity through K-means clustering, so that the content structure of the exhibition hall can be better understood.
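A minimal sketch of this clustering step with scikit-learn's KMeans; the random feature matrix stands in for embedded content points and is purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

content_features = np.random.rand(40, 16)  # 40 content points, 16-d features
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(content_features)

# Each content point is assigned to the cluster of its nearest centroid.
print(kmeans.labels_[:10], kmeans.cluster_centers_.shape)
```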
And S420, performing text emotion analysis on the text content of each cluster in the clustering result to obtain a text emotion feedback result.
This step performs emotion analysis for the text content within each cluster by analyzing the vocabulary, grammar, and context in the text to determine emotion tendencies in the text, such as positive, negative, or neutral emotion. The text emotion feedback result is the emotion tendencies of the text content within each cluster. By text emotion analysis we can quantify the emotion characteristics of the text content within each cluster.
And S430, extracting image emotion characteristics of the image content of each cluster in the clustering result based on a preset convolutional neural network mathematical model to obtain an image emotion characteristic result.
In this step, emotion-related features, including color intensity, composition, emotion expression, etc., may be extracted from the images in each cluster via a convolutional neural network to facilitate understanding of emotion or emotion elements conveyed by the image content, such as emotional states of happiness, sadness, anger, etc., as a basis for subsequent comprehensive analysis. Further, through the extracted image emotion characteristics, emotion attributes of image contents in each cluster can be further known, and further, the digital exhibition hall manager is facilitated to better understand emotion connection between audiences and the image contents.
And S440, carrying out fusion processing on the text emotion feedback result and the image emotion characteristic result to obtain a comprehensive feedback result, and carrying out weighted average calculation on the comprehensive feedback result in each cluster to obtain the comprehensive emotion score of each cluster.
The emotion information of different modalities is converted into a common representation form so that it can be fused.
Preferably, this step is accomplished by mapping text emotion and image emotion to a shared emotion space. The fused result is comprehensive emotion feedback of the content points in each cluster, and the comprehensive emotion feedback can comprehensively consider emotion information conveyed by texts and images and reflect comprehensive emotion experience of audiences on the cluster content.
Specifically, in order to obtain the final comprehensive emotion score of each cluster, the embodiment adopts a weighted average method, and weight distribution is performed according to the importance of different clusters, and also can be adjusted according to personalized emotion preference of the audience. Through weighted average, emotion scores of each cluster can be obtained to reflect the overall emotion experience of the audience to different contents in the digital exhibition hall.
Further, mapping text emotion and image emotion to a shared emotion space is implemented by:
and step S441, performing time sequence capturing processing and semantic association according to a text emotion feedback result to obtain text emotion representation.
The timing capture process helps to capture the time correlation in the text, while the semantic correlation helps to understand the semantic relationship between different elements in the text, so that the resulting emotional representation of the text will be more informative and expressive.
And step S442, performing image feature extraction processing according to the image emotion feature result to obtain image emotion representation.
In this step, the image feature extraction process may be performed using a convolutional neural network, which aims to capture emotion-related features in the image, and further transform the image emotion information into an operable representation form through the emotion-related features.
And S443, mapping the text emotion representation and the image emotion representation into a shared emotion space based on a preset emotion alignment network, and aligning the text mapping result and the image mapping result to obtain the shared emotion representation.
Emotion alignment networks typically employ a Siamese network architecture, which includes two branches: one for processing the text emotion representation and the other for processing the image emotion representation.
The two branches map text and image emotion to a shared emotion space, respectively, to share the same emotion representation space. Further, the goal of the emotion alignment network is to minimize the distance between the text emotion representations and the image emotion representations to ensure that they are consistent in the shared emotion space, thereby achieving efficient alignment of text and image emotion.
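A hedged PyTorch sketch of such a two-branch alignment network follows: each branch projects one modality into a shared emotion space, a cosine distance serves as the alignment objective, and the element-by-element product used later in step S444 is also shown. All dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionAlignment(nn.Module):
    def __init__(self, text_dim=128, image_dim=256, shared_dim=64):
        super().__init__()
        self.text_branch = nn.Sequential(nn.Linear(text_dim, shared_dim), nn.Tanh())
        self.image_branch = nn.Sequential(nn.Linear(image_dim, shared_dim), nn.Tanh())

    def forward(self, text_emotion, image_emotion):
        # Project both modalities into the same normalized emotion space.
        zt = F.normalize(self.text_branch(text_emotion), dim=-1)
        zi = F.normalize(self.image_branch(image_emotion), dim=-1)
        return zt, zi

model = EmotionAlignment()
zt, zi = model(torch.randn(8, 128), torch.randn(8, 256))
align_loss = (1 - F.cosine_similarity(zt, zi)).mean()  # pull paired representations together
fused = zt * zi  # element-by-element product as the joint feedback (cf. step S444)
```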
And step S444, performing element-by-element multiplication calculation according to the shared emotion representation to obtain comprehensive emotion feedback.
The result of the element-by-element multiplication calculation reflects the comprehensive emotion feedback of the content points in each cluster, and the combined action of text and image emotion is reflected, so that the emotion experience of the audience on the cluster content is more comprehensively described.
And S450, screening according to the comprehensive emotion score and a preset threshold value to obtain fifth information.
It can be appreciated that this step screens out content points with specific emotional tendencies based on the comprehensive emotional feedback of the audience in combination with a preset threshold for further optimization or adjustment in the digital exhibition hall. Specifically, according to the relation between the comprehensive emotion score and the threshold value, the content points are divided into different emotion tendency categories, such as positive, neutral, negative and the like, so that emotion feedback of audience to different contents is better understood, and guidance is provided for improvement of exhibition.
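A small numeric sketch of the weighted averaging (step S440) and threshold screening (step S450); the cluster scores, weights and threshold below are invented values.

```python
import numpy as np

cluster_scores = {                    # per-member emotion scores within each cluster
    "cluster_a": np.array([0.8, 0.7, 0.9]),
    "cluster_b": np.array([-0.2, -0.5, 0.1]),
}
member_weights = {                    # importance weights for the weighted average
    "cluster_a": np.array([0.5, 0.3, 0.2]),
    "cluster_b": np.array([0.4, 0.4, 0.2]),
}

THRESHOLD = 0.0  # below this, a cluster's content points need optimization
for name, scores in cluster_scores.items():
    score = np.average(scores, weights=member_weights[name])
    label = "needs optimization" if score < THRESHOLD else "well received"
    print(name, round(float(score), 3), label)
```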
Step S500, analyzing the second information through a preset deep learning mathematical model to obtain sixth information, wherein the sixth information comprises interest patterns and behavior patterns of audiences, the interest patterns are patterns of the audiences on specific interest fields or topics in the digital exhibition hall, and the behavior patterns are patterns of interaction and participation modes of the audiences in the digital exhibition hall.
The interest pattern expresses the audience's degree of interest in specific content or topics in the digital exhibition hall, which can be quantified as the audience's interaction frequency with related content, viewing duration, number of interactive comments and the like. For example, the more often a viewer interacts with certain content, or the longer they stay on it, the higher their interest in that content may be considered. The digital exhibition hall records this interaction behavior data and quantifies it for analysis.
In addition, the behavior patterns include the manner and habit of the viewer's behavior in the digital exhibition hall. The behavior pattern may be quantified as the access frequency of the viewer, access time, number of views of a particular content, etc. For example, a viewer may visit a digital exhibition hall weekly, which may be quantified as a pattern of behavior that is frequently visited.
Further, this step employs a deep learning model, such as a neural network, to process the viewer's interaction data, including their behavior, preferences, and interests in the digital display, and the analysis process considers a variety of factors, such as the viewer's interaction pattern, tour time stamp intervals, and personal attributes, to more fully understand their behavior and interests in the digital display.
Further, the step S500 includes a step S510, a step S520, a step S530, a step S540, and a step S550.
And S510, performing feature engineering processing according to the audience behavior data and the personal information in the second information, and obtaining key feature data by extracting the interaction frequency information, the interaction duration information and the access time.
This step converts the viewer's behavioral data and personal information into a feature set usable by the deep learning model. The interaction frequency information is the frequency of interaction between the audience and different contents in the digital exhibition hall, calculated from the audience's behavior data, and reflects the audience's degree of attention to specific exhibits or information.
The interaction duration information is a time interval between a start time and an end time of the interaction when the audience interacts each time, and is used for reflecting the depth interaction degree of the audience and the content.
Thus, by analyzing the time of the viewer's visit in the digital display, including the specific time period, point of time of the visit, etc., one can gain insight into the viewer's behavioral patterns and activity time preferences. Meanwhile, the personal information of the audience, such as age, sex, hobbies and interests, can be coded so that the model can understand the influence of the personal information on the behavior of the audience.
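As an illustration of this feature engineering, the following pandas sketch derives interaction frequency, interaction duration and habitual access time from a hypothetical interaction log; the log schema is an assumption.

```python
import pandas as pd

logs = pd.DataFrame({
    "viewer_id": [1, 1, 2, 2, 2],
    "start": pd.to_datetime(["2024-05-01 10:00", "2024-05-01 10:12",
                             "2024-05-02 14:00", "2024-05-02 14:07",
                             "2024-05-09 14:01"]),
    "end":   pd.to_datetime(["2024-05-01 10:05", "2024-05-01 10:20",
                             "2024-05-02 14:03", "2024-05-02 14:30",
                             "2024-05-09 14:20"]),
})

logs["duration_min"] = (logs["end"] - logs["start"]).dt.total_seconds() / 60
features = logs.groupby("viewer_id").agg(
    interaction_count=("start", "size"),        # interaction frequency
    avg_duration_min=("duration_min", "mean"),  # interaction duration
    usual_hour=("start", lambda s: s.dt.hour.mode().iloc[0]),  # access time
)
print(features)
```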
And step S520, performing model construction and optimization processing based on a preset deep learning mathematical model and key feature data to obtain an audience interest-behavior prediction model.
The prediction model can be used for predicting information such as interest points, behavior tracks, interaction habits and the like of audiences in the digital exhibition hall, so that a manager can conveniently manage the exhibition hall and optimize the content of the exhibition hall, and more personalized digital exhibition experience can be provided.
And step S530, predicting the audience behavior data and the personal information according to the audience interest-behavior prediction model to obtain a prediction result, wherein the prediction result comprises an interest prediction result and a behavior prediction result.
It will be appreciated that this step may analyze the viewer's personal information and historical behavioral data via a viewer interest-behavior prediction model to predict the viewer's interests, including predicted information of viewer interest preferences and preferences, such as which digital exhibits or content the viewer may be interested in, which topics or fields may draw their attention, etc., thereby helping the digital exhibition to better provide content for the viewer's personalized needs.
Second, the model also lends itself to predicting the audience's behavior, including actions that the audience may take, such as which exhibits they may browse, watch time, interact with, etc., to provide information about the audience's behavior habits and trends in the digital display hall, helping the display manager to better understand the audience's behavior patterns.
Therefore, the exhibition hall manager can optimize content display, provide personalized suggestions, improve interaction experience and the like according to the prediction result so as to meet the requirements of audiences and improve the attractiveness and benefit of the digital exhibition hall.
Step S540, using multi-layer perceptrons according to the interest prediction result, and obtaining the interest pattern by mapping the audience behavior and interaction data to different interest categories.
The multi-layer perceptron is a deep learning model, and is composed of a plurality of layers of neurons, so that the complex nonlinear relation of data can be learned. In the present invention, the multi-layered perceptron accepts as input audience behavior and interaction data, including audience click records, viewing time, interaction frequency, etc. The multi-layer perceptron then maps the input data to different interest categories through the computation of the multi-layer neurons.
Therefore, the model can learn the relation between the behaviors of the audience and the interests of the audience through training a large amount of audience data and corresponding interest prediction results, so that the accurate prediction of the interests of the audience is realized. Further, the construction of the interest pattern is helpful for the digital exhibition hall to better know the personalized interests of the audience, thereby providing more relevant and attractive content and improving the participation and satisfaction of the audience.
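A minimal PyTorch sketch of such a multi-layer perceptron mapping behavior features to interest categories; the feature and category counts are illustrative.

```python
import torch
import torch.nn as nn

interest_mlp = nn.Sequential(
    nn.Linear(12, 64),  # 12 behavior features: clicks, dwell time, frequency, ...
    nn.ReLU(),
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 5),   # 5 interest categories, e.g. art / history / tech / ...
)

batch = torch.randn(16, 12)
probs = torch.softmax(interest_mlp(batch), dim=-1)
print(probs.argmax(dim=-1))  # predicted interest category per viewer
```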
And step S550, carrying out association rule mining processing according to the behavior prediction result to obtain a behavior mode.
It will be appreciated that this step can help the digital display hall to better understand the behavioral trends and preferences of the audience, thereby better meeting the needs of the audience and providing a personalized interactive experience.
Preferably, the FP-growth algorithm is used in this embodiment to perform association rule mining, and first, the viewer's behavior data is sorted into a form suitable for algorithm processing, including encoding the data into the form of transactions or item sets, in order to identify frequent patterns. Further, the FP-growth algorithm constructs an FP tree according to the sorted data, wherein nodes of the FP tree represent frequent items, and a link structure of the tree is used for connecting similar item sets.
Specifically, the FP-tree construction process includes scanning the dataset, identifying frequent items, and constructing a tree structure. After the FP-tree is built, frequent patterns may be mined by traversing the tree.
The frequent pattern is a pattern that frequently occurs in the behavior of the viewer, and is used to reveal the behavior tendency and relevance of the viewer.
Finally, association rules are generated through the frequent patterns mined to describe relationships between audience behaviors and provide to digital display manager, e.g., if an audience browses a certain class of exhibits, it is more likely to select certain activities in the next interactions. Thus, the behavior patterns of the audience can be more deeply understood through the FP-growth algorithm, thereby providing a more personalized digital exhibition experience.
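As a concrete illustration of the FP-growth mining described above, the following sketch uses the third-party mlxtend library on invented viewer sessions; the support and confidence thresholds are arbitrary.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

sessions = [                      # each session: the viewer's set of actions
    ["bronze_hall", "comment", "share"],
    ["bronze_hall", "comment"],
    ["media_art", "share"],
    ["bronze_hall", "comment", "media_art"],
]

# Encode the transactions into a one-hot item matrix for FP-growth.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(sessions).transform(sessions), columns=te.columns_)

frequent = fpgrowth(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```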
And S600, constructing a content optimization mathematical model based on the sixth information, and taking the fifth information as an input value of the content optimization mathematical model to obtain seventh information, wherein the seventh information comprises optimized digital exhibition hall content.
The method can automatically adjust the content of the digital exhibition hall according to the feedback and emotion information of the audience, so as to provide the exhibition experience which accords with the interests and demands of the audience, and is beneficial to improving the satisfaction degree of the audience of the exhibition hall, enhancing the participation degree of the audience on the digital exhibition hall, and improving the overall benefit of the exhibition hall.
It should be noted that, the step S600 includes a step S610, a step S620, and a step S630.
And step S610, performing model construction and parameter adjustment processing according to the sixth information and a preset cyclic neural network mathematical model to obtain a content optimization mathematical model.
In this step, the interest pattern and the behavior pattern of the viewer are taken as inputs, and the time-series modeling characteristics of the recurrent neural network are utilized to process the interest pattern and the behavior pattern of the viewer.
The model construction process comprises the following steps: first, the input is defined, including data related to the audience's historical interactions, such as time stamps and personal attributes of the audience.
The parameters of the model are then initialized, which can be achieved by a random initialization method.
The parameters are then gradually adjusted through backpropagation and an optimization algorithm to fit the data. During training, historical audience data can be used to train the recurrent neural network model in a supervised manner, so that the interests and behaviors of future audiences can be predicted.
Finally, through parameter tuning (e.g., adjusting the model's hyperparameters such as the number of layers, the hidden-layer size, the learning rate and regularization), the model's performance on the test data is repeatedly verified to ensure that the model performs well in practical applications.
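A hedged PyTorch sketch of such a recurrent (LSTM) model and one supervised training step follows; the sequence encoding, layer sizes and hyperparameters are illustrative assumptions, not the patent's actual configuration.

```python
import torch
import torch.nn as nn

class InterestRNN(nn.Module):
    def __init__(self, n_categories=20, embed_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_categories, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_categories)

    def forward(self, seq):          # seq: (batch, time) content-category ids
        h, _ = self.lstm(self.embed(seq))
        return self.head(h[:, -1])   # predict the next category from the last step

model = InterestRNN()
# weight_decay acts as the regularization term mentioned above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

seqs = torch.randint(0, 20, (8, 10))  # invented interaction histories
targets = torch.randint(0, 20, (8,))
loss = nn.functional.cross_entropy(model(seqs), targets)
loss.backward()                       # backpropagation step
optimizer.step()
```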
And step S620, combining the fifth information with the content data of the digital exhibition hall according to the content optimization mathematical model, and obtaining a personalized recommendation result through collaborative filtering processing.
Thus, the step first integrates the content points in the fifth information with the content data of the digital exhibition hall to establish a comprehensive content library. Then, by utilizing collaborative filtering technology, the content which is possibly liked by the audience is estimated according to the historical behaviors and interests of the audience.
Collaborative filtering is generally classified into two types: user-based collaborative filtering and item-based collaborative filtering. User-based collaborative filtering considers similarities between a viewer and other viewers, while item-based collaborative filtering considers similarities between content points. The similarity measure may be calculated based on the historical behavioral data of viewers and the characteristics of the content points.
Further, personalized recommendation results are generated according to the collaborative filtering results, and further optimization is performed on the recommendation results through an optimization strategy, including filtering out content unsuitable for audiences, providing ordering of recommendation content and the like, so that the audiences can see the most relevant and attractive content first.
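As one plausible realization of the item-based variant mentioned above, the following sketch scores unseen content points by cosine similarity over a small invented viewer-item interaction matrix.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows = viewers, columns = content points; entries = interaction strength.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

item_sim = cosine_similarity(R.T)  # content-point similarity matrix
np.fill_diagonal(item_sim, 0.0)

# Score unseen items for viewer 0 by similarity-weighted sums of their history.
viewer = R[0]
scores = item_sim @ viewer
scores[viewer > 0] = -np.inf       # mask already-seen content
print("recommend content point:", int(np.argmax(scores)))
```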
Step S630, generating seventh information by using a preset content-generation mathematical model according to the combination of the personalized recommendation result and the original content data of the digital exhibition hall.
Wherein combining the personalized recommendation with the original content data of the digital display includes matching the personalized interests of the viewer with content points in the digital display. The model may generate new content suggestions or modify the original content based on various factors, such as the interests of the viewer, the characteristics of the content points, historical viewer behavior, etc.
Wherein the seventh information generated includes newly added content, modified content or other information related to the digital exhibition hall to reflect the interests and the demands of the audience, and further through personalized optimization processing, the matching with the demands and the interests of the audience is ensured.
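The matching logic of this step can be sketched as follows; the interest tags, the dwell_score field, and the "enrich" action are hypothetical names introduced purely for illustration and do not appear in this embodiment.

```python
# Illustrative sketch of producing "seventh information": content points that a
# viewer cares about but that currently engage weakly are flagged for revision.
viewer_interests = {"bronze ware": 0.9, "calligraphy": 0.6, "ceramics": 0.2}

content_points = [
    {"id": "cp1", "tags": ["bronze ware"], "dwell_score": 0.4},
    {"id": "cp2", "tags": ["ceramics"], "dwell_score": 0.8},
    {"id": "cp3", "tags": ["calligraphy", "bronze ware"], "dwell_score": 0.3},
]

seventh_information = []
for cp in content_points:
    match = max((viewer_interests.get(t, 0.0) for t in cp["tags"]), default=0.0)
    if match > 0.5 and cp["dwell_score"] < 0.5:
        # high interest but weak engagement: suggest enriching this content point
        seventh_information.append({"id": cp["id"], "action": "enrich", "score": match})

print(seventh_information)   # -> suggestions for cp1 and cp3
```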
Example 2:
As shown in fig. 2, the present embodiment provides a digital exhibition hall content optimization device based on multidimensional interaction data analysis, the device comprising:
An acquisition module 1, configured to acquire first information and second information, wherein the first information comprises digital exhibition hall content data and the second information comprises audience interaction data.
An integration module 2, configured to perform data integration processing according to the first information and the second information and to perform data association on the integrated data to obtain third information.
An extraction module 3, configured to perform text and image information extraction according to the third information to obtain fourth information, wherein the fourth information comprises key content points.
An analysis module 4, configured to perform emotion analysis processing according to the fourth information to obtain fifth information, wherein the fifth information comprises content points needing to be optimized.
A construction module 5, configured to analyze the second information through a preset deep learning mathematical model to obtain sixth information, wherein the sixth information comprises the audience's interest pattern and behavior pattern, the interest pattern characterizing the specific fields or topics the audience is interested in within the digital exhibition hall, and the behavior pattern characterizing how the audience interacts and participates in the digital exhibition hall.
An optimization module 6, configured to construct a content optimization mathematical model based on the sixth information and to take the fifth information as an input value of the content optimization mathematical model to obtain seventh information, wherein the seventh information comprises the optimized digital exhibition hall content.
In one embodiment of the present disclosure, the integration module 2 includes:
A first integrating unit 21, configured to perform data integration and format normalization processing according to the first information to obtain an exhibition hall content data set.
A first mining unit 22, configured to perform association rule mining according to the second information and, by considering the interaction relationships, the visit timestamp intervals, and the audience's personal attributes, obtain an event sequence comprising interaction patterns and activity sequences.
A first construction unit 23, configured to perform model construction processing according to the audience identifier in the second information, the exhibition hall content data set, and the event sequence to obtain a data association model.
A first mapping unit 24, configured to perform association mapping processing according to the data association model to obtain third information, where the third information includes the audience's personal information, the audience's interaction data, and the digital exhibition hall content data associated therewith.
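By way of illustration, the association carried out by these units can be sketched as a keyed join of interaction, viewer, and content records, with visit-gap intervals derived for the downstream mining; the column names and sample rows below are assumptions, not a prescribed schema.

```python
# Sketch of the integration module's data association: interaction records are
# joined with viewer attributes and exhibition hall content records, yielding
# the "third information". Visit-gap intervals feed the association rule mining.
import pandas as pd

interactions = pd.DataFrame({
    "viewer_id": ["v1", "v1", "v2"],
    "content_id": ["c1", "c2", "c1"],
    "timestamp": pd.to_datetime(["2023-09-01 10:00", "2023-09-01 10:12",
                                 "2023-09-02 14:30"]),
})
viewers = pd.DataFrame({"viewer_id": ["v1", "v2"], "age_group": ["18-25", "36-45"]})
contents = pd.DataFrame({"content_id": ["c1", "c2"],
                         "title": ["Bronze Hall", "Ceramics Wing"]})

third_information = (interactions
                     .merge(viewers, on="viewer_id")     # attach personal attributes
                     .merge(contents, on="content_id")   # attach content data
                     .sort_values(["viewer_id", "timestamp"]))
third_information["gap_min"] = (third_information
                                .groupby("viewer_id")["timestamp"]
                                .diff().dt.total_seconds() / 60)
print(third_information)
```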
In one embodiment of the present disclosure, the extraction module 3 includes:
The first processing unit 31 is configured to perform natural language processing according to the text data in the third information, and obtain text information through keyword extraction and topic analysis, where the text information includes keywords and topic information.
The first identifying unit 32 is configured to perform object detection processing according to the image and video data of the third information, and perform scene recognition processing on the detected object to obtain image-video information, where the image-video information includes a key object and scene information.
The first fusion unit 33 is configured to obtain a multidimensional data fusion model based on a preset multimodal neural network, and perform fusion processing on text information and image-video information according to the multidimensional data fusion model to obtain a comprehensive representation.
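A possible sketch of the attention-based weighting inside such a fusion model is shown below; the embedding sizes and the simple two-branch gating scheme are assumptions chosen for brevity rather than a mandated architecture of this embodiment.

```python
# Sketch of attention-weighted fusion of text and image-video features into a
# single comprehensive representation. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, text_dim=64, image_dim=128, fused_dim=64):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.image_proj = nn.Linear(image_dim, fused_dim)
        self.attn = nn.Linear(2 * fused_dim, 2)        # one weight per modality

    def forward(self, text_feat, image_feat):
        t = self.text_proj(text_feat)
        v = self.image_proj(image_feat)
        w = torch.softmax(self.attn(torch.cat([t, v], dim=-1)), dim=-1)
        # adaptively weight the two modalities before combining them
        return w[..., 0:1] * t + w[..., 1:2] * v       # comprehensive representation

fused = FusionNet()(torch.randn(8, 64), torch.randn(8, 128))
print(fused.shape)   # torch.Size([8, 64])
```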
The first modeling unit is configured to model the topics in the comprehensive representation based on a preset latent Dirichlet allocation (LDA) mathematical model, and obtains fourth information by identifying and screening the potential topic structures in the comprehensive representation.
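A minimal latent Dirichlet allocation sketch consistent with this unit is given below, using scikit-learn; the toy corpus and the choice of two topics are illustrative assumptions.

```python
# LDA sketch: surface candidate topic structures from the fused text
# representation; the corpus here is an illustrative stand-in.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "bronze ware exhibit ancient casting ritual vessels",
    "ceramics glaze kiln porcelain dynasty exhibit",
    "bronze ritual vessel inscription casting",
]
vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(f"topic {k}:", [terms[i] for i in topic.argsort()[-4:]])
```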
In one embodiment of the present disclosure, the analysis module 4 includes:
The first clustering unit 41 is configured to perform K-means clustering according to the fourth information, group content points in the fourth information into different clusters, and obtain a clustering result by assigning content points with similar features to the nearest clusters.
The first analysis unit 42 is configured to perform text emotion analysis on the text content of each cluster in the clustering result to obtain a text emotion feedback result.
The first extraction unit 43 is configured to perform image emotion feature extraction on the image content of each cluster in the clustering result based on a preset convolutional neural network mathematical model to obtain an image emotion feature result.
The second fusion unit 44 is configured to fuse the text emotion feedback result and the image emotion feature result to obtain a comprehensive feedback result, and to calculate a weighted average of the comprehensive feedback results in each cluster to obtain a comprehensive emotion score for each cluster.
The first filtering unit 45 is configured to filter and obtain fifth information according to the comprehensive emotion score and a preset threshold.
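The pipeline formed by units 41 to 45 can be sketched end to end as follows; the random feature matrix, the stand-in emotion scores, and the fusion weights and threshold are all assumptions for demonstration.

```python
# Sketch of the analysis module: cluster content points, fuse per-point text and
# image emotion scores, average them per cluster, and keep low-scoring clusters
# as the "fifth information". All values are illustrative stand-ins.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.random((12, 8))                 # one feature row per content point
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

text_scores = rng.random(12)                   # stand-in text sentiment per point
image_scores = rng.random(12)                  # stand-in image emotion per point
w_text, w_image, threshold = 0.6, 0.4, 0.5     # assumed weights and threshold

fifth_information = []
for c in range(3):
    fused = w_text * text_scores[labels == c] + w_image * image_scores[labels == c]
    if fused.mean() < threshold:               # weak comprehensive emotion score
        fifth_information.append(c)            # this cluster needs optimization
print("clusters to optimize:", fifth_information)
```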
In one embodiment of the present disclosure, the build module 5 includes:
The second extracting unit 51 is configured to perform feature engineering processing according to the audience behavior data and personal information in the second information, and obtain key feature data by extracting interaction frequency information, interaction duration information, and access time.
The second construction unit 52 is configured to perform model construction and optimization processing based on a preset deep learning mathematical model and the key feature data to obtain an audience interest-behavior prediction model.
The first prediction unit 53 is configured to predict the audience behavior data and personal information according to the audience interest-behavior prediction model to obtain prediction results, where the prediction results include an interest prediction result and a behavior prediction result.
The second mapping unit 54 is configured to obtain the interest pattern by using a multi-layer perceptron, according to the interest prediction result, to map the audience's behavior and interaction data to different interest categories.
The second mining unit 55 is configured to perform association rule mining processing according to the behavior prediction result to obtain the behavior pattern.
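A hedged sketch of the perceptron-based interest mapping in the second mapping unit 54 might look as follows; the feature dimensionality, category count, and randomly generated training data are illustrative assumptions.

```python
# MLP sketch: map behavior/interaction features (interaction frequency, duration,
# visit time, ...) to interest categories. Training data are random stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 6))                 # behavior and interaction features
y = rng.integers(0, 4, 200)              # interest category per viewer

mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X, y)
interest_pattern = mlp.predict(rng.random((5, 6)))   # map new viewers to categories
print("predicted interest categories:", interest_pattern)
```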
In one embodiment of the present disclosure, the optimization module 6 includes:
The second modeling unit 61 is configured to perform model construction and parameter adjustment processing according to the sixth information and a preset recurrent neural network mathematical model to obtain a content optimization mathematical model.
The second processing unit 62 is configured to combine the fifth information with the content data of the digital exhibition hall according to the content optimization mathematical model, and obtain the personalized recommendation result through collaborative filtering processing.
The first optimizing unit 63 is configured to combine the personalized recommendation result with the original content data of the digital exhibition hall and to process the combination with the preset content generation mathematical model to obtain seventh information.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its protection scope, which is defined by the appended claims.

Claims (10)

1. A digital exhibition hall content optimization method based on multidimensional interaction data analysis, characterized by comprising the following steps:
acquiring first information and second information, wherein the first information comprises digital exhibition hall content data, and the second information comprises audience interaction data;
performing data integration processing according to the first information and the second information, and performing data association on the integrated data to obtain third information;
extracting text and image information according to the third information to obtain fourth information, wherein the fourth information comprises key content points;
carrying out emotion analysis processing according to the fourth information to obtain fifth information, wherein the fifth information comprises content points to be optimized;
analyzing the second information through a preset deep learning mathematical model to obtain sixth information, wherein the sixth information comprises the audience's interest pattern and behavior pattern, the interest pattern characterizing the specific fields or topics the audience is interested in within the digital exhibition hall, and the behavior pattern characterizing how the audience interacts and participates in the digital exhibition hall;
And constructing a content optimization mathematical model based on the sixth information, and taking the fifth information as an input value of the content optimization mathematical model to obtain seventh information, wherein the seventh information comprises optimized digital exhibition hall content.
2. The method of optimizing digital exhibition hall content according to claim 1, wherein performing data integration processing according to the first information and the second information, and performing data association on the integrated data to obtain third information, comprises:
performing data integration and format standardization processing according to the first information to obtain an exhibition hall content data set;
performing association rule mining according to the second information, and obtaining an event sequence by considering the interaction relationships, the visit timestamp intervals, and the personal attributes of the audience, wherein the event sequence comprises interaction patterns and activity sequences;
performing model construction processing according to the audience identifier, the exhibition hall content data set and the event sequence in the second information to obtain a data association model;
and carrying out association mapping processing according to the data association model to obtain third information, wherein the third information comprises the audience's personal information, the audience's interaction data, and the digital exhibition hall content data associated therewith.
3. The digital exhibition hall content optimization method according to claim 1, wherein performing text and image information extraction processing according to the third information to obtain fourth information comprises:
performing natural language processing according to the text data in the third information, and extracting keywords and analyzing the topics to obtain text information, wherein the text information comprises keywords and topic information;
performing object detection processing according to the image and video data of the third information, and performing scene recognition processing on the detected object to obtain image-video information, wherein the image-video information comprises key objects and scene information;
constructing a multi-dimensional data fusion model based on a preset multi-modal neural network, and carrying out fusion processing on the text information and the image-video information according to the multi-dimensional data fusion model to obtain comprehensive representation;
modeling the topics in the comprehensive representation based on a preset latent Dirichlet allocation mathematical model, and obtaining fourth information by identifying and screening the potential topic structures in the comprehensive representation.
4. The digital exhibition hall content optimization method according to claim 3, wherein constructing the multi-dimensional data fusion model based on the preset multi-modal neural network comprises:
defining an improved network structure, wherein, for text data, a recurrent neural network is adopted to capture the time-series information and semantic associations in the text, and, for image-video data, a multi-scale convolutional neural network is introduced to extract visual features, taking into account the differences between different exhibits or scenes;
introducing an attention mechanism in the fusion layer to adaptively adjust the weights of the text and image-video features according to the audience's interaction data and personal attributes;
constructing a multi-modal loss function that simultaneously considers the text and image-video prediction tasks, so as to learn the relations among the various data more comprehensively;
adjusting the learning rate and regularization parameters of the network through cross-validation;
evaluating the performance of the multi-modal neural network using cross-validation techniques, with separate training and evaluation based on data from different exhibition halls and audience groups.
5. The digital exhibition hall content optimization method according to claim 4, wherein constructing the multi-modal loss function while considering the text and image-video prediction tasks comprises:
introducing a weighted cross-entropy loss function for the text task and adjusting the loss weights of the different categories according to their importance, wherein the loss function of the text task is:
$$L_{\text{text}} = -\frac{1}{N}\sum_{i=1}^{N} w_i \, y_i^{\text{text}} \log\left(\hat{y}_i^{\text{text}}\right)$$
wherein $L_{\text{text}}$ is the loss function of the text task; $N$ is the number of samples; $i$ is the index value of a sample; $w_i$ is the loss weight of the $i$-th sample; $y_i^{\text{text}}$ is the actual label of the $i$-th sample, representing the actual category of the sample; and $\hat{y}_i^{\text{text}}$ is the model prediction output for the $i$-th sample, representing the model's class prediction result for the sample.
6. The digital exhibition hall content optimization method according to claim 4, wherein constructing the multi-modal loss function while considering the text and image-video prediction tasks comprises:
for the image-video task, a structural similarity loss is introduced to measure the similarity between images and videos, wherein the loss function of the image-video task is:
$$L_{\text{image-video}} = \frac{1}{N}\sum_{i=1}^{N}\left[\lambda_1\left(y_i^{\text{image}} - \hat{y}_i^{\text{image}}\right)^2 + \lambda_2\left(1 - \mathrm{SSIM}\left(y_i^{\text{image}}, \hat{y}_i^{\text{image}}\right)\right)\right]$$
wherein $L_{\text{image-video}}$ is the loss function of the image-video task; $\lambda_1$ and $\lambda_2$ are the weights of the two loss terms, used to balance the contributions of the mean square error loss and the structural similarity loss; $N$ is the number of samples; $i$ is the index value of a sample; $y_i^{\text{image}}$ is the actual label of the $i$-th sample; $\hat{y}_i^{\text{image}}$ is the model prediction output for the $i$-th sample; and $\mathrm{SSIM}(\cdot,\cdot)$ is the structural similarity index, used to measure the similarity between images or videos.
7. The method of optimizing digital exhibition hall content according to claim 1, wherein performing emotion analysis processing according to the fourth information to obtain fifth information comprises:
performing K-means clustering processing according to the fourth information, grouping the content points in the fourth information into different clusters, and obtaining a clustering result by assigning content points with similar characteristics to the nearest clusters;
carrying out text emotion analysis on the text content of each cluster in the clustering results to obtain text emotion feedback results;
extracting image emotion characteristics from the image content of each cluster in the clustering result based on a preset convolutional neural network mathematical model to obtain an image emotion characteristic result;
fusing the text emotion feedback result and the image emotion feature result to obtain a comprehensive feedback result, and calculating a weighted average of the comprehensive feedback results in each cluster to obtain a comprehensive emotion score for each cluster;
and screening according to the comprehensive emotion score and a preset threshold value to obtain fifth information.
8. The digital exhibition hall content optimization method according to claim 1, wherein analyzing the second information by a preset deep learning mathematical model to obtain sixth information comprises:
performing feature engineering processing according to the audience behavior data and the personal information in the second information, and obtaining key feature data by extracting interaction frequency information, interaction duration information and access time;
Performing model construction and optimization processing based on a preset deep learning mathematical model and the key feature data to obtain an audience interest-behavior prediction model;
predicting audience behavior data and personal information according to the audience interest-behavior prediction model to obtain prediction results, wherein the prediction results comprise interest prediction results and behavior prediction results;
using a multi-layer perceptron according to the interest prediction result to obtain the interest pattern by mapping the audience's behavior and interaction data to different interest categories;
and performing association rule mining processing according to the behavior prediction result to obtain the behavior pattern.
9. The digital exhibition hall content optimization method according to claim 1, wherein constructing a content optimization mathematical model based on the sixth information, and obtaining seventh information by using the fifth information as an input value of the content optimization mathematical model, comprises:
performing model construction and parameter adjustment processing according to the sixth information and a preset recurrent neural network mathematical model to obtain the content optimization mathematical model;
combining the fifth information with the content data of the digital exhibition hall according to the content optimization mathematical model, and obtaining a personalized recommendation result through collaborative filtering processing;
and combining the personalized recommendation result with the original content data of the digital exhibition hall and processing the combination with a preset content generation mathematical model to obtain seventh information.
10. A digital exhibition hall content optimization device based on multidimensional interaction data analysis, characterized by comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring first information and second information, the first information comprises digital exhibition hall content data, and the second information comprises audience interaction data;
the integration module is used for carrying out data integration processing according to the first information and the second information and carrying out data association on the integrated data to obtain third information;
the extraction module is used for extracting text and image information according to the third information to obtain fourth information, wherein the fourth information comprises key content points;
the analysis module is used for carrying out emotion analysis processing according to the fourth information to obtain fifth information, wherein the fifth information comprises content points needing to be optimized;
the construction module, configured to perform model construction according to the second information and a preset deep learning mathematical model to obtain sixth information, wherein the sixth information comprises the audience's interest pattern and behavior pattern;
And the optimizing module is used for constructing a content optimizing mathematical model based on the sixth information, and taking the fifth information as an input value of the content optimizing mathematical model to obtain seventh information, wherein the seventh information comprises optimized digital exhibition hall content.
CN202311259824.6A 2023-09-26 2023-09-26 Digital exhibition hall content optimization method and device based on multidimensional interaction data analysis Pending CN117331460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311259824.6A CN117331460A (en) 2023-09-26 2023-09-26 Digital exhibition hall content optimization method and device based on multidimensional interaction data analysis

Publications (1)

Publication Number Publication Date
CN117331460A true CN117331460A (en) 2024-01-02

Family

ID=89292458



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190337157A1 (en) * 2016-12-31 2019-11-07 Huawei Technologies Co., Ltd. Robot, server, and human-machine interaction method
CN115577161A (en) * 2022-10-14 2023-01-06 徐州达希能源技术有限公司 Multi-mode emotion analysis model fusing emotion resources
CN115730608A (en) * 2022-11-29 2023-03-03 华中师范大学 Learner online communication information analysis method and system
CN116629925A (en) * 2023-04-23 2023-08-22 北京国联视讯信息技术股份有限公司 Digital exhibition method and system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117764372A (en) * 2024-02-20 2024-03-26 山东铁路投资控股集团有限公司 Method and system for dynamically designing and optimizing business form flow

Similar Documents

Publication Publication Date Title
CN111444428B (en) Information recommendation method and device based on artificial intelligence, electronic equipment and storage medium
Farnadi et al. User profiling through deep multimodal fusion
Gaw Algorithmic logics and the construction of cultural taste of the Netflix Recommender System
Xu et al. Hierarchical affective content analysis in arousal and valence dimensions
JP2009514075A (en) How to provide users with selected content items
JPWO2007043679A1 (en) Information processing apparatus and program
CN113395578B (en) Method, device, equipment and storage medium for extracting video theme text
Somandepalli et al. Computational media intelligence: Human-centered machine analysis of media
Maybury Multimedia information extraction: Advances in video, audio, and imagery analysis for search, data mining, surveillance and authoring
CN117331460A (en) Digital exhibition hall content optimization method and device based on multidimensional interaction data analysis
US20240078278A1 (en) System and method for topological representation of commentary
JP5367872B2 (en) How to provide users with selected content items
Wong et al. Compute to tell the tale: Goal-driven narrative generation
Wu et al. Toward predicting active participants in tweet streams: A case study on two civil rights events
Li et al. Are users attracted by playlist titles and covers? Understanding playlist selection behavior on a music streaming platform
CN116010696A (en) News recommendation method, system and medium integrating knowledge graph and long-term interest of user
Zemaityte et al. Quantifying the global film festival circuit: Networks, diversity, and public value creation
Quadrana Algorithms for sequence-aware recommender systems
Chang et al. Report of 2017 NSF workshop on multimedia challenges, opportunities and research roadmaps
Casillo et al. The Role of AI in Improving Interaction With Cultural Heritage: An Overview
Dal Mas Layered ontological image for intelligent interaction to extend user capabilities on multimedia systems in a folksonomy driven environment
Dale The CogMedia Project: Open data and tools for linking cognitive science and mass media
CN117435752B (en) Information collection and analysis method and system based on big data
Toppano et al. Semiotic annotation of narrative video commercials: bridging the gap between artifacts and ontologies
Bürger A model of relevance for reuse-driven media retrieval

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination