CN115544239A - Deep learning model-based layout preference prediction method - Google Patents

Deep learning model-based layout preference prediction method

Info

Publication number
CN115544239A
Authority
CN
China
Prior art keywords
layout
model
preference
network
graph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211212657.5A
Other languages
Chinese (zh)
Inventor
吴向阳
刘小芝
金征雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ruicheng Information Technology Co ltd
Hangzhou Dianzi University
Original Assignee
Hangzhou Ruicheng Information Technology Co ltd
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ruicheng Information Technology Co., Ltd. and Hangzhou Dianzi University
Priority to CN202211212657.5A
Publication of CN115544239A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/335Filtering based on additional data, e.g. user or group profiles

Abstract

Human preference among the layouts generated by different layout algorithms reflects not only the complexity of the human visual perception and cognitive system, but also the diversity of graph topologies. To analyze human layout preference systematically, the invention discloses a layout preference prediction method based on a graph neural network. In the feature extraction stage, a local feature network extracts the neighborhood information of each node, and multilayer convolution and pooling operations analyze the topological structure and node information of the layout to obtain its global features; considering the importance of edge features in the layout, the invention also fuses edge information into the convolution process through an edge feature network. In the prediction stage, the global feature vectors output by the two networks are spliced and input into a Siamese network, which analyzes the difference between the two vectors to predict the layout preference.

Description

Deep learning model-based layout preference prediction method
Technical Field
The invention relates to the field of graph layout evaluation, and in particular to a graph layout preference prediction method based on deep learning (a graph neural network).
Background
With the development of information technology and the continuous emergence of various social networks, complex network analysis has become a popular research topic in recent decades, and the graph is a common representation for encoding such network data. Complex relationships are understood by mining and analyzing the graph. A problem, however, is that graphs are generally stored in numerical form, which greatly limits the understanding of the data. Graph visualization can better show the relationships and structure of the data, so that people can understand the relationships among the data more deeply.
A node-link diagram is the most common form of graph visualization, in which nodes are drawn as points and edges are rendered as line segments. Over the past decades, various algorithms for drawing node-link diagrams have been proposed, mainly including force-directed layout, dimensionality-reduction layout and multi-level layout. A good graph layout can show a large amount of data in a limited space and also reduce the difficulty of understanding the network structure. Since the quality of the visualization varies from one layout algorithm to another, how to evaluate a layout is an important question.
Generally, the quality of a graph layout is evaluated from an aesthetic point of view, i.e. the graph drawn by the layout algorithm should follow criteria such as: a minimum number of edge crossings, adjacent nodes placed close together in space, no overlapping nodes, a minimum-angle (angular resolution) index, a minimum stress index, and so on. Through the constraints of different aesthetic indexes, different types of layouts can be obtained to meet different requirements of users.
The quality of a graph layout can also be judged from human preference. A layout algorithm that matches people's aesthetic sense can greatly improve the user's visual experience. To design a layout algorithm that is aesthetically pleasing to the general public, one must know which layouts people like and why they prefer that layout style, and this is the problem to be solved. The complexity of the human visual perception and cognitive system makes the problem challenging. Many past experiments have shown that human preferences for graph layouts are related to indexes such as edge crossing and stress energy, but few works systematically abstract these preferences and apply them elsewhere; the development and application of neural networks now make it possible to study human preference learning.
Disclosure of Invention
In order to predict human preference for graph layouts, the present invention proposes a framework based on deep learning. Based on a graph neural network and a Siamese model, two feature extraction network structures are designed so that the structural information of a graph layout can be effectively extracted and a correct layout-to-preference mapping function can be fitted.
In order to solve the technical problems, the technical scheme of the invention is as follows: a layout preference prediction method based on a deep learning model comprises the following steps:
S1, acquiring a training data set, including directly collecting the public Rome graph data set, sampling large networks with different algorithms to generate a Sample graph data set, and generating layout data for every graph with different layout algorithms;
S2, preprocessing the data, including normalization of the layout data and labeling of the layouts; labeling all generated layouts with corresponding labels according to the layout preferences of participants and the aesthetic indexes of the layouts, and standardizing all layout data;
S3, building a neural network model: designing two feature extraction network structures based on the graph neural network and the Siamese model, and building a preference prediction network from the two feature extraction network structures;
S4, constructing the neural network model loss: minimizing the difference between the model's predicted label and the human preference label, and optimizing the parameters of the neural network model;
S5, training and optimizing the neural network model: pre-training the model on index-labeled samples, transferring it to preference-labeled samples for fine-tuning, and optimizing the model parameters with an optimizer;
and S6, saving the model and the model parameters when training reaches the maximum number of iterations or the objective function becomes stable.
Preferably, in step S1, graphs with 50 to 100 nodes are selected from the public graph data set Rome as the first training data set;
large networks are sampled to generate Sample graphs with 100 to 1000 nodes as the second data set;
the layout algorithms used include: FR, Stress Majorization, SFDP, FA2 and PivotMDS.
Preferably, in step S2, a user study is adopted: 30 participants are selected to rate the different layouts of 500 graphs in each data set on a scale of 1 to 5, and preference labels are determined according to the final scores;
6 aesthetic indexes are selected and weighted, the index scores of the remaining graph layouts in each data set are calculated, and the corresponding index labels are determined;
all graph layout data are standardized, including centering and normalization.
Preferably, the preference prediction network in step S3 includes: a feature extraction network and a Siamese network;
the feature extraction network includes: an edge feature extraction network and a local feature extraction network;
the feature extraction network encodes a pair of graph layouts, each with N×2 node features and an N×N adjacency matrix, and inputs the encodings into the Siamese network, which outputs the human preference value for the input layouts;
if the preference value is less than 0.5, the first layout is preferred, otherwise the second layout is preferred.
Preferably, in step S4, the loss function mainly measures the difference between the predicted value and the layout label, and the cross-entropy loss function is selected as the objective function of the model.
Preferably, in step S5, the graph layouts carrying index labels are used to train a pre-trained model, and on the basis of the pre-trained model, the model is fine-tuned with the training data carrying preference labels.
Preferably, in step S6, the neural network model is trained for 100 epochs, the batch size is 32, the optimizer is Adam with a learning rate of 0.001, and the activation function of the neural network model is the ReLU function.
The invention has the following characteristics and beneficial effects: with the above technical scheme, the method takes the node coordinates of a pair of graph layouts as input and predicts the human preference between them; the process is divided into a feature extraction stage and a prediction stage. In the feature extraction stage, a local feature network extracts the neighborhood information of each node, and multilayer convolution and pooling operations analyze the topological structure and node information of the layout to obtain its global features; considering the importance of edge features in the layout, the invention also fuses edge information into the convolution process through an edge feature network. In the prediction stage, the global feature vectors output by the two networks are spliced and input into the Siamese network, which analyzes the difference between the two vectors to predict the layout preference and finally outputs a preference value between 0 and 1. If the preference value is less than 0.5, the first layout is preferred, otherwise the second one is preferred.
Drawings
FIG. 1 shows steps of layout preference prediction according to an embodiment of the present invention.
Fig. 2 is a structure of a neural network model in an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further specifically described below by way of specific examples in conjunction with the accompanying drawings.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
The invention provides a layout preference prediction method based on a deep learning model, which comprises the following steps as shown in figure 1:
S1, acquiring a training data set, including directly collecting the public Rome graph data set, sampling large networks with different algorithms to generate a Sample graph data set, and generating layout data for every graph with different layout algorithms;
S2, preprocessing the data, mainly including normalization of the layout data and labeling of the layouts: labeling all generated layouts with corresponding labels according to the layout preferences of participants and the aesthetic indexes of the layouts, and standardizing all layout data;
S3, building a neural network model: designing two feature extraction network structures based on the graph neural network and the Siamese model, and building a preference prediction network from the two feature extraction network structures;
S4, constructing the neural network model loss: minimizing the difference between the model's predicted label and the human preference label, and optimizing the parameters of the neural network model;
S5, training and optimizing the neural network model: pre-training the model on index-labeled samples, transferring it to preference-labeled samples for fine-tuning, and optimizing the model parameters with an optimizer;
and S6, saving the model and the model parameters when training reaches the maximum number of iterations or the objective function becomes stable.
With the above technical scheme, the method takes the node coordinates of a pair of graph layouts as input and predicts the human preference between them; the process is divided into a feature extraction stage and a prediction stage. In the feature extraction stage, a local feature network extracts the neighborhood information of each node, and multilayer convolution and pooling operations analyze the topological structure and node information of the layout to obtain its global features; considering the importance of edge features in the layout, the invention also fuses edge information into the convolution process through an edge feature network. In the prediction stage, the global feature vectors output by the two networks are spliced and input into the Siamese network, which analyzes the difference between the two vectors and finally outputs a preference value between 0 and 1. If the preference value is less than 0.5, the first layout is preferred, otherwise the second one is preferred.
Specifically, in step S2, the two collected data sets Rome and Sample are labeled using two methods: human preference labeling and aesthetic index labeling.
Human preference labeling obtains labels through a user study. In the experiment, we selected 30 participants to score the respective layouts of 500 graphs in each of the two data sets. The score ranges from 1 to 5, where 1 represents no preference for the layout and 5 represents the strongest preference. The scores obtained by each layout are then summed to give its final score. Finally, a pair of layouts is chosen for each graph and labeled according to the obtained scores.
Aesthetic index labeling obtains labels from the aesthetic indexes of the layout itself. We selected 6 aesthetic indexes, including an edge crossing index (Crossing), a stress index (Stress), a node occlusion index (NodeOcclusion), an angular resolution index (MinimumAngle), an edge length index (EdgeLengthVariance) and a neighborhood preservation index (NeighborPreservation). Different base weights are set according to the influence of each index on layout preference. Past experiments show that human layout preference is strongly correlated with the stress index and the edge crossing index, so their weights are set slightly larger. Because the value distributions of the indexes differ, each index is computed over all layouts and averaged, and the base weight is then divided by the corresponding average to obtain the final weight. We calculate these scores for the remaining layouts in both data sets and choose a pair of layouts for each graph to be labeled accordingly.
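The weight normalization described above, dividing each base weight by the mean value of its index over all layouts, can be sketched as follows. The function name and array shapes are illustrative only and are not taken from the patent.

```python
import numpy as np

def final_index_weights(base_weights, index_scores):
    # base_weights: length-6 array of hand-set base weights (stress and edge
    # crossing slightly larger); index_scores: [num_layouts, 6] raw index values
    index_scores = np.asarray(index_scores, dtype=float)
    return np.asarray(base_weights, dtype=float) / index_scores.mean(axis=0)
```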
In addition, all graph layout data are standardized and scaled into the [-1, 1] space to obtain normalized layouts.
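A minimal sketch of this standardization step (centering followed by scaling into the [-1, 1] square) is given below; the exact centering and scaling convention used by the inventors is not spelled out, so this is only one plausible implementation.

```python
import numpy as np

def normalize_layout(coords):
    # coords: [N, 2] node coordinates of one layout
    coords = coords - coords.mean(axis=0)            # centering at the origin
    scale = np.abs(coords).max()
    return coords / scale if scale > 0 else coords   # scale into the [-1, 1] square
```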
In a further configuration of the present invention, as shown in FIG. 2, the preference prediction network in step S3 includes a feature extraction network and a prediction network. The feature extraction network is divided into an edge feature extraction network and a local extraction network; it encodes a pair of layouts, each with N×2 node features and an N×N adjacency matrix, to obtain global feature vectors of size 512 that fuse node information, edge information and the topological structure, and then splices them. The prediction network is composed of the Siamese model; it comprehensively analyzes and maps the global features generated from the input pair and predicts which layout of the pair humans prefer.
Specifically, the feature extraction network takes a pair of layouts with N×2 node features and an N×N adjacency matrix as input, and outputs a global feature vector of size 512 through the edge feature extraction network and the local extraction network.
As shown in FIG. 2, the edge feature extraction network aggregates and updates edge features over the neighborhood of each node through a three-layer edge convolution network, fuses and splices the output vectors of each convolution layer, inputs them into a hierarchical pooling layer that removes a portion of the nodes to further extract important features, and finally obtains the final global feature vector through a readout layer.
In detail, the edge convolution network uses an aggregation function to aggregate the information of neighbor nodes to obtain the embedded vector of the target node. In order to aggregate the neighbor nodes in a differentiated way, the edge convolution network generates a weight for each neighbor from its corresponding edge feature by means of a filter network. The entire edge convolution layer can be represented as:

x_i^{t+1} = \Theta x_i^{t} + \sum_{j \in \mathcal{N}(i)} w_{ij}\, x_j^{t}, \qquad w_{ij} = \Phi(e_{ij})

where x_i^{t} is the feature of node i at layer t, \Theta \in \mathbb{R}^{d_{t+1} \times d_t} is a learnable parameter that maps node features, d_t is the node feature length of the input and d_{t+1} the node feature length of the output, and \Phi is the filter network, implemented here with multi-layer perceptrons (MLPs), which takes the edge feature e_{ij} as input and outputs a weight matrix w_{ij} \in \mathbb{R}^{d_{t+1} \times d_t}. The edge feature e_{ij} of node i and node j is defined as:

e_{ij} = x_i \oplus x_j \oplus \lVert x_i - x_j \rVert

where \oplus is the splicing (concatenation) operation and \lVert x_i - x_j \rVert is the Euclidean distance between node i and node j.
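As a concrete illustration, the edge-conditioned convolution as reconstructed above could be sketched in PyTorch (the framework named later in the description) roughly as follows. The class name, the hidden width of the filter MLP, and the use of a plain sum as the aggregation are assumptions, not details taken from the patent.

```python
import torch
import torch.nn as nn

class EdgeConvLayer(nn.Module):
    def __init__(self, d_in, d_out, d_edge, d_hidden=64):
        super().__init__()
        self.theta = nn.Linear(d_in, d_out, bias=False)        # Theta: maps node features
        self.filter_net = nn.Sequential(                       # Phi: MLP producing w_ij from e_ij
            nn.Linear(d_edge, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_in * d_out))
        self.d_in, self.d_out = d_in, d_out

    def forward(self, x, edge_index, edge_attr):
        # x: [N, d_in]; edge_index: [2, E] (source j, target i); edge_attr: [E, d_edge]
        src, dst = edge_index
        w = self.filter_net(edge_attr).view(-1, self.d_out, self.d_in)   # w_ij
        msg = torch.bmm(w, x[src].unsqueeze(-1)).squeeze(-1)             # w_ij x_j
        out = self.theta(x)                                              # Theta x_i
        return out.index_add(0, dst, msg)                                # sum over neighbors j
```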
After the three-layer edge convolution network, the output vectors of the three convolution layers are fused and spliced, which can be expressed as:

h = x^{(1)} \oplus x^{(2)} \oplus x^{(3)}
After obtaining the resulting hidden vector h, it is input into a hierarchical pooling layer; the invention uses SAGPool, which considers both the node features and the topological structure of the graph to obtain a hierarchical representation. SAGPool obtains a score vector through a graph convolution network and uses a self-attention mechanism to decide which nodes to keep. The process can be expressed as follows:

Z = \sigma(\mathrm{GCN}(h, A))
idx = \mathrm{top\text{-}rank}(Z, \lceil \kappa N \rceil)
Z_{mask} = Z_{idx}

where \sigma denotes an activation function, the top-rank function keeps the \lceil \kappa N \rceil most important entries of Z as retained nodes, idx denotes the indices of the retained nodes, and Z_{mask} is the final output.
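A rough sketch of this self-attention pooling step is shown below, with a one-layer dense GCN (adjacency times a linear projection) as the scorer and a keep ratio kappa; these details and the gating of kept nodes by their scores follow the SAGPool paper and are assumptions rather than text from the patent.

```python
import torch
import torch.nn as nn

class SelfAttentionPool(nn.Module):
    def __init__(self, d, kappa=0.5):
        super().__init__()
        self.score = nn.Linear(d, 1, bias=False)   # stand-in for the scoring GCN
        self.kappa = kappa

    def forward(self, h, adj):
        # h: [N, d] node features; adj: [N, N] normalized adjacency matrix
        z = torch.tanh(adj @ self.score(h)).squeeze(-1)   # Z = sigma(GCN(h, A))
        k = max(1, int(self.kappa * h.size(0)))
        idx = torch.topk(z, k).indices                    # top-rank(Z, ceil(kappa * N))
        h_kept = h[idx] * z[idx].unsqueeze(-1)            # gate kept nodes by their scores
        return h_kept, adj[idx][:, idx], idx
```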
Finally, we obtain the final global feature vector Z_edge through the readout layer. The present invention uses a global pooling layer because it operates on all nodes of a graph and generates an overall embedding of fixed length. Graph data sets with various structures and different numbers of nodes can thus be trained directly, which facilitates subsequent graph classification tasks and other further work. The invention applies average pooling and maximum pooling operations and splices the two results as input to the subsequent tasks.
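The readout described here (average pooling concatenated with max pooling over all nodes) is small enough to show directly; the order of the two pooled halves in the concatenation is an assumption.

```python
import torch

def readout(h):
    # h: [N, d] node embeddings of one graph -> fixed-length graph vector of size 2d
    return torch.cat([h.mean(dim=0), h.max(dim=0).values], dim=0)
```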
As shown in FIG. 2, the local extraction network first obtains the feature vector of each node through the direction-distance extraction module; then, similarly, the outputs of a three-layer self-attention graph convolution network are fused and spliced and input in turn into the hierarchical pooling layer and the readout layer, which outputs the global feature vector.
In detail, the direction-distance extraction module makes up for the fact that node coordinates used as initial features are easily affected by operations such as rotation, translation and scaling. The invention proposes to use the Euclidean distance and the direction between nodes as the initial node features. The Euclidean distance is computed from the node coordinates, and the direction is obtained by dividing the relative position of the two nodes by their Euclidean distance. Because the data set contains graphs with various structures and different numbers of nodes, if the distances and directions to all other nodes were used as node features, the feature length would differ from graph to graph and the graph neural network could not be trained on them jointly.
Therefore, the invention adopts a neighbor sampling method to sample the neighbors of each node. Based on experiments, the number of samples per node is set to 30; for nodes with fewer than 30 neighbors, the method samples from second-order or higher-order neighbors. The sampled nodes are then sorted according to the following rule: first by the graph-theoretic distance to the central node, with closer neighbors having higher priority; if the graph-theoretic distances are equal, the priority is set by the Euclidean distance between the nodes, again with closer nodes having higher priority.
After sorting, the feature vector of the central node is initialized from its neighbor nodes: it is formed by splicing the graph-theoretic distance, the Euclidean distance and the direction of each sampled neighbor, so that the feature length of every node of every graph is 120. The formula is as follows:

x_i = \Delta_{j \in \mathcal{N}_s(i)} \left( d_{ij} \oplus eu_{ij} \oplus di_{ij} \right)

where node j belongs to the set of sampled neighbors \mathcal{N}_s(i) of node i, d_{ij} denotes the graph-theoretic distance, eu_{ij} the Euclidean distance and di_{ij} the direction, and \Delta splices the features of all sampled neighbors. After the initial features of the graph are obtained, the features are further extracted through graph attention network convolutions.
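A sketch of this feature construction follows, using networkx for the graph-theoretic distances. The 120-dimensional length arises from 30 sampled neighbors times 4 values (graph distance, Euclidean distance, and the two direction components). The padding rule for graphs whose reachable neighborhood is smaller than 30 is an assumption; the patent only states that higher-order neighbors are sampled.

```python
import numpy as np
import networkx as nx

def direction_distance_features(G, pos, k=30):
    # pos: dict mapping node -> np.array([x, y]); returns [N, 4k] (4 * 30 = 120)
    feats = []
    for i in G.nodes():
        hops = nx.single_source_shortest_path_length(G, i)    # graph-theoretic distances
        cand = [j for j in hops if j != i]
        # sort by graph distance first, then by Euclidean distance (closer = higher priority)
        cand.sort(key=lambda j: (hops[j], np.linalg.norm(pos[j] - pos[i])))
        rows = []
        for j in cand[:k]:
            delta = pos[j] - pos[i]
            eu = float(np.linalg.norm(delta)) + 1e-12
            rows.append([hops[j], eu, delta[0] / eu, delta[1] / eu])   # d_ij, eu_ij, di_ij
        while len(rows) < k:                                   # pad small components (assumption)
            rows.append([0.0, 0.0, 0.0, 0.0])
        feats.append(np.concatenate(rows))
    return np.stack(feats)
```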
In detail, in order to aggregate the neighbor nodes in a differentiated way, the graph attention network also adopts a self-attention mechanism to calculate the attention coefficients between the central node and its neighbors. The calculation is as follows:

\alpha_{ij} = \mathrm{softmax}_j\left( \mathrm{LeakyReLU}\left( a^{\top} \left[ W x_i \oplus W x_j \right] \right) \right)

where W is a weight matrix that converts the input features into high-level features, and a is the parameter of a single-layer feedforward network that outputs the correlation between nodes. The output feature of the node is then given by:

x_i' = \sigma\left( \sum_{j \in \mathcal{N}(i)} \alpha_{ij} W x_j \right)
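A minimal single-head, dense-adjacency version of these two formulas is sketched below. Using a sigmoid for the output nonlinearity \sigma and assuming the adjacency matrix contains self-loops are both assumptions made for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)
        self.a = nn.Linear(2 * d_out, 1, bias=False)

    def forward(self, x, adj):
        # x: [N, d_in]; adj: [N, N] 0/1 matrix, assumed to include self-loops
        h = self.W(x)                                           # W x_i
        N = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(N, N, -1),
                          h.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = F.leaky_relu(self.a(pair)).squeeze(-1)              # a^T [W x_i || W x_j]
        alpha = torch.softmax(e.masked_fill(adj == 0, float('-inf')), dim=-1)
        return torch.sigmoid(alpha @ h)                         # sigma(sum_j alpha_ij W x_j)
```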
Similarly, after the three-layer graph attention network, the output vectors of each convolution layer are fused and then input in turn into the hierarchical pooling layer and the global pooling layer, which output the global feature vector Z_local of the graph. Finally, the invention splices the global feature vectors Z_edge and Z_local output by the edge feature extraction network and the local extraction network, and inputs the concatenation into the prediction network.
Specifically, the prediction network is composed of the Siamese model. Its input is the pair of global features output by the feature extraction network for the two layouts to be compared; it comprehensively analyzes and maps them, and predicts which layout of the input pair humans prefer. The model can be expressed as:

\rho = \sigma\left( \delta\left( Z^{(1)} \ominus Z^{(2)} \right) \right)

where \ominus denotes the operation computing the difference of the global features of the two layouts, such as element-wise subtraction, \delta denotes a multi-layer perceptron (MLP), and \sigma is the Sigmoid activation function, which converts the pair of global features into a single value \rho between 0 and 1 that predicts the human preference. When \rho < 0.5, the model predicts that humans are more likely to prefer the first layout; otherwise, the second layout is preferred.
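The Siamese head itself is compact; a sketch under the element-wise subtraction variant is shown below. The depth and hidden width of the MLP \delta are assumptions.

```python
import torch
import torch.nn as nn

class PreferenceHead(nn.Module):
    def __init__(self, d, d_hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, 1))        # delta

    def forward(self, z1, z2):
        # z1, z2: [B, d] global features of the two layouts in each pair
        rho = torch.sigmoid(self.mlp(z1 - z2))   # rho < 0.5 -> first layout preferred
        return rho.squeeze(-1)
```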
In a further configuration of the present invention, the loss function is defined in step S4. The aim of the invention is to train the model to output classes consistent with the human preference labels. The parameters of the model are optimized by minimizing the difference between the predicted values and the human preference labels, measured by the binary cross-entropy loss function. Specifically, it can be expressed as:

L = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log \rho_i + (1 - y_i) \log(1 - \rho_i) \right]

where y_i and \rho_i are the true human preference label and the model prediction for the i-th sample, respectively.
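As a quick check, assuming illustrative prediction and label tensors, the objective as written matches PyTorch's built-in binary cross-entropy with mean reduction:

```python
import torch

rho = torch.tensor([0.9, 0.2, 0.7])   # model outputs after Sigmoid (illustrative values)
y = torch.tensor([1.0, 0.0, 1.0])     # human preference labels

manual = -(y * torch.log(rho) + (1 - y) * torch.log(1 - rho)).mean()
builtin = torch.nn.functional.binary_cross_entropy(rho, y)
assert torch.allclose(manual, builtin)
```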
In a further configuration of the present invention, in step S5, since training samples labeled with human preferences may be limited, the present invention employs transfer learning to reduce the cost of sample collection. As mentioned above, human preferences for graph layouts are associated with aesthetic indexes such as edge crossings and stress energy, and index-labeled samples are easy to obtain from these aesthetic indexes. The method therefore pre-trains the deep model with the index-labeled samples and then fine-tunes it with the samples labeled by human preference.
The neural network model is trained for 100 epochs with a batch size of 32, and the learning rate decays to 80% of its previous value every 30 epochs. The optimizer is Adam with a learning rate of 0.001; the whole preference prediction neural network model is implemented in PyTorch, and its activation function is the ReLU function.
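The optimizer and learning-rate schedule just described map directly onto standard PyTorch components; the sketch below uses a placeholder module in place of the full preference network, and the interpretation of the decay as a StepLR with gamma 0.8 every 30 epochs is an assumption consistent with the text.

```python
import torch

model = torch.nn.Linear(1024, 1)   # placeholder for the full preference network described above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.8)

for epoch in range(100):           # 100 epochs, batch size 32, lr *= 0.8 every 30 epochs
    # ... one pass over the training pairs (forward, BCE loss, backward, step) goes here ...
    scheduler.step()
```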
In order to demonstrate the effectiveness of the graph-neural-network-based human preference prediction model, the prediction ability of the model under different layout algorithms is analyzed experimentally. For the two collected data sets, Rome and Sample, this embodiment takes the index-labeled samples from each data set for pre-training, then fine-tunes the model on a portion of the preference-labeled samples, and finally tests on the remaining preference-labeled samples. The embodiment uses five-fold cross-validation to randomly split the data set.
Table 1. Comparison of preference prediction results (reproduced as an image in the original publication)
Table 1 shows the prediction accuracy of the model on the layout preferences of the Rome and Sample data sets; the accuracy exceeds 92%, which shows that the model can predict human layout preference well. Table 1 also shows, under pairwise comparison, the proportion in which humans prefer the layouts generated by each of the five layout algorithms. The analysis shows that the layouts generated by the Stress Majorization algorithm are the most popular and those of the PivotMDS algorithm the least popular. The proportions predicted by the model are also close to the proportions obtained from the user study, which again demonstrates the effectiveness of the model.
As shown in Table 2, the second to fourth rows give the prediction accuracy of the model without the edge feature extraction network, without the local feature extraction network, and with a feature difference module that uses concatenation instead of subtraction, respectively. The results show that the edge feature extraction network and the local feature extraction network strengthen the model's feature extraction from the layout, and that the subtraction-based feature difference module better learns the difference between the two vectors.
Table 2. Ablation analysis

Model            Rome              Sample
Best             0.9534 ± 0.0097   0.9256 ± 0.0144
NoEdgeNetwork    0.9350 ± 0.0105   0.9023 ± 0.0123
NoLocalNetwork   0.9289 ± 0.0094   0.8922 ± 0.0164
ConcatDiff       0.9267 ± 0.0178   0.8801 ± 0.0120
The embodiments of the present invention have been described in detail with reference to the accompanying drawings and tables, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments, including their components, without departing from the principles and spirit of the invention, and they still fall within the scope of the invention.

Claims (7)

1. A layout preference prediction method based on a deep learning model is characterized by comprising the following steps:
S1, acquiring a training data set, including directly collecting the public Rome graph data set, sampling large networks with different algorithms to generate a Sample graph data set, and generating layout data for every graph with different layout algorithms;
S2, preprocessing the data, including normalization of the layout data and labeling of the layouts; labeling all generated layouts with corresponding labels according to the layout preferences of participants and the aesthetic indexes of the layouts, and standardizing all layout data;
S3, building a neural network model: designing two feature extraction network structures based on the graph neural network and the Siamese model, and building a preference prediction network from the two feature extraction network structures;
S4, constructing the neural network model loss: minimizing the difference between the model's predicted label and the human preference label, and optimizing the parameters of the neural network model;
S5, training and optimizing the neural network model: pre-training the model on index-labeled samples, transferring it to preference-labeled samples for fine-tuning, and optimizing the model parameters with an optimizer;
and S6, saving the model and the model parameters when training reaches the maximum number of iterations or the objective function becomes stable.
2. The layout preference prediction method based on the deep learning model as claimed in claim 1, characterized in that in step S1, graphs with 50 to 100 nodes are selected from the public graph data set Rome as the first training data set;
large networks are sampled to generate Sample graphs with 100 to 1000 nodes as the second data set;
the layout algorithms used include: FR, Stress Majorization, SFDP, FA2 and PivotMDS.
3. The deep learning model-based layout preference prediction method according to claim 1, wherein in step S2, a user study is adopted: 30 participants are selected to rate the different layouts of 500 graphs in each data set on a scale of 1 to 5, and preference labels are determined according to the final scores;
6 aesthetic indexes are selected and weighted, the index scores of the remaining graph layouts in each data set are calculated, and the corresponding index labels are determined;
all graph layout data are standardized, including centering and normalization.
4. The deep learning model-based layout preference prediction method according to claim 1, wherein the preference prediction network in step S3 comprises: a feature extraction network and a Siamese network;
the feature extraction network comprises: an edge feature extraction network and a local feature extraction network;
the feature extraction network encodes a pair of graph layouts, each with N×2 node features and an N×N adjacency matrix, and inputs the encodings into the Siamese network, which outputs the human preference value for the input layouts;
if the preference value is less than 0.5, the first layout is preferred, otherwise the second layout is preferred.
5. The deep learning model-based layout preference prediction method according to claim 1, wherein in step S4, the loss function mainly measures the difference between the predicted value and the layout label, and the cross-entropy loss function is selected as the objective function of the model.
6. The deep learning model-based layout preference prediction method according to claim 1, wherein in step S5, the graph layouts carrying index labels are used to train a pre-trained model, and on the basis of the pre-trained model, the model is fine-tuned with the training data carrying preference labels.
7. The method of claim 1, wherein in step S6, the neural network model is trained for 100 epochs, the batch size is 32, the optimizer is Adam with a learning rate of 0.001, and the activation function of the neural network model is the ReLU function.
CN202211212657.5A 2022-09-30 2022-09-30 Deep learning model-based layout preference prediction method Pending CN115544239A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211212657.5A CN115544239A (en) 2022-09-30 2022-09-30 Deep learning model-based layout preference prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211212657.5A CN115544239A (en) 2022-09-30 2022-09-30 Deep learning model-based layout preference prediction method

Publications (1)

Publication Number Publication Date
CN115544239A true CN115544239A (en) 2022-12-30

Family

ID=84730754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211212657.5A Pending CN115544239A (en) 2022-09-30 2022-09-30 Deep learning model-based layout preference prediction method

Country Status (1)

Country Link
CN (1) CN115544239A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116665865A (en) * 2023-06-13 2023-08-29 爱汇葆力(广州)数据科技有限公司 Information intelligent management method and system for implementing accompanying staff based on big data
CN116665865B (en) * 2023-06-13 2023-12-26 爱汇葆力(广州)数据科技有限公司 Information intelligent management method and system for implementing accompanying staff based on big data
CN117198406A (en) * 2023-09-21 2023-12-08 亦康(北京)医药科技有限公司 Feature screening method, system, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN107766894B (en) Remote sensing image natural language generation method based on attention mechanism and deep learning
CN110287960A (en) The detection recognition method of curve text in natural scene image
CN110827543A (en) Short-term traffic flow control method based on deep learning and spatio-temporal data fusion
CN115544239A (en) Deep learning model-based layout preference prediction method
CN111259786A (en) Pedestrian re-identification method based on synchronous enhancement of appearance and motion information of video
CN111008337B (en) Deep attention rumor identification method and device based on ternary characteristics
CN111931505A (en) Cross-language entity alignment method based on subgraph embedding
CN112381179A (en) Heterogeneous graph classification method based on double-layer attention mechanism
CN113807176B (en) Small sample video behavior recognition method based on multi-knowledge fusion
CN111949885A (en) Personalized recommendation method for scenic spots
CN109522953A (en) The method classified based on internet startup disk algorithm and CNN to graph structure data
CN115687760A (en) User learning interest label prediction method based on graph neural network
Zorkafli et al. Classification of cervical cancer using hybrid multi-layered perceptron network trained by genetic algorithm
CN115376317A (en) Traffic flow prediction method based on dynamic graph convolution and time sequence convolution network
CN113590971A (en) Interest point recommendation method and system based on brain-like space-time perception characterization
CN113297936A (en) Volleyball group behavior identification method based on local graph convolution network
CN117314006A (en) Intelligent data analysis method and system
CN112529025A (en) Data processing method and device
CN116680578A (en) Cross-modal model-based deep semantic understanding method
WO2018203551A1 (en) Signal retrieval device, method, and program
CN115019342A (en) Endangered animal target detection method based on class relation reasoning
Termritthikun et al. Neural architecture search and multi-objective evolutionary algorithms for anomaly detection
Hassan et al. Optimising deep learning by hyper-heuristic approach for classifying good quality images
CN113822291A (en) Image processing method, device, equipment and storage medium
CN111931416B (en) Hyper-parameter optimization method for graph representation learning combined with interpretability

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310018 no.1158, 2 Baiyang street, Qiantang New District, Hangzhou City, Zhejiang Province

Applicant after: HANGZHOU DIANZI University

Applicant after: Hangzhou Ruicheng Information Technology Co.,Ltd.

Address before: 310018 no.1158, 2 Baiyang street, Qiantang New District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU DIANZI University

Applicant before: Hangzhou Ruicheng Information Technology Co.,Ltd.