CN109471930B - Mood board interface design method for user emotion - Google Patents

Mood board interface design method for user emotion

Info

Publication number
CN109471930B
CN109471930B (application CN201811325806.2A)
Authority
CN
China
Prior art keywords
words
design
elements
analysis
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811325806.2A
Other languages
Chinese (zh)
Other versions
CN109471930A (en)
Inventor
杨程
杨洋
周宇梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University City College ZUCC
Original Assignee
Zhejiang University City College ZUCC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University City College ZUCC filed Critical Zhejiang University City College ZUCC
Priority to CN201811325806.2A
Publication of CN109471930A
Application granted
Publication of CN109471930B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour

Abstract

The invention relates to a user-emotion-oriented mood board interface design method, which comprises the following steps: 1) determining, by a text analysis method, the emotion semantic words that best represent the user's emotional needs as primitive words; 2) obtaining analysis words for visual mapping and action mapping from the primitive words by machine learning and text analysis methods; 3) collecting corresponding images for the analysis words from the Internet; 4) obtaining design elements such as textures, colors and interaction motion effects from the images by a cluster analysis method; 5) designing the colors, icons and composition of the interface according to the extracted design elements such as the color scheme and pattern elements. The invention has the beneficial effects that it comprehensively combines the mood board method, big data and text analysis technology, and several image data processing methods to help designers, especially inexperienced novices, meet users' emotional needs more quickly and accurately during app interface design, and to improve the efficiency of obtaining design elements.

Description

Mood board interface design method for user emotion
Technical Field
The invention relates to the technical field of interface design, and in particular to a mood board interface design method oriented to user emotion.
Background
The mood board design method is a process of associating fuzzy emotional vocabulary with images through semantic association and extracting design elements from those images for design. The method is divided into a vocabulary-concept extraction part and a design application part. The specific process of the existing mood board design method is: determine the primitive words (i.e., the design theme); obtain mapping vocabulary by association from the primitive words; classify the mapping vocabulary along the three directions of visual mapping, psychological mapping and physical mapping; collect corresponding images for the mapping vocabulary; and extract design elements from the images for design output.
Although the mood board method can stimulate creativity, applying it to interface design in practice exposes problems caused by its broad generality, which inevitably leave the interface design incomplete: human association sometimes runs into mental limits, and mapping words cannot easily be derived directly from the primitive emotional words. Meanwhile, the process of extracting design elements from an image may be too abstract for a beginner, who lacks concrete guidance steps. In addition, although current mood boards work well for visual design, they contribute little to interaction animation design. To address these problems, computer data processing methods can be introduced into the conventional design process to improve design efficiency.
Text analysis refers to the representation of text and the selection of its characteristic items; it is a basic problem of text mining and information retrieval, and it quantifies the characteristic words extracted from text in order to represent the text's information. Humans can judge from experience which strings are words and which are not; a computer needs text analysis techniques to do the same.
Cluster analysis refers to the analytical process of grouping a collection of physical or abstract objects into classes composed of similar objects; it is a fundamental human activity. Its goal is to gather data for classification on the basis of similarity. The technique is used to describe data, measure the similarity between different data sources, and classify data sources.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a user-emotion-oriented mood board interface design method that improves the accuracy with which the mood board method expresses user emotion and provides more detailed guidance, thereby improving design efficiency.
The user-emotion-oriented mood board interface design method comprises the following steps:
1) determining, by a text analysis method, the emotion semantic words that best represent the user's emotional needs as primitive words;
2) obtaining analysis words for visual mapping and action mapping from the primitive words by machine learning and text analysis methods;
3) collecting corresponding images for the analysis words from the Internet, and extracting the selected material to generate a mood board; static pictures serve as the visual-mapping part of the mood board for extracting visual design elements, while moving images serve as the action-mapping part for extracting interaction animation design elements;
4) obtaining design elements such as textures, colors and interaction motion effects from the images by a cluster analysis method;
5) designing the colors, icons and composition of the interface according to the extracted design elements such as the color scheme and pattern elements.
The specific process of the step 1) is as follows:
a. A large number of consumers' online comments on the APP are obtained through a web crawler system and sorted, and repeated and meaningless comments are removed.
b. Using the LDA (latent Dirichlet allocation) topic-extraction text analysis technique, the LDA module in the python package scikit-learn is selected to extract topics from the online comment data; adjectives are taken as the product emotion words of concern to users, and the three most frequently occurring adjectives are taken as the primitive words.
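A minimal sketch of this step, not the patent's own code: the toy comments, the number of topics and the use of get_feature_names_out (scikit-learn >= 1.0) are illustrative assumptions, and adjectives are assumed to have been kept beforehand by a POS tagger (e.g. jieba.posseg for Chinese).

# LDA topic extraction over cleaned online comments (illustrative sketch).
from collections import Counter

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "正品 经典 值钱 古董",          # hypothetical, already-tokenized comments
    "经典 名画 正品 拍卖",
    "值钱 御用 古董 经典",
]

# Bag-of-words representation of the comments.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(comments)

# Extract latent topics from the online-comment data.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# Inspect the top words of every topic, then take the three most frequent
# adjectives over the whole corpus as the primitive words.
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")

word_counts = Counter(dict(zip(terms, X.toarray().sum(axis=0))))
primitive_words = [w for w, _ in word_counts.most_common(3)]
print("primitive words:", primitive_words)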
The specific process of obtaining the analysis words of the visual mapping and the action mapping in the step 2) is as follows:
a. The corpus is input into a Word2vec model and the words are converted into vectors through training; the higher the association between semantic words, the greater the cosine similarity of their vectors (i.e., the larger the cosine value). The three vocabulary items most strongly associated with each primitive word are obtained by computing these cosine values. The input is a context of C words $x_1, x_2, \ldots, x_C$, each $x_c$ represented as a one-hot vector. The hidden-layer vector h takes the word vectors of all C input words from the input weight matrix W and averages them, as follows:
$h = \frac{1}{C} W^{\mathsf{T}} (x_1 + x_2 + \cdots + x_C)$
the process from the hidden layer to the output layer comprises the definition of an objective function and back-propagation training, and every row of the output weight matrix $W'$ still needs to be updated at each step:
${v'_{w_j}}^{(\mathrm{new})} = {v'_{w_j}}^{(\mathrm{old})} - \eta \, e_j \, h \qquad (j = 1, 2, \ldots, V)$
gradient of hidden layer neurons:
$EH = \frac{\partial E}{\partial h} = \sum_{j=1}^{V} e_j \, v'_{w_j}$
since h spreads the error evenly back over each input word, each training step updates the corresponding C rows of W:
${v_{w_{I,c}}}^{(\mathrm{new})} = {v_{w_{I,c}}}^{(\mathrm{old})} - \frac{1}{C} \, \eta \, EH^{\mathsf{T}} \qquad (c = 1, 2, \ldots, C)$
after the word vectors are obtained, the 3 word vectors with the largest cosine similarity to each primitive word vector are found; the corresponding words are those with the highest semantic relevance, and the cosine value is calculated as follows:
$\cos\theta = \frac{\sum_{i=1}^{n} a_i b_i}{\sqrt{\sum_{i=1}^{n} a_i^2} \, \sqrt{\sum_{i=1}^{n} b_i^2}}$
b. Several nouns and action nouns are selected as analysis words through the LDA text analysis module in the python package scikit-learn. The noun analysis words serve as the visual mapping of the primitive words, and the action-noun analysis words serve as the action mapping of the primitive words.
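A sketch of step 2a using gensim's Word2vec (CBOW) rather than the patent's own implementation: the tiny tokenized corpus is hypothetical, and the parameter names assume gensim >= 4.0 (older releases use size instead of vector_size).

# Train CBOW word vectors, then rank words by cosine similarity to each primitive word.
from gensim.models import Word2Vec

sentences = [
    ["正品", "古董", "拍卖"],    # hypothetical tokenized corpus
    ["经典", "名画", "御用"],
    ["值钱", "古董", "名画"],
]

model = Word2Vec(
    sentences,
    vector_size=100,   # dimensionality of the word vectors
    window=5,          # context window: the C surrounding words averaged by CBOW
    min_count=1,
    sg=0,              # sg=0 selects CBOW, matching the averaging scheme above
)

# For each primitive word, take the three words whose vectors have the
# highest cosine similarity as its analysis-word candidates.
for primitive in ["正品", "经典", "值钱"]:
    print(primitive, model.wv.most_similar(primitive, topn=3))

# Cosine similarity between two specific words, as in the formula above.
print(model.wv.similarity("经典", "古董"))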
The specific process of the step 4) is as follows:
a. Pattern extraction: the more frequently occurring visual image material in the mood board is analyzed algorithmically; a color K-means algorithm divides the source texture image into separate, unconnected image sub-block regions, and the connection relations between these regions are constructed so as to capture texture elements with similar appearance characteristics and generate the texture material (see the code sketch after this step);
the K-means algorithm divides N samples into K clusters C ═ C1,C2,C3…CKThe samples in the clusters have higher similarity, and the samples between the clusters have lower similarity; if V is { V ═ V1,V2,…Vk},
Class centers for K classes, where VKIs the number CKSample mean values in clusters, each cluster can be represented by a corresponding class prototype, and the K-means algorithm divides data by minimizing a function of the sum of squared errors in a class and a criterion, wherein the function is represented as follows:
$J(C) = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - V_k \rVert^2$
where $C_k$ contains all sample points whose distance to the k-th class center $V_k$ is smallest, described as follows:
$C_k = \{\, x_i : \lVert x_i - V_k \rVert \le \lVert x_i - V_j \rVert, \ \forall j \in \{1, 2, \ldots, K\} \,\}$
Design elements are then extracted from the texture material by simplification, deformation, stretching and similar operations, and used as the icon part of the interface design;
b. Color extraction: the RGB (red, green, blue) values of every image in the visual mood board are extracted and K-means cluster analysis is performed on each picture individually, yielding several characteristic colors that represent the overall color impression of the mood board's visual pictures. A color-matching network is then built among the characteristic colors: each characteristic color is a node, its frequency of occurrence is the node weight, and edges are created between nodes based on their co-occurrence frequency within the same picture, thereby forming a color-matching scheme;
c. Animation extraction: based on the video clips appearing in the action mapping, behavior and animation are expressed jointly through metaphor to build the corresponding emotional experience.
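A rough sketch of the pattern-extraction step 4a, assuming a color K-means over pixels as the way the source texture image is split into sub-block regions; the file name and K = 5 are illustrative assumptions, and Pillow, NumPy and scikit-learn are used in place of whatever implementation the patent relies on.

# Split a texture image into label regions by clustering pixel colors with K-means.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = np.asarray(Image.open("auction_hammer.jpg").convert("RGB"))  # hypothetical image
h, w, _ = img.shape
pixels = img.reshape(-1, 3).astype(float)

# Partition the N = h*w pixel samples into K color clusters (minimizing the
# within-class sum of squared errors J(C) given above).
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(h, w)

# Each label acts as a mask for one image sub-block region; connected regions of
# the same label carry texture elements with similar appearance.
for k in range(5):
    mask = (labels == k)
    region = np.zeros_like(img)
    region[mask] = img[mask]
    Image.fromarray(region).save(f"texture_region_{k}.png")
    print(f"cluster {k}: center color {kmeans.cluster_centers_[k].round(1)}, "
          f"{mask.sum()} pixels")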
The method has the advantage of comprehensively combining the mood board method, big data and text analysis technology, and several image data processing methods, helping designers, especially inexperienced novices, meet users' emotional needs more quickly and accurately during app interface design and improving the efficiency of obtaining design elements.
Drawings
FIG. 1 is a flow chart of the steps of the present invention.
FIG. 2 shows the acquisition of the case analysis words according to the present invention.
FIG. 3 shows the case design effect of the present invention.
Detailed Description
The present invention will be further described with reference to the following examples. The following examples are set forth merely to aid in the understanding of the invention. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
Example: the method is used to optimize a game e-commerce micro-auction app and comprises the following steps:
(1) determining primitive words
First, forums are crawled with the web crawler system Pyspider to gather the online comments that users on several channels have made about the micro-auction app; the comments are sorted, and repeated and meaningless comments are removed. Then, using the LDA (latent Dirichlet allocation) topic-extraction text analysis technique, the LDA module in the python package scikit-learn is selected to extract topics from the online comment data, and adjectives are taken as the product emotion words of concern to users; the three most frequently occurring adjectives are selected as primitive words, which in this case are 'genuine', 'classical' and 'valuable'.
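A minimal stand-in for the Pyspider crawl described above, written with requests and BeautifulSoup; the forum URL and the CSS selector for comment nodes are purely hypothetical placeholders, and only the clean-up logic (de-duplication, dropping very short comments) mirrors the text.

# Collect, de-duplicate and lightly filter forum comments (illustrative sketch).
import requests
from bs4 import BeautifulSoup

FORUM_URL = "https://example.com/weipai-app/reviews?page={page}"  # hypothetical

def fetch_comments(pages=3):
    seen, comments = set(), []
    for page in range(1, pages + 1):
        resp = requests.get(FORUM_URL.format(page=page), timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        for node in soup.select("div.comment-text"):   # hypothetical selector
            text = node.get_text(strip=True)
            # remove duplicates and meaningless (too short) comments
            if len(text) >= 4 and text not in seen:
                seen.add(text)
                comments.append(text)
    return comments

if __name__ == "__main__":
    reviews = fetch_comments()
    print(f"collected {len(reviews)} usable comments")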
(2) Determining analysis words
The corpus is input into the Word2vec model, and the words 'genuine', 'classical' and 'valuable' are converted into vectors through training; the higher the association between semantic words, the greater the cosine similarity of their vectors. The cosine values are therefore computed as required to find the vocabulary associated with the three primitive words. The input is a context of C words $x_1, x_2, \ldots, x_C$, each $x_c$ represented as a one-hot vector. The hidden-layer vector h takes the word vectors of all C input words from the input weight matrix W and averages them, as follows:
$h = \frac{1}{C} W^{\mathsf{T}} (x_1 + x_2 + \cdots + x_C)$
the process from the hidden layer to the output layer comprises the definition of an objective function and back-propagation training, and every row of the output weight matrix $W'$ still needs to be updated at each step:
${v'_{w_j}}^{(\mathrm{new})} = {v'_{w_j}}^{(\mathrm{old})} - \eta \, e_j \, h \qquad (j = 1, 2, \ldots, V)$
gradient of hidden layer neurons:
$EH = \frac{\partial E}{\partial h} = \sum_{j=1}^{V} e_j \, v'_{w_j}$
since h spreads the error evenly back over each input word, each training step updates the corresponding C rows of W:
${v_{w_{I,c}}}^{(\mathrm{new})} = {v_{w_{I,c}}}^{(\mathrm{old})} - \frac{1}{C} \, \eta \, EH^{\mathsf{T}} \qquad (c = 1, 2, \ldots, C)$
After the word vectors are obtained, the 3 word vectors with the highest cosine similarity to each primitive word vector are found; the corresponding words are those with the highest semantic association. The cosine value is calculated as follows:
$\cos\theta = \frac{\sum_{i=1}^{n} a_i b_i}{\sqrt{\sum_{i=1}^{n} a_i^2} \, \sqrt{\sum_{i=1}^{n} b_i^2}}$
Through training, the vector of the word 'classical' is obtained as (0.74, 0.18, 0.88, 0.44, 0.25); the three words with the highest cosine similarity are 'antique', 'famous calligraphy and painting' and 'royal', with cosine values of 0.9354813, 0.9306384 and 0.9281675 respectively, so the words most strongly associated with 'classical' are 'antique', 'famous calligraphy and painting' and 'royal'. Then several nouns and action nouns are selected as analysis words through the LDA text analysis module in the python package scikit-learn. The noun analysis words serve as the visual mapping of the primitive words, and the action-noun analysis words serve as the action mapping of the primitive words. The resulting analysis words are shown in FIG. 2.
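The cosine computation behind the figures quoted above, as a small sketch: the vector for 'classical' is the one given in the text, while the candidate vectors are made-up placeholders, so the printed values will not reproduce the quoted 0.9354813, 0.9306384 and 0.9281675 exactly.

# Rank candidate words by cosine similarity to the 'classical' vector.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

classical = [0.74, 0.18, 0.88, 0.44, 0.25]          # vector quoted in the text
candidates = {                                       # hypothetical word vectors
    "antique":            [0.70, 0.20, 0.80, 0.50, 0.30],
    "famous calligraphy": [0.65, 0.25, 0.85, 0.40, 0.20],
    "royal":              [0.80, 0.15, 0.75, 0.35, 0.30],
}

ranked = sorted(candidates.items(),
                key=lambda kv: cosine_similarity(classical, kv[1]),
                reverse=True)
for word, vec in ranked:
    print(f"{word}: {cosine_similarity(classical, vec):.7f}")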
(3) Visual mapping and action mapping
Pictures and moving images are searched for according to the obtained analysis words, making full use of the Internet to collect corresponding material; the selected pictures are then extracted to generate a mood board close to the auction industry. The visual-mapping part and the animation (action-mapping) part of the interface are produced separately according to the functions of the APP.
(4) Design element extraction
Design elements are extracted according to the design method set forth above. The auction-hammer material in the 'genuine' mood board is analyzed algorithmically: a color K-means algorithm divides the source texture image into separate, unconnected image sub-block regions, and the connection relations between them are constructed, so that the outline of the auction hammer is captured and used as the brand LOGO.
The K-means algorithm divides the N samples in the image into K clusters $C = \{C_1, C_2, C_3, \ldots, C_K\}$ such that intra-cluster samples are highly similar and inter-cluster samples are dissimilar. In this method K = 5 suffices for most images; $V = \{V_1, V_2, \ldots, V_5\}$ are the class centers of the 5 clusters, where $V_5$ is the sample mean of cluster $C_5$. Each cluster can be represented by its class prototype, and the K-means algorithm partitions the data by minimizing the within-class sum-of-squared-error criterion function, expressed as follows:
$J(C) = \sum_{k=1}^{5} \sum_{x_i \in C_k} \lVert x_i - V_k \rVert^2$
where $C_5$ contains all sample points whose distance to the 5th class center $V_5$ is smallest. The contour elements of the auction-hammer image are thus obtained by classifying the image pixels with the K-means algorithm.
With the same method, the yellowed rice-paper texture in the 'classical' mood board is used as the background to convey historical culture, patterns on antique vases are selected to generate motifs conveying a sense of nobility, and these texture materials are used as the icon part of the interface design.
The RGB (red, green, blue) values of every image in the visual mood board are extracted and K-means cluster analysis is performed on each picture individually, yielding characteristic colors such as deep red, black and beige that represent the overall color impression of the mood board's visual pictures. A color-matching network is then built among the characteristic colors: each characteristic color is a node, its frequency of occurrence is the node weight, and edges are created between nodes based on their co-occurrence frequency within the same picture; deep red and beige are thereby extracted as the main and auxiliary colors of the interface design, and deep red, the most frequent color, is used for the navigation color scheme (see the sketch below).
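A sketch of this color-extraction step, assuming a per-picture K-means palette followed by a simple co-occurrence count standing in for the color-matching network; the file names, K = 5 and the coarse color quantization are illustrative assumptions.

# Per-picture K-means palettes, then pick main/auxiliary colors from occurrence counts.
from collections import Counter
from itertools import combinations

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def palette(path, k=5):
    """Return the k characteristic RGB colors of one mood-board picture."""
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3).astype(float)
    centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels).cluster_centers_
    # quantize so that near-identical colors from different pictures merge
    return [tuple((c // 32 * 32).astype(int)) for c in centres]

pictures = ["hammer.jpg", "rice_paper.jpg", "antique_vase.jpg"]   # hypothetical files

node_weight = Counter()   # how often each characteristic color occurs
edge_weight = Counter()   # co-occurrence of color pairs within the same picture
for path in pictures:
    colours = palette(path)
    node_weight.update(colours)
    edge_weight.update(combinations(sorted(set(colours)), 2))

main, auxiliary = [c for c, _ in node_weight.most_common(2)]
print("main color:", main, "auxiliary color:", auxiliary)
print("strongest color pairing:", edge_weight.most_common(1))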
In the interaction animation design, a small treasure-chest-opening animation is designed for the random browsing page, and an auction-hammer interaction is designed for the button that confirms winning an item: when pressed, the button changes from its normal state to a state showing the auction hammer striking. Several interface design elements are thereby extracted.
(5) Design interface
The designer carries out the visual design and the interaction animation design according to the design elements extracted from the mood board; the visual effect of part of the redesigned pages is shown in FIG. 3. In many places the design elements obtained from the mood board are used in the design and matched to the product's themed functions.

Claims (3)

1. A mood board interface design method oriented to user emotion, characterized by comprising the following steps:
1) determining, by a text analysis method, the emotion semantic words that best represent the user's emotional needs as primitive words;
2) obtaining analysis words for visual mapping and action mapping from the primitive words by machine learning and text analysis methods;
3) collecting corresponding images for the analysis words from the Internet, and extracting the selected material to generate a mood board; static pictures serve as the visual-mapping part of the mood board for extracting visual design elements, while moving images serve as the action-mapping part for extracting interaction animation design elements;
4) obtaining texture, color and interaction-motion-effect design elements from the images by a cluster analysis method;
5) designing the colors, icons and composition of the interface according to the extracted color-scheme and pattern-element design elements;
the specific process of the step 1) is as follows:
a. acquiring a large number of online comments of consumers on the APP through a web crawler system, and sorting the online comments to remove repeated comments and meaningless comments;
b. using the LDA (latent Dirichlet allocation) topic-extraction text analysis technique, selecting the LDA module in the python package scikit-learn to extract topics from the online comment data, taking adjectives as the product emotion words of concern to users, and taking the three most frequently occurring adjectives as the primitive words.
2. The method for designing a mood board interface for user emotion according to claim 1, wherein: the specific process of obtaining the analysis words of the visual mapping and the action mapping in the step 2) is as follows:
a. inputting the corpus into a Word2vec model and converting the words into vectors through training, wherein the higher the association between semantic words, the greater the cosine similarity of their vectors (i.e., the larger the cosine value); obtaining the three vocabulary items most strongly associated with each primitive word by computing these cosine values; the input being a context of C words $x_1, x_2, \ldots, x_C$, each $x_c$ represented as a one-hot vector; the hidden-layer vector h taking the word vectors of all C input words from the input weight matrix W and averaging them, as follows:
$h = \frac{1}{C} W^{\mathsf{T}} (x_1 + x_2 + \cdots + x_C)$
the process from the hidden layer to the output layer comprises the definition of an objective function and back-propagation training, and every row of the output weight matrix $W'$ still needs to be updated at each step:
${v'_{w_j}}^{(\mathrm{new})} = {v'_{w_j}}^{(\mathrm{old})} - \eta \, e_j \, h \qquad (j = 1, 2, \ldots, V)$
gradient of hidden layer neurons:
$EH = \frac{\partial E}{\partial h} = \sum_{j=1}^{V} e_j \, v'_{w_j}$
since h spreads the error evenly back over each input word, each training step updates the corresponding C rows of W:
${v_{w_{I,c}}}^{(\mathrm{new})} = {v_{w_{I,c}}}^{(\mathrm{old})} - \frac{1}{C} \, \eta \, EH^{\mathsf{T}} \qquad (c = 1, 2, \ldots, C)$
after the word vectors are obtained, finding the 3 word vectors with the largest cosine similarity to each primitive word vector, the corresponding words being those with the highest semantic relevance, with the cosine value calculated as follows:
$\cos\theta = \frac{\sum_{i=1}^{n} a_i b_i}{\sqrt{\sum_{i=1}^{n} a_i^2} \, \sqrt{\sum_{i=1}^{n} b_i^2}}$
b. selecting several nouns and action nouns as analysis words through the LDA text analysis module in the python package scikit-learn; the noun analysis words serving as the visual mapping of the primitive words, and the action-noun analysis words serving as the action mapping of the primitive words.
3. The method for designing a mood board interface for user emotion according to claim 1, wherein: the specific process of the step 4) is as follows:
a. pattern extraction: analyzing the more frequently occurring visual image material in the mood board algorithmically, dividing the source texture image into separate, unconnected image sub-block regions by a color K-means algorithm, and constructing the connection relations between these regions so as to capture texture elements with similar appearance characteristics and generate the texture material;
the K-means algorithm divides N samples into K clusters C ═ C1,C2,C3…CKThe samples in the clusters have higher similarity, and the samples between the clusters have lower similarity; if V is { V ═ V1,V2,…Vk},
Class centers for K classes, where VKIs the number CKSample mean values in clusters, each cluster can be represented by a corresponding class prototype, and the K-means algorithm divides data by minimizing a function of the sum of squared errors in a class and a criterion, wherein the function is represented as follows:
$J(C) = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - V_k \rVert^2$
where $C_k$ contains all sample points whose distance to the k-th class center $V_k$ is smallest, described as follows:
$C_k = \{\, x_i : \lVert x_i - V_k \rVert \le \lVert x_i - V_j \rVert, \ \forall j \in \{1, 2, \ldots, K\} \,\}$
extracting design elements from the texture material by simplification, deformation and stretching, to be used as the icon part of the interface design;
b. color extraction: extracting the RGB (red, green, blue) values of every image in the visual mood board and performing K-means cluster analysis on each picture individually, extracting several characteristic colors that represent the overall color impression of the mood board's visual pictures; establishing a color-matching network among the characteristic colors by taking each characteristic color as a node and its frequency of occurrence as the node weight, and creating edges between nodes based on their co-occurrence frequency within the same picture, thereby forming a color-matching scheme;
c. animation extraction: based on the video clips appearing in the action mapping, expressing behavior and animation jointly through metaphor to build the corresponding emotional experience.
CN201811325806.2A 2018-11-08 2018-11-08 Emotional board interface design method for user emotion Active CN109471930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811325806.2A CN109471930B (en) 2018-11-08 2018-11-08 Emotional board interface design method for user emotion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811325806.2A CN109471930B (en) 2018-11-08 2018-11-08 Emotional board interface design method for user emotion

Publications (2)

Publication Number Publication Date
CN109471930A (en) 2019-03-15
CN109471930B (en) 2021-09-14

Family

ID=65672122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811325806.2A Active CN109471930B (en) 2018-11-08 2018-11-08 Emotional board interface design method for user emotion

Country Status (1)

Country Link
CN (1) CN109471930B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368351A (en) * 2020-03-30 2020-07-03 西安理工大学 Emotional design method for product CMF under perceptual information' grafting-mapping
CN112597695B (en) * 2020-12-03 2022-05-03 浙江大学 Computer aided design method and system based on perceptual feature clustering
CN112883684B (en) * 2021-01-15 2023-07-07 王艺茹 Information processing method of multipurpose visual transmission design

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130075124A (en) * 2011-12-27 2013-07-05 숭실대학교산학협력단 Apparatus and method for analyzing emotion by extracting emotional word of text, and recording medium storing program for executing method of the same in computer
CN104111976A (en) * 2014-06-24 2014-10-22 海南凯迪网络资讯有限公司 Method and device for network speech emotion attitude localization
CN107464188A (en) * 2017-06-23 2017-12-12 浙江大学 A kind of internet social networking application system based on Internet of Things mood sensing technology
CN107590197A (en) * 2017-08-15 2018-01-16 维沃移动通信有限公司 A kind of Music front cover generation method and mobile terminal
CN107578092A (en) * 2017-09-01 2018-01-12 广州智慧城市发展研究院 A kind of emotion compounding analysis method and system based on mood and opinion mining

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on the application of mood boards in interaction design; Zheng Qiurong, Li Shiguo; Packaging Engineering (《包装工程》); 2009-11-30; Vol. 30, No. 11; full text *
Application of mood boards in software interface design; Chen Liang; Silicon Valley (《硅谷》); 2013-01-31; Vol. 5, No. 122; full text *
Research on the emotional design of mobile APP interfaces; Zhao Yawei; China Master's Theses Full-text Database, Information Science and Technology; 2015-01-15; No. 1; full text *

Also Published As

Publication number Publication date
CN109471930A (en) 2019-03-15

Similar Documents

Publication Publication Date Title
Kim et al. Building emotional machines: Recognizing image emotions through deep neural networks
Zhao et al. Open vocabulary scene parsing
CN107766933B (en) Visualization method for explaining convolutional neural network
CN108595636A (en) The image search method of cartographical sketching based on depth cross-module state correlation study
Li et al. Spectral clustering in heterogeneous information networks
CN112417306B (en) Method for optimizing performance of recommendation algorithm based on knowledge graph
CN109471930B (en) Emotional board interface design method for user emotion
CN107862561A (en) A kind of method and apparatus that user-interest library is established based on picture attribute extraction
CN111324765A (en) Fine-grained sketch image retrieval method based on depth cascade cross-modal correlation
Fernandez-Beltran et al. Incremental probabilistic latent semantic analysis for video retrieval
CN110599592A (en) Three-dimensional indoor scene reconstruction method based on text
CN110147552B (en) Education resource quality evaluation mining method and system based on natural language processing
Gao et al. Fashion clothes matching scheme based on Siamese Network and AutoEncoder
Rematas et al. Dataset fingerprints: Exploring image collections through data mining
TW201820172A (en) System, method and non-transitory computer readable storage medium for conversation analysis
Li et al. Multi-modal visual adversarial Bayesian personalized ranking model for recommendation
Jiang et al. Visual font pairing
Zhan et al. DeepShoe: An improved Multi-Task View-invariant CNN for street-to-shop shoe retrieval
Tautkute et al. What looks good with my sofa: Multimodal search engine for interior design
CN108446605B (en) Double interbehavior recognition methods under complex background
Goyal et al. A Review on Different Content Based Image Retrieval Techniques Using High Level Semantic Feature
Chen et al. Exploiting aesthetic features in visual contents for movie recommendation
CN113191381B (en) Image zero-order classification model based on cross knowledge and classification method thereof
Vaca-Castano et al. Holistic object detection and image understanding
Lin et al. A virtual reality based recommender system for interior design prototype drawing retrieval

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
TR01: Transfer of patent right

Effective date of registration: 20220719
Address after: 310015 No. 51, Huzhou Street, Hangzhou, Zhejiang
Patentee after: Zhejiang University City College
Address before: 310015 No. 50 Huzhou Street, Hangzhou City, Zhejiang Province
Patentee before: Zhejiang University City College