CN112528074B - Movie recommendation method combining knowledge graph with self-encoder - Google Patents
Movie recommendation method combining knowledge graph with self-encoder
- Publication number: CN112528074B (application CN202011470189.2A)
- Authority: CN (China)
- Legal status: Active (an assumption, not a legal conclusion)
Classifications
- G06F16/735 — Information retrieval of video data; querying; filtering based on additional data, e.g. user or group profiles
- G06F16/783 — Information retrieval of video data; retrieval characterised by metadata automatically derived from the content
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a movie recommendation method combining a knowledge graph with a self-encoder, comprising the following steps: 1, obtaining the language categories of each movie from the knowledge graph DBpedia as additional information; 2, vectorizing the obtained language-category information with a multi-hot method and using the result as an initial feature extension; 3, reducing the dimensionality of the initial feature extension with a self-encoder and taking the output of its encoding layer as the extracted low-dimensional feature representation; and 4, fusing the low-dimensional feature representation into the original feature space of the movies and feeding the new features into a semi-self-encoder model as additional information to realize more accurate movie recommendation. The invention uses the knowledge graph to extend the movie features and processes the extended features with a self-encoder to obtain a high-level, low-dimensional feature representation that is fed into the recommendation model for prediction, thereby producing more accurate recommendations for users.
Description
Technical Field
The invention relates to the field of personalized recommendation, and in particular to a movie recommendation method combining a knowledge graph with a self-encoder.
Background
Today, as data and information explode, people face a vast number of service and product choices, and it has become harder for users to efficiently find the information and products that are useful to them. Recommender systems help alleviate this information overload. Among the various recommendation methods, collaborative filtering algorithms have achieved notable success over the past decades, but they suffer from the sparsity of the score matrix and weak generalization. To address these problems, matrix factorization techniques were proposed; their core operation is to learn latent features of users or items from the score matrix so as to improve recommendation accuracy, and they have succeeded in practical applications.
However, matrix factorization methods are limited in feature-representation learning ability, because most of them learn user and item representations by directly factorizing the score matrix without considering additional information. Motivated by the strong performance of deep learning in representation learning, more and more deep-learning-based recommender systems have been proposed; they extract user and item features more effectively than conventional recommenders. Among all deep-learning-based recommendation methods, those based on self-encoders have drawn wide attention for requiring no labels, converging quickly, and working well. Although these methods can exploit additional information about users or items, such auxiliary information is hard to obtain from other information sources, is mostly sparse, and is difficult to integrate directly and effectively into a recommender system to improve recommendation accuracy.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a movie recommendation method combining a knowledge graph with a self-encoder: effective auxiliary information is supplemented from the knowledge graph, the self-encoder reduces the dimensionality of the resulting sparse information, and the extracted low-dimensional feature representation is fused into the recommender system as the final auxiliary information, thereby improving the accuracy of personalized movie recommendation.
The purpose of the invention is realized in the following way: a movie recommendation method combining a knowledge graph with a self-encoder, characterized by comprising the following steps:
step 1, obtaining the language categories of all movies from the knowledge graph DBpedia as additional information;
step 2, vectorizing the obtained language-category information with a multi-hot method and using the result as an initial feature extension;
step 3, reducing the dimensionality of the initial feature extension with a self-encoder and taking the output of its encoding layer as the extracted low-dimensional feature representation;
and step 4, fusing the low-dimensional feature representation into the original feature space of the movies and feeding the new features into a semi-self-encoder model as additional information to realize more accurate recommendation.
As a further limitation of the present invention, step 1 specifically comprises:
step 1.1, retrieving information on the movies in the dataset through the DBpedia Lookup API, with QueryClass set to film and QueryString set to the movie name;
step 1.2, screening the returned web results with a regular expression over the "<Label>" fields (matching runs of [a-zA-Z] characters) to extract the required movie-language-category information.
As a further limitation of the present invention, step 2 specifically comprises: a movie can correspond to multiple language categories; the language-category information obtained for each movie is vectorized with the multi-hot method to obtain the language-category vector l_i ∈ R^K of movie i, where the dimension K is the total number of movie language categories extracted in step 1, and the result serves as the initial feature extension.
As a further limitation of the present invention, step 3 specifically comprises:
step 3.1, training the self-encoder model via equations (1) and (2) and retaining the model parameters:

ξ_l = f(L_I·Q_l + p_l)   (1)

L′_I = g(ξ_l·Q′_l + p′_l)   (2)

where L_I ∈ R^(m×K) is the matrix of language-category vectors of the m movies, Q_l ∈ R^(K×h) and Q′_l ∈ R^(h×K) are the weight matrices of the self-encoder, p_l ∈ R^h and p′_l ∈ R^K are the bias vectors, and h is the dimension of the self-encoder's hidden layer; to avoid model overfitting, the model parameters are regularized with the ℓ2 norm, and the training objective of the self-encoder is

min_{Q_l, Q′_l, p_l, p′_l} ‖(L_I − L′_I) ⊙ M‖_F² + λ(‖Q_l‖_F² + ‖Q′_l‖_F² + ‖p_l‖_2² + ‖p′_l‖_2²)   (3)

where the mask M indicates that only samples with movie-language features are considered during training;
step 3.2, after model training is complete, the low-dimensional feature representation E_I of L_I is obtained from equation (4):

E_I = f(L_I·Q_l + p_l)   (4).
As a further limitation of the present invention, step 4 specifically comprises:
step 4.1, after the feature extension is obtained in step 3.2, the feature-extension matrix E_I ∈ R^(m×h) is concatenated with the partially observed score matrix r_I ∈ R^(m×n) of all movies and the additional information A_I ∈ R^(m×y) in the dataset, where y is the dimension of the additional information; the result is denoted cat(E_I, r_I, A_I) ∈ R^(m×(h+n+y));
step 4.2, the cat(E_I, r_I, A_I) obtained in step 4.1 is fed into the semi-self-encoder model as input data, realized by equation (6):

r′_I = f(g(cat(E_I, r_I, A_I)·Q + b)·Q′ + b_1)   (6)

where Q ∈ R^((h+n+y)×H) and Q′ ∈ R^(H×n) are the weight matrices, b ∈ R^H and b_1 ∈ R^n are the bias vectors, and the hidden-layer dimension of the semi-self-encoder is set to H; to avoid model overfitting, the parameters are regularized with the ℓ2 norm, and the final objective is

min_{Q, Q′, b, b_1} ‖(r_I − r′_I) ⊙ M‖_F² + λ(‖Q‖_F² + ‖Q′‖_F² + ‖b‖_2² + ‖b_1‖_2²)   (7)

where the mask M indicates that only observed scores are considered;
step 4.3, the final predicted score matrix is obtained from equation (8):

R′ = f(g(cat(E_I, r_I, A_I)·Q + b)·Q′ + b_1)   (8).
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
1) The invention learns item-based feature representations with a self-encoder model, which converges quickly, requires no labels, and works well, making the method robust and practical;
2) The invention uses the knowledge graph to extend the movie features, and the self-encoder extracts a low-dimensional representation of the extended additional features, so they can be applied to the recommendation model more conveniently and flexibly;
3) The semi-self-encoder model used in the invention can effectively exploit the additional information to obtain better and more abstract item features, and the knowledge-graph-extended information, once processed by the self-encoder, improves the effectiveness of the feature representation and thus the accuracy of the model's recommendations.
Drawings
Figure 1 is an overall framework diagram of the present invention.
FIG. 2 is a schematic diagram of a self-encoder model in the present invention.
FIG. 3 is a schematic diagram of a semi-self-encoder model according to the present invention.
Detailed Description
As shown in fig. 1, the movie recommendation method combining a knowledge graph with a self-encoder comprises the following steps:
step 1, obtaining the language categories of all movies from the knowledge graph DBpedia as additional information;
step 2, vectorizing the obtained language-category information with a multi-hot method and using the result as an initial feature extension;
step 3, reducing the dimensionality of the initial feature extension with a self-encoder and taking the output of its encoding layer as the extracted low-dimensional feature representation;
and step 4, fusing the low-dimensional feature representation into the original feature space of the movies and feeding the new features into a semi-self-encoder model as additional information to realize more accurate recommendation.
The method comprises the following steps:
Step 1, obtaining the language categories of all movies from the knowledge graph DBpedia as additional information:
step 1.1, retrieving information on the movies in the dataset through the DBpedia Lookup API, http://lookup.dbpedia.org/api/search/KeywordSearch?QueryClass=film&QueryString=<film name>, where QueryClass is set to film and QueryString is set to the movie name;
step 1.2, screening the returned web results with a regular expression and extracting the required movie-language-category information.
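The retrieval and extraction of steps 1.1 and 1.2 can be sketched as follows. The sample XML and the exact regular expression are illustrative assumptions (the patent shows only a fragment of the pattern), and no live request is performed here:

```python
import re

# Legacy DBpedia Lookup endpoint named in the patent; shown for reference only.
SEARCH_URL = "http://lookup.dbpedia.org/api/search/KeywordSearch"

def extract_labels(xml_text):
    """Return the text of every <Label>...</Label> field in a Lookup response."""
    return re.findall(r"<Label>([A-Za-z ]+)</Label>", xml_text)

# Illustrative stand-in for a real KeywordSearch XML response.
sample_xml = (
    "<ArrayOfResult>"
    "<Result><Label>English language films</Label></Result>"
    "<Result><Label>French language films</Label></Result>"
    "</ArrayOfResult>"
)

labels = extract_labels(sample_xml)
print(labels)  # ['English language films', 'French language films']
```

In a real run the response for each movie would come from an HTTP GET on SEARCH_URL with QueryClass=film and QueryString set to the movie name.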
Step 2, vectorizing the obtained language-category information with the multi-hot method and using the result as the initial feature extension: a movie can correspond to multiple language categories; the language-category information obtained for each movie is vectorized with the multi-hot method to obtain the language-category vector l_i ∈ R^K of movie i, where the dimension K is the total number of movie language categories extracted in step 1, and the result serves as the initial feature extension.
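A minimal sketch of the multi-hot vectorization of step 2; the movies and language categories below are made-up examples, and K is derived from them as in the patent:

```python
# Per-movie language categories (illustrative data, not from the patent).
movies = {
    "movie_1": ["English", "French"],
    "movie_2": ["English"],
    "movie_3": ["Japanese"],
}

# Vocabulary of all extracted language categories; K is its size.
categories = sorted({c for langs in movies.values() for c in langs})
index = {c: k for k, c in enumerate(categories)}
K = len(categories)

def multi_hot(langs):
    """Map a movie's language-category list to its K-dimensional 0/1 vector l_i."""
    vec = [0] * K
    for c in langs:
        vec[index[c]] = 1
    return vec

# m x K initial feature extension L_I.
L = [multi_hot(langs) for langs in movies.values()]
print(L)  # [[1, 1, 0], [1, 0, 0], [0, 0, 1]]
```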
Step 3, a basic self-encoder is designed following the idea of fig. 2; it reduces the dimensionality of the obtained initial feature extension, and the output of its encoding layer is taken as the extracted low-dimensional feature representation:
step 3.1, training the self-encoder model via equations (1) and (2) and retaining the model parameters:

ξ_l = f(L_I·Q_l + p_l)   (1)

L′_I = g(ξ_l·Q′_l + p′_l)   (2)

where L_I ∈ R^(m×K) is the matrix of language-category vectors of the m movies, used as the input data of the self-encoder; Q_l ∈ R^(K×h) and Q′_l ∈ R^(h×K) are the weight matrices of the self-encoder; p_l ∈ R^h and p′_l ∈ R^K are the bias vectors; and h is the dimension of the self-encoder's hidden layer. To avoid model overfitting, the model parameters are regularized with the ℓ2 norm, and the training objective of the self-encoder is

min_{Q_l, Q′_l, p_l, p′_l} ‖(L_I − L′_I) ⊙ M‖_F² + λ(‖Q_l‖_F² + ‖Q′_l‖_F² + ‖p_l‖_2² + ‖p′_l‖_2²)   (3)

where the mask M indicates that only samples with movie-language features are considered during training.
step 3.2, after model training is complete, the low-dimensional feature representation E_I of L_I is obtained from equation (4):

E_I = f(L_I·Q_l + p_l)   (4)
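The forward pass of equations (1), (2) and (4) can be sketched with NumPy. The activations f and g are taken to be sigmoids and the weights are random stand-ins for trained parameters — both are assumptions, since the patent does not fix them:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# m movies, K language categories, hidden dimension h (illustrative sizes).
m, K, h = 5, 12, 4
L_I = (rng.random((m, K)) < 0.3).astype(float)  # multi-hot language matrix

Q_l  = rng.normal(scale=0.1, size=(K, h)); p_l  = np.zeros(h)   # encoder
Q_lp = rng.normal(scale=0.1, size=(h, K)); p_lp = np.zeros(K)   # decoder (Q'_l, p'_l)

E_I    = sigmoid(L_I @ Q_l + p_l)     # equations (1)/(4): low-dimensional features
L_I_re = sigmoid(E_I @ Q_lp + p_lp)   # equation (2): reconstruction L'_I

print(E_I.shape, L_I_re.shape)  # (5, 4) (5, 12)
```

Training would minimize the masked reconstruction error of L'_I against L_I plus the ℓ2 penalty of objective (3); only the forward computation is shown here.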
step 4, fusing the obtained low-dimensional feature representation into the original feature space of the film, and inputting new features into the semi-self-encoder model as additional information to realize more accurate recommendation:
step 4.1, after the feature extension is obtained in step 3.2, the feature-extension matrix E_I ∈ R^(m×h) is concatenated with the partially observed score matrix r_I ∈ R^(m×n) of all movies and the additional information A_I ∈ R^(m×y) in the dataset (y is the dimension of the additional information); the result is denoted cat(E_I, r_I, A_I) ∈ R^(m×(h+n+y)).
step 4.2, the cat(E_I, r_I, A_I) obtained in step 4.1 is fed into the semi-self-encoder as input data. As shown in fig. 3, the semi-self-encoder differs from a conventional self-encoder in that the dimension of its input layer is greater than that of its output layer, so additional feature information can be fused at the input end. Concretely, it is realized by equation (6):

r′_I = f(g(cat(E_I, r_I, A_I)·Q + b)·Q′ + b_1)   (6)

where Q ∈ R^((h+n+y)×H) and Q′ ∈ R^(H×n) are the weight matrices, b ∈ R^H and b_1 ∈ R^n are the bias vectors, and the hidden-layer dimension of the semi-self-encoder is set to H. To avoid model overfitting, the parameters are regularized with the ℓ2 norm, and the final objective is

min_{Q, Q′, b, b_1} ‖(r_I − r′_I) ⊙ M‖_F² + λ(‖Q‖_F² + ‖Q′‖_F² + ‖b‖_2² + ‖b_1‖_2²)   (7)

where the mask M indicates that only observed scores are considered.
step 4.3, the final predicted score matrix is obtained from equation (8):

R′ = f(g(cat(E_I, r_I, A_I)·Q + b)·Q′ + b_1)   (8).
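The concatenation and forward pass of equations (6) and (8) can likewise be sketched; all dimensions and weights below are illustrative, and f and g are again assumed to be sigmoids:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: m movies, h language features, n users, y extra features,
# hidden dimension H of the semi-self-encoder.
m, h, n, y, H = 5, 4, 10, 3, 6
E_I = rng.random((m, h))   # low-dimensional language features from step 3
r_I = rng.random((m, n))   # partially observed score matrix
A_I = rng.random((m, y))   # other additional information in the dataset

# cat(E_I, r_I, A_I): input wider than the n-dimensional output, per fig. 3.
X = np.concatenate([E_I, r_I, A_I], axis=1)

Q  = rng.normal(scale=0.1, size=(h + n + y, H)); b  = np.zeros(H)
Qp = rng.normal(scale=0.1, size=(H, n));         b1 = np.zeros(n)

# Equations (6)/(8): only the score part is reconstructed/predicted.
R_pred = sigmoid(sigmoid(X @ Q + b) @ Qp + b1)
print(R_pred.shape)  # (5, 10)
```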
the invention can be further illustrated by the following experiments:
to test the effectiveness of the present invention, the prediction results were implemented on MovieLens 100K and MovieLens 1M datasets, respectively, where the MovieLens 100K dataset included 100000 scores for 1682 movies by 943 users and the MovieLens 1M dataset included 1000209 scores for 3706 movies by 6040 users, the Root Mean Square Error (RMSE) for the evaluation index was calculated as follows, the smaller this value, the better the recommendation system.
Wherein r is u,i Andrepresenting the original and reconstructed user u scores for movie i, respectively, |testset| represents the entire test set.
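The RMSE metric can be computed directly; the score pairs below are made-up values for illustration:

```python
import math

# (r_ui, r'_ui) pairs over a toy test set (illustrative values).
test_set = [(4.0, 3.5), (2.0, 2.5), (5.0, 4.0)]

def rmse(pairs):
    """Root of the mean squared difference between original and predicted scores."""
    return math.sqrt(sum((r - rp) ** 2 for r, rp in pairs) / len(pairs))

print(round(rmse(test_set), 4))  # 0.7071
```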
For comparison, non-negative matrix factorization (NMF), probabilistic matrix factorization (PMF), the improved singular value decomposition SVD++, and the multi-layer-perceptron-based recommender NCF were chosen as baselines. The predicted results are shown in Table 1; as Table 1 shows, the RMSE of the invention on both datasets is better than that of the other methods.
TABLE 1 experimental results of RMSE indicators
According to the invention, the knowledge graph is used to extend the features of items; the self-encoder then reduces the dimensionality of the resulting sparse feature information and extracts an efficient feature representation, alleviating the lack and sparsity of auxiliary information faced by recommendation and producing more accurate recommendations for users.
The invention is not limited to the above embodiment; based on the disclosed technical solution, those skilled in the art may substitute or modify some of its technical features without creative effort, and all such substitutions and modifications fall within the protection scope of the invention.
Claims (3)
1. A movie recommendation method combining a knowledge graph with a self-encoder, characterized by comprising the following steps:
step 1, obtaining the language categories of all movies from the knowledge graph DBpedia as additional information;
step 2, vectorizing the obtained language-category information with a multi-hot method and using the result as an initial feature extension;
step 3, reducing the dimensionality of the initial feature extension with a self-encoder and taking the output of its encoding layer as the extracted low-dimensional feature representation;
step 3.1, training the self-encoder model via equations (1) and (2) and retaining the model parameters:

ξ_l = f(L_I·Q_l + p_l)   (1)

L′_I = g(ξ_l·Q′_l + p′_l)   (2)

where L_I ∈ R^(m×K) is the matrix of language-category vectors of the m movies, Q_l ∈ R^(K×h) and Q′_l ∈ R^(h×K) are the weight matrices of the self-encoder, p_l ∈ R^h and p′_l ∈ R^K are the bias vectors, and h is the dimension of the self-encoder's hidden layer; to avoid model overfitting, the model parameters are regularized with the ℓ2 norm, and the training objective of the self-encoder is

min_{Q_l, Q′_l, p_l, p′_l} ‖(L_I − L′_I) ⊙ M‖_F² + λ(‖Q_l‖_F² + ‖Q′_l‖_F² + ‖p_l‖_2² + ‖p′_l‖_2²)   (3)

where the mask M indicates that only samples with movie-language features are considered during training;
step 3.2, after model training is complete, the low-dimensional feature representation E_I of L_I is obtained from equation (4):

E_I = f(L_I·Q_l + p_l)   (4);

step 4, fusing the obtained low-dimensional feature representation into the original feature space of the movies and feeding the new features into a semi-self-encoder model as additional information to realize more accurate recommendation;
step 4.1, after the feature extension is obtained in step 3.2, the feature-extension matrix E_I ∈ R^(m×h) is concatenated with the partially observed score matrix r_I ∈ R^(m×n) of all movies and the additional information A_I ∈ R^(m×y) in the dataset, where y is the dimension of the additional information; the result is denoted cat(E_I, r_I, A_I) ∈ R^(m×(h+n+y));
step 4.2, the cat(E_I, r_I, A_I) obtained in step 4.1 is fed into the semi-self-encoder model as input data, realized by equation (6):

r′_I = f(g(cat(E_I, r_I, A_I)·Q + b)·Q′ + b_1)   (6)

where Q ∈ R^((h+n+y)×H) and Q′ ∈ R^(H×n) are the weight matrices, b ∈ R^H and b_1 ∈ R^n are the bias vectors, and the hidden-layer dimension of the semi-self-encoder is set to H; to avoid model overfitting, the parameters are regularized with the ℓ2 norm, and the final objective is

min_{Q, Q′, b, b_1} ‖(r_I − r′_I) ⊙ M‖_F² + λ(‖Q‖_F² + ‖Q′‖_F² + ‖b‖_2² + ‖b_1‖_2²)   (7)

where the mask M indicates that only observed scores are considered;
step 4.3, the final predicted score matrix is obtained from equation (8):

R′ = f(g(cat(E_I, r_I, A_I)·Q + b)·Q′ + b_1)   (8).
2. The movie recommendation method combining a knowledge graph with a self-encoder according to claim 1, wherein step 1 specifically comprises:
step 1.1, retrieving information on the movies in the dataset through the DBpedia Lookup API, with QueryClass set to film and QueryString set to the movie name;
step 1.2, screening the returned web results with a regular expression over the "<Label>" fields (matching runs of [a-zA-Z] characters) to extract the required movie-language-category information.
3. The movie recommendation method combining a knowledge graph with a self-encoder according to claim 1, wherein step 2 specifically comprises: a movie can correspond to multiple language categories; the language-category information obtained for each movie is vectorized with the multi-hot method to obtain the language-category vector l_i ∈ R^K of movie i, where the dimension K is the total number of movie language categories extracted in step 1, and the result serves as the initial feature extension.
Priority Applications (1)
- CN202011470189.2A — priority/filing date 2020-12-14 — Movie recommendation method combining knowledge graph with self-encoder
Publications (2)
- CN112528074A — published 2021-03-19
- CN112528074B — granted 2023-06-16