CN117474498A - Automatic reminding system and method for patent annual fee

Automatic reminding system and method for patent annual fee

Info

Publication number
CN117474498A
Authority
CN
China
Prior art keywords
encoder
feature
evaluated
text
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311565644.0A
Other languages
Chinese (zh)
Inventor
尹考丽
吴建
邱芹芹
方翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Weicheng Intellectual Property Agency Co ltd
Original Assignee
Hefei Weicheng Intellectual Property Agency Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Weicheng Intellectual Property Agency Co ltd filed Critical Hefei Weicheng Intellectual Property Agency Co ltd
Priority to CN202311565644.0A
Publication of CN117474498A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/284 Lexical analysis, e.g. tokenisation or collocates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G06N5/022 Knowledge engineering; Knowledge acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/18 Legal services
    • G06Q50/184 Intellectual property management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Human Resources & Organizations (AREA)
  • Software Systems (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Molecular Biology (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)

Abstract

A patent annuity maintenance reminding system and method based on a deep-learning neural network model mine the association between the semantic understanding features of the text content of a patent to be evaluated and the semantic understanding features of the enterprise's products on sale, so as to detect and assess the importance of the patent to be evaluated and generate a patent annuity maintenance reminder for patents of higher importance. In this way, the time and effort spent on manual review in the patent management process can be greatly reduced, and maintenance costs and risks are lowered.

Description

Automatic reminding system and method for patent annual fee
Technical Field
The present application relates to the field of intelligent reminding, and more particularly, to an automatic patent annuity reminding system and method.
Background
With economic development and increasing market competition, enterprises need to gain a commercial competitive advantage through intellectual property protection. As an important form of intellectual property, patents play an irreplaceable role in the innovative development of an enterprise. The patent annuity is an important fee for maintaining patent rights: an enterprise not only pays considerable fees during patent prosecution, but must also pay the annuity periodically afterwards. For patents that no longer have commercial value, this maintenance cost is undoubtedly a burden.
An optimized automatic patent annuity reminding scheme is therefore desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. Embodiments of the application provide an automatic patent annuity reminding system and method that use a deep-learning neural network model to mine the association between the semantic understanding features of the text content of a patent to be evaluated and the semantic understanding features of the enterprise's products on sale, so as to detect and assess the importance of the patent to be evaluated and generate a patent annuity maintenance reminder for patents of higher importance. In this way, the time and effort spent on manual review in the patent management process can be greatly reduced, and maintenance costs and risks are lowered.
According to one aspect of the present application, there is provided an automatic patent annuity reminding method, including:
acquiring the text content of a patent to be evaluated;
acquiring an image and a text description of a product sold by the enterprise;
passing the image and the text description of the enterprise's product through a cross-modal joint encoder comprising a text encoder and an image encoder to obtain an enterprise product multi-modal feature matrix;
performing word segmentation on the text content of the patent to be evaluated and then passing the result through a context encoder comprising an embedding layer to obtain a semantic understanding feature vector of the patent to be evaluated;
performing association encoding on the enterprise product multi-modal feature matrix and the semantic understanding feature vector of the patent to be evaluated to obtain a classification feature vector;
passing the classification feature vector through a classifier to obtain a classification result, wherein the classification result represents an importance level label of the patent to be evaluated; and
generating a patent annuity maintenance reminder based on the classification result.
According to another aspect of the present application, there is provided an automatic patent annuity reminding system, comprising:
a patent text content acquisition module for acquiring the text content of a patent to be evaluated;
an enterprise sales product information acquisition module for acquiring an image and a text description of a product sold by the enterprise;
a cross-modal joint encoding module for passing the image and the text description of the enterprise's product through a cross-modal joint encoder comprising a text encoder and an image encoder to obtain an enterprise product multi-modal feature matrix;
a context encoding module for performing word segmentation on the text content of the patent to be evaluated and then passing the result through a context encoder comprising an embedding layer to obtain a semantic understanding feature vector of the patent to be evaluated;
an association encoding module for performing association encoding on the enterprise product multi-modal feature matrix and the semantic understanding feature vector of the patent to be evaluated to obtain a classification feature vector;
a classification result generation module for passing the classification feature vector through a classifier to obtain a classification result, wherein the classification result represents an importance level label of the patent to be evaluated; and
a reminding module for generating a patent annuity maintenance reminder based on the classification result.
According to still another aspect of the present application, there is provided an electronic device, including: a processor; and a memory having stored therein computer program instructions that, when executed by the processor, cause the processor to perform the automatic patent annuity reminding method described above.
According to yet another aspect of the present application, there is provided a computer-readable medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the automatic patent annuity reminding method described above.
Compared with the prior art, the automatic patent annuity reminding system and method provided by the application use a deep-learning neural network model to mine the association between the semantic understanding features of the text content of the patent to be evaluated and the semantic understanding features of the enterprise's products on sale, so as to detect and assess the importance of the patent to be evaluated and generate a patent annuity maintenance reminder for patents of higher importance. In this way, the time and effort spent on manual review in the patent management process can be greatly reduced, and maintenance costs and risks are lowered.
Drawings
The foregoing and other objects, features, and advantages of the present application will become more apparent from the following detailed description of embodiments of the application taken in conjunction with the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application, are incorporated in and constitute a part of this specification, illustrate the application together with the description, and do not constitute a limitation of the application. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a schematic view of a scenario of an automatic patent annuity reminding method according to an embodiment of the application;
FIG. 2 is a flow chart of an automatic patent annuity reminding method according to an embodiment of the application;
FIG. 3 is a flow chart of a training phase in an automatic patent annuity reminding method according to an embodiment of the application;
FIG. 4 is a system architecture diagram of an automatic patent annuity reminding method according to an embodiment of the application;
FIG. 5 is a system architecture diagram of a training phase in an automatic patent annuity reminding method according to an embodiment of the application;
FIG. 6 is a flow chart of cross-modal joint coding in an automatic patent annuity reminding method according to an embodiment of the application;
FIG. 7 is a flow chart of context encoding in an automatic patent annuity reminding method according to an embodiment of the application;
FIG. 8 is a block diagram of an automatic patent annuity reminder system in accordance with an embodiment of the present application;
fig. 9 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, the patent annuity is an important fee for maintaining patent rights: an enterprise not only pays considerable fees during patent prosecution, but must also pay the annuity periodically afterwards. For patents that no longer have commercial value, however, this maintenance cost is undoubtedly a burden. An optimized automatic patent annuity reminding scheme is therefore desired.
Accordingly, it is recognized that in the actual patent annuity maintenance review process, the key is to semantically understand the content of the patent and the content of the products sold by the enterprise, and to determine whether their degree of relevance meets the requirement, so as to judge whether the patent still has commercial value and to generate a patent annuity maintenance reminder accordingly. The technical solution of the application therefore proposes an intelligent judgment method based on the patent content and on information about the enterprise's products: whether a patent needs to be maintained is judged from the patent content and from the images and text descriptions of the enterprise's products on sale, valuable patents that need to be maintained are screened out, and the enterprise is automatically reminded to pay the patent annuity. However, the text content of the patent to be evaluated contains semantic information expressed in patent terminology, while the image and text description of the enterprise's product contain image semantics and text semantics about the product, making it difficult to express their semantic association features in a conventional way. The difficulty in this process is therefore how to mine the associated feature distribution between the semantic understanding features of the text content of the patent to be evaluated and the semantic understanding features of the enterprise's products, so as to detect and assess the importance of the patent to be evaluated and generate a patent annuity maintenance reminder for patents of higher importance. In this way, the time and effort spent on manual review in the patent management process can be greatly reduced, and maintenance costs and risks are lowered.
In recent years, deep learning and neural networks have been widely applied in computer vision, natural language processing, text signal processing, and other fields. The development of deep learning and neural networks provides new solutions for mining the associated feature distribution between the semantic understanding features of the text content of the patent to be evaluated and the semantic understanding features of the enterprise's products.
Specifically, in the technical solution of the application, the text content of the patent to be evaluated is first acquired, together with an image and a text description of a product sold by the enterprise. Next, for semantic feature extraction of the enterprise's product, the image and the text description are processed by a cross-modal joint encoder comprising a text encoder and an image encoder to obtain an enterprise product multi-modal feature matrix. Specifically, the cross-modal joint encoder is based on a CLIP-style model and comprises a text encoder, an image encoder, and an association encoding optimizer. The text encoder performs semantic understanding of the text description of the product, extracting context-based semantic association features about the product from the description; the image encoder performs image semantic feature mining on the product image, extracting implicit semantic features about the product from the image; the association encoding optimizer then jointly encodes the text semantic understanding features and the image semantic distribution features of the product, optimizing the encoding of the image attributes on the basis of the text semantic features, to obtain the enterprise product multi-modal feature matrix. In this way, the resulting multi-modal feature matrix not only contains the semantic features of the product's text description but also reflects the implicit association features in the image, thereby improving the semantic expression of the enterprise's product.
The text content of the patent to be evaluated, in turn, reflects the essence of what the patent expresses. The text content consists of a plurality of words, and the words carry contextual semantic association features among them. Therefore, in order to semantically understand the text content of the patent to be evaluated, the technical solution of the application first performs word segmentation on the text content to avoid word-order confusion, and then encodes it with a context encoder comprising an embedding layer, so as to extract global contextual semantic association features from the text content and obtain the semantic understanding feature vector of the patent to be evaluated. In particular, since the text content of a patent contains many patent terms, the embedding layer may be configured using a knowledge graph of the semantic features of those terms, so that prior information about the term semantics is introduced into the semantic understanding process and the accuracy of the semantic understanding is improved.
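As a concrete illustration of this idea, the following is a minimal sketch of how an embedding layer could be initialized from pre-computed term vectors derived from a patent-term knowledge graph. It is not taken from the original disclosure; the vocabulary, the term_kg_vectors mapping, and all names are hypothetical.

```python
import torch
import torch.nn as nn

def build_kg_informed_embedding(vocab, term_kg_vectors, dim=256):
    """Build an embedding layer whose rows for known patent terms are initialized
    from knowledge-graph-derived vectors (hypothetical inputs)."""
    weight = torch.randn(len(vocab), dim) * 0.02            # default random initialization
    for term, idx in vocab.items():
        if term in term_kg_vectors:                         # prior knowledge from the term knowledge graph
            weight[idx] = torch.as_tensor(term_kg_vectors[term], dtype=torch.float)
    emb = nn.Embedding(len(vocab), dim)
    emb.weight.data.copy_(weight)
    return emb

# Usage sketch: vocab maps tokens to indices; the KG vectors cover patent terms only.
vocab = {"[PAD]": 0, "annuity": 1, "encoder": 2}
term_kg_vectors = {"annuity": [0.1] * 256, "encoder": [0.2] * 256}
embedding = build_kg_informed_embedding(vocab, term_kg_vectors)
```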
The enterprise product multi-modal feature matrix and the semantic understanding feature vector of the patent to be evaluated are then associatively encoded to obtain a classification feature vector, which represents the associated feature distribution between the high-dimensional implicit features of the enterprise's product and the text semantic features of the patent to be evaluated. The classification feature vector is then passed through a classifier to obtain a classification result representing the importance level label of the patent to be evaluated. That is, in the technical solution of the application, the labels of the classifier are importance level labels of the patent to be evaluated. After the classification result is obtained, the importance of the patent can be detected and assessed on that basis, and a patent annuity maintenance reminder is generated for patents whose importance exceeds a preset standard.
In particular, the enterprise product multi-modal feature matrix contains the text encoding features of the product's text description and the image encoding features of its image, while the semantic understanding feature vector of the patent to be evaluated contains the contextual text semantic association features of the patent's text content. When the two are associatively encoded into the classification feature vector, that vector therefore mixes high-dimensional image and text semantic features of different modalities and different scales, together with their association features, and it is desirable to promote the global feature association of the classification feature vector.
Therefore, the applicant multiplies the classification feature vector by its own transpose to obtain an association feature matrix M. Since the association feature matrix M expresses the position-wise associations of the classification feature vector at feature-value granularity, if the manifold expression of M in the high-dimensional feature space can be kept consistent between the full-space association dimension and the feature-value-wise association dimension, the global feature association expression of M can also be improved.
Based on this, the manifold convex decomposition consistency factor of the feature matrix is introduced as a loss function for the associated feature matrix M, specifically expressed as:
V_c = (m_{1,1}, m_{2,2}, ..., m_{L,L})
where V_r and V_c are, respectively, the mean vector of the row vectors and the diagonal vector of the association feature matrix M (with entries m_{i,j} ∈ M), ||·||_1 denotes the 1-norm of a vector, ||·||_F denotes the Frobenius norm of a matrix, L is the length of the classification feature vector, and w_1, w_2 and w_3 are weight hyperparameters.
That is, the row (or column) dimension of the association feature matrix M expresses the relevance of each feature value of the classification feature vector to the feature vector as a whole, while its diagonal expresses the self-relevance of each feature value. The manifold convex decomposition consistency factor keeps the distribution relevance of M consistent across the sub-dimensions represented by the row direction and the diagonal direction: by flattening the set of finite convex polytopes of the feature manifold represented by M and constraining its geometric convex decomposition with shape weights on the sub-dimension associations, it promotes consistency of the convex geometric representation of the feature manifold of M across the resolvable dimensions represented by the rows and the diagonal. In this way, the manifold representation of M in the high-dimensional feature space stays consistent between the full-space association dimension and the feature-value-wise association dimension, and when the model is trained by back-propagating gradients through the association feature matrix M, the global feature association expression of the classification feature vector is improved, which in turn improves the classification effect obtained from it.
Based on this, the application provides an automatic patent annuity reminding method, which comprises: acquiring the text content of a patent to be evaluated; acquiring an image and a text description of a product sold by the enterprise; passing the image and the text description of the enterprise's product through a cross-modal joint encoder comprising a text encoder and an image encoder to obtain an enterprise product multi-modal feature matrix; performing word segmentation on the text content of the patent to be evaluated and then passing the result through a context encoder comprising an embedding layer to obtain a semantic understanding feature vector of the patent to be evaluated; performing association encoding on the enterprise product multi-modal feature matrix and the semantic understanding feature vector to obtain a classification feature vector; passing the classification feature vector through a classifier to obtain a classification result, wherein the classification result represents an importance level label of the patent to be evaluated; and generating a patent annuity maintenance reminder based on the classification result.
Fig. 1 is a schematic view of a scenario of the automatic patent annuity reminding method according to an embodiment of the application. As shown in fig. 1, in this application scenario, an image of a product sold by the enterprise is acquired by a camera (e.g., C in fig. 1), and the text content of the patent to be evaluated and the text description of the product are obtained. This information is then input to a server (e.g., S in fig. 1) in which an automatic patent annuity reminding algorithm is deployed, and the server processes the input information with that algorithm to generate a classification result representing the importance level label of the patent to be evaluated.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary method
Fig. 2 is a flowchart of an automatic patent annuity reminding method according to an embodiment of the application. As shown in fig. 2, the patent annual fee automatic reminding method according to the embodiment of the application includes: s110, acquiring text content of a patent to be evaluated; s120, acquiring images and text descriptions of products sold by enterprises; s130, enabling the image and text description of the enterprise sales product to pass through a cross-mode joint encoder comprising a text encoder and an image encoder to obtain an enterprise product multi-mode feature matrix; s140, performing word segmentation on the text content of the patent to be evaluated, and then obtaining semantic understanding feature vectors of the patent to be evaluated through a context encoder comprising an embedded layer; s150, performing association coding on the enterprise product multi-mode feature matrix and the to-be-evaluated patent semantic understanding feature vector to obtain a classification feature vector; s160, the classification feature vectors pass through a classifier to obtain classification results, wherein the classification results are used for representing importance grade labels of the patents to be evaluated; and S170, generating a patent annual fee maintenance reminder based on the classification result.
Fig. 4 is a system architecture diagram of an automatic patent annuity reminding method according to an embodiment of the application. As shown in fig. 4, in the network architecture, first, text content of a patent to be evaluated is acquired; acquiring images and text descriptions of products sold by enterprises; then, the image and text description of the enterprise sales product pass through a cross-mode joint encoder comprising a text encoder and an image encoder to obtain an enterprise product multi-mode feature matrix; word segmentation is carried out on the text content of the patent to be evaluated, and then a context encoder comprising an embedded layer is used for obtaining semantic understanding feature vectors of the patent to be evaluated; then, carrying out association coding on the enterprise product multi-mode feature matrix and the to-be-evaluated patent semantic understanding feature vector to obtain a classification feature vector; furthermore, the classification feature vector is passed through a classifier to obtain a classification result, wherein the classification result is used for representing an importance grade label of the patent to be evaluated; and generating a patent annual fee maintenance reminder based on the classification result.
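To make the data flow of this architecture concrete, the following is a minimal PyTorch-style sketch of how the modules of fig. 4 could be wired together. It is an illustration under stated assumptions, not the original implementation: all class and parameter names are hypothetical, the outer-product joint encoding and the matrix-vector association encoding are assumed realizations, and the individual encoders are sketched in more detail in the steps below.

```python
import torch
import torch.nn as nn

class AnnuityReminderPipeline(nn.Module):
    """High-level wiring of the modules in fig. 4 (hypothetical sketch)."""
    def __init__(self, image_encoder, text_encoder, context_encoder, classifier):
        super().__init__()
        self.image_encoder = image_encoder      # product image  -> image feature vector
        self.text_encoder = text_encoder        # product text   -> sequence feature vector
        self.context_encoder = context_encoder  # patent text    -> semantic understanding vector
        self.classifier = classifier            # classification vector -> importance levels

    def forward(self, product_image, product_text_ids, patent_text_ids):
        img_vec = self.image_encoder(product_image)                      # (B, D)
        txt_vec = self.text_encoder(product_text_ids)                    # (B, D)
        # Joint encoding: an outer product is assumed for the multi-modal feature matrix.
        product_matrix = torch.einsum('bi,bj->bij', img_vec, txt_vec)    # (B, D, D)
        patent_vec = self.context_encoder(patent_text_ids)               # (B, D)
        # Association encoding: a matrix-vector product is assumed for the classification vector.
        fused = torch.einsum('bij,bj->bi', product_matrix, patent_vec)   # (B, D)
        self.last_classification_vector = fused   # cached for the training-time consistency loss
        return self.classifier(fused)             # (B, num_levels) importance probabilities

def decide_reminder(probs, threshold_level=1):
    """Generate a maintenance reminder for patents whose predicted level exceeds a preset standard."""
    return probs.argmax(dim=-1) >= threshold_level
```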
Specifically, in steps S110 and S120, the text content of the patent to be evaluated is acquired, together with an image and a text description of a product sold by the enterprise. In the actual patent annuity maintenance review process, the key is to semantically understand the content of the patent and the content of the products sold by the enterprise, and to determine whether their degree of relevance meets the requirement, so as to judge whether the patent still has commercial value and to generate a patent annuity maintenance reminder accordingly. In the technical solution of the application, whether a patent needs to be maintained is therefore judged from the patent content and from the images and text descriptions of the enterprise's products, valuable patents that need to be maintained are screened out, and the enterprise is automatically reminded to pay the patent annuity. Specifically, the image of the product is first acquired by a camera, and the text content of the patent to be evaluated and the text description of the product are obtained.
Specifically, in step S130, the image and the text description of the enterprise's product are passed through a cross-modal joint encoder comprising a text encoder and an image encoder to obtain an enterprise product multi-modal feature matrix. That is, for semantic feature extraction of the enterprise's product, the image and text description are processed by the cross-modal joint encoder. The cross-modal joint encoder is based on a CLIP-style model and comprises a text encoder, an image encoder, and an association encoding optimizer. In the technical solution of the application, the text encoder is a context encoder comprising an embedding layer, and the image encoder is a convolutional neural network model serving as a filter. The context encoder semantically understands the text description of the product, extracting context-based semantic association features about the product; the convolutional neural network model performs image semantic feature mining on the product image, extracting implicit semantic features about the product; the association encoding optimizer then jointly encodes the text semantic understanding features and the image semantic distribution features, optimizing the encoding of the image attributes on the basis of the text semantic features, to obtain the enterprise product multi-modal feature matrix. In this way, the resulting matrix not only contains the semantic features of the product's text description but also reflects the implicit association features in the image, thereby improving the semantic expression of the enterprise's product.
Fig. 6 is a flowchart of the cross-modal joint encoding in the automatic patent annuity reminding method according to an embodiment of the application. As shown in fig. 6, the cross-modal joint encoding includes: S210, passing the image of the enterprise's product through the image encoder of the cross-modal joint encoder to obtain an image feature vector; S220, passing the text description of the enterprise's product through the text encoder of the cross-modal joint encoder to obtain a sequence feature vector; and S230, jointly encoding the image feature vector and the sequence feature vector to obtain the enterprise product multi-modal feature matrix. S210 includes: each layer of the convolutional neural network model serving as the filter performs, in its forward pass, a convolution of the input data to obtain a convolution feature map, pooling of the convolution feature map along the feature matrix to obtain a pooled feature map, and nonlinear activation of the pooled feature map to obtain an activated feature map; the output of the last layer of the convolutional neural network is the image feature vector, and the input of its first layer is the image of the enterprise's product. More specifically, S220 includes: performing word segmentation on the text description of the product to convert it into a word sequence; mapping each word in the word sequence to a word embedding vector using the embedding layer of the text encoder to obtain a sequence of word embedding vectors; performing global context semantic encoding on the sequence of word embedding vectors with the transformer of the text encoder to obtain a plurality of global context semantic feature vectors; and concatenating the plurality of global context semantic feature vectors to obtain the sequence feature vector.
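The following is a minimal sketch of such a two-branch encoder. It is an assumption-laden illustration rather than the disclosed model: the layer sizes, vocabulary size, and the mean-pooling used in place of the concatenation described above are all hypothetical choices, and the outer-product joint encoding is only one plausible realization of S230.

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Convolutional filter: convolution, pooling, and activation per layer (sketch)."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, out_dim)

    def forward(self, image):                                  # (B, 3, H, W)
        return self.proj(self.features(image).flatten(1))     # (B, out_dim) image feature vector

class TextEncoder(nn.Module):
    """Embedding layer plus transformer encoder producing a sequence feature vector (sketch)."""
    def __init__(self, vocab_size=30000, dim=256, n_layers=2, n_heads=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, token_ids):                              # (B, T) segmented word indices
        ctx = self.transformer(self.embedding(token_ids))      # (B, T, dim) global context vectors
        # Mean pooling is used here for a fixed dimensionality; the text above describes concatenation.
        return ctx.mean(dim=1)                                 # (B, dim) sequence feature vector

def joint_encode(image_vec, seq_vec):
    """Joint encoding of the two modalities into the product multi-modal feature matrix.
    The outer product is one plausible operator; the disclosure does not fix the exact form."""
    return torch.einsum('bi,bj->bij', image_vec, seq_vec)      # (B, dim, dim)
```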
Specifically, in step S140, word segmentation is performed on the text content of the patent to be evaluated, and the result is passed through a context encoder comprising an embedding layer to obtain the semantic understanding feature vector of the patent to be evaluated. The text content of the patent is a sequence of words, and the words carry contextual semantic associations among them. Therefore, in the technical solution of the application, after word segmentation, the text content is encoded by the context encoder comprising an embedding layer, so that global contextual semantic association features are extracted from the text content to obtain the semantic understanding feature vector of the patent to be evaluated.
Fig. 7 is a flowchart of the context encoding in the automatic patent annuity reminding method according to an embodiment of the application. As shown in fig. 7, the context encoding includes: S310, performing word segmentation on the text content of the patent to be evaluated to convert it into a word sequence; S320, mapping each word in the word sequence to a word embedding vector using the embedding layer of the context encoder to obtain a sequence of word embedding vectors; S330, performing global context semantic encoding on the sequence of word embedding vectors with the transformer of the context encoder to obtain a plurality of global context semantic feature vectors; and S340, concatenating the plurality of global context semantic feature vectors to obtain the semantic understanding feature vector of the patent to be evaluated. Here, S330 includes: arranging the sequence of word embedding vectors one-dimensionally to obtain a global feature vector; computing the product of the global feature vector and the transpose of each word embedding vector to obtain a plurality of self-attention association matrices; normalizing each self-attention association matrix to obtain a plurality of normalized self-attention association matrices; passing each normalized self-attention association matrix through a Softmax function to obtain a plurality of probability values; weighting each word embedding vector by the corresponding probability value to obtain a plurality of context semantic feature vectors; and concatenating the plurality of context semantic feature vectors to obtain the plurality of global context semantic feature vectors.
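A compact sketch of this attention-style weighting is given below. It is a simplified reading of S310-S340 under assumptions, not the exact disclosed procedure: the global vector is taken as the mean of the word embeddings, the per-word scores stand in for the association matrices, and the final concatenation yields a vector whose length depends on the sequence length.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatentContextEncoder(nn.Module):
    """Sketch: embed segmented words, score each embedding against a global vector,
    softmax the scores, weight the embeddings, and concatenate the result."""
    def __init__(self, vocab_size=30000, dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):                          # (B, T) word indices after segmentation
        emb = self.embedding(token_ids)                    # (B, T, dim) word embedding vectors
        global_vec = emb.mean(dim=1, keepdim=True)         # (B, 1, dim) one-dimensional arrangement
        scores = (global_vec * emb).sum(dim=-1)            # (B, T) products with each embedding
        scores = (scores - scores.mean(dim=1, keepdim=True)) / (scores.std(dim=1, keepdim=True) + 1e-6)
        weights = F.softmax(scores, dim=1)                 # probability value per word
        context = emb * weights.unsqueeze(-1)              # weighted word embedding vectors
        return context.flatten(1)                          # concatenated semantic understanding vector
```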
Specifically, in step S150, the enterprise product multi-modal feature matrix and the semantic understanding feature vector of the patent to be evaluated are associatively encoded to obtain the classification feature vector. That is, after the two are obtained, they are further associatively encoded to represent the associated feature distribution between the high-dimensional implicit features of the enterprise's product and the text semantic features of the patent to be evaluated. In a specific example of the application, the association encoding is performed as a matrix-vector multiplication, where M denotes the enterprise product multi-modal feature matrix, M^T denotes the transpose of M, V_n denotes the semantic understanding feature vector of the patent to be evaluated, V denotes the classification feature vector, and ⊗ denotes vector multiplication.
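Since the exact formula appears only as an image in the source, the sketch below realizes the association encoding as a plain matrix-vector product over the symbols defined above; this is an assumption, not the patented formula.

```python
import torch

def association_encode(product_matrix: torch.Tensor, patent_vector: torch.Tensor) -> torch.Tensor:
    """Association encoding of the enterprise product multi-modal feature matrix M (B, D, D)
    with the patent semantic understanding feature vector V_n (B, D); a matrix-vector
    product is assumed because the disclosed formula is not reproduced in the text."""
    return torch.einsum('bij,bj->bi', product_matrix, patent_vector)   # classification vector (B, D)

# Usage sketch with random tensors.
M = torch.randn(2, 64, 64)
V_n = torch.randn(2, 64)
V = association_encode(M, V_n)      # (2, 64)
```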
Specifically, in steps S160 and S170, the classification feature vector is passed through a classifier to obtain a classification result representing the importance level label of the patent to be evaluated, and a patent annuity maintenance reminder is generated based on the classification result. That is, after the classification feature vector is obtained, it is further passed through the classifier to analyze the importance level of the patent to be evaluated. In one example, the classifier includes a plurality of fully connected layers and a Softmax layer cascaded with the last fully connected layer. During classification, the fully connected layers apply several full-connection encodings to the classification feature vector to obtain an encoded classification feature vector, which is then input to the Softmax layer, i.e., classified with the Softmax function to obtain the classification label. In the technical solution of the application, the labels of the classifier are importance level labels of the patent to be evaluated. After the classification result is obtained, the importance of the patent can therefore be detected and assessed on that basis, and a patent annuity maintenance reminder is generated for patents whose importance exceeds a preset standard.
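A minimal sketch of such a classifier follows; the layer sizes and the number of importance levels are assumptions, not values from the disclosure.

```python
import torch
import torch.nn as nn

class ImportanceClassifier(nn.Module):
    """Several fully connected layers followed by a Softmax, as described above (sketch)."""
    def __init__(self, in_dim=64, num_levels=3):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_levels),
        )

    def forward(self, classification_vector):                           # (B, in_dim)
        return torch.softmax(self.fc(classification_vector), dim=-1)    # importance level probabilities
```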
It should be appreciated that the cross-modal joint encoder comprising the text encoder and the image encoder, the context encoder comprising the embedding layer, and the classifier need to be trained before inference is performed with the neural network model described above. That is, the automatic patent annuity reminding method further comprises a training phase for training the cross-modal joint encoder, the context encoder, and the classifier.
Fig. 3 is a flowchart of the training phase in the automatic patent annuity reminding method according to an embodiment of the application. As shown in fig. 3, the automatic patent annuity reminding method further includes a training phase, comprising the steps of: S1110, acquiring training data, wherein the training data includes training text content of the patent to be evaluated, a training image and a training text description of the product sold by the enterprise, and a true value of the importance level label of the patent to be evaluated; S1120, passing the training image and the training text description of the enterprise's product through the cross-modal joint encoder comprising a text encoder and an image encoder to obtain a training enterprise product multi-modal feature matrix; S1130, performing word segmentation on the training text content of the patent to be evaluated and then passing the result through the context encoder comprising an embedding layer to obtain a training semantic understanding feature vector of the patent to be evaluated; S1140, performing association encoding on the training enterprise product multi-modal feature matrix and the training semantic understanding feature vector to obtain a training classification feature vector; S1150, passing the training classification feature vector through the classifier to obtain a classification loss function value; S1160, calculating a manifold convex decomposition consistency loss function of the training classification feature vector; and S1170, training the cross-modal joint encoder, the context encoder, and the classifier by back propagation of gradient descent based on a weighted sum of the classification loss function value and the manifold convex decomposition consistency loss function.
Fig. 5 is a system architecture diagram of the training phase in the automatic patent annuity reminding method according to an embodiment of the application. As shown in fig. 5, during training, training data is first acquired, including training text content of the patent to be evaluated, a training image and a training text description of the product sold by the enterprise, and a true value of the importance level label of the patent to be evaluated. The training image and training text description of the enterprise's product are then passed through the cross-modal joint encoder comprising a text encoder and an image encoder to obtain a training enterprise product multi-modal feature matrix; word segmentation is performed on the training text content of the patent to be evaluated, and the result is passed through the context encoder comprising an embedding layer to obtain a training semantic understanding feature vector of the patent to be evaluated; the training enterprise product multi-modal feature matrix and the training semantic understanding feature vector are associatively encoded to obtain a training classification feature vector; the training classification feature vector is passed through the classifier to obtain a classification loss function value; a manifold convex decomposition consistency loss function of the training classification feature vector is calculated; and, finally, the cross-modal joint encoder, the context encoder, and the classifier are trained by back propagation of gradient descent based on a weighted sum of the classification loss function value and the manifold convex decomposition consistency loss function.
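The sketch below shows what one such training iteration could look like. It is an assumption-based illustration: the loss weights alpha and beta, the cached last_classification_vector attribute, and all names are hypothetical, and consistency_loss_fn refers to the loss sketched after the formula further below.

```python
import torch
import torch.nn as nn

def train_step(model, consistency_loss_fn, batch, optimizer, alpha=1.0, beta=0.1):
    """One training iteration following S1110-S1170: a weighted sum of the classification loss
    and the manifold convex decomposition consistency loss is back-propagated (sketch)."""
    product_image, product_text, patent_text, importance_label = batch
    probs = model(product_image, product_text, patent_text)            # importance level probabilities
    classification_vector = model.last_classification_vector           # hypothetical cached feature (B, D)

    # Classification loss on Softmax outputs (negative log-likelihood of the true label).
    cls_loss = nn.functional.nll_loss(torch.log(probs + 1e-9), importance_label)
    mc_loss = consistency_loss_fn(classification_vector)               # manifold consistency loss value
    loss = alpha * cls_loss + beta * mc_loss                           # weighted sum of the two terms

    optimizer.zero_grad()
    loss.backward()                                                    # back propagation of gradient descent
    optimizer.step()
    return loss.item()
```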
In particular, the enterprise product multi-modal feature matrix contains the text encoding features of the product's text description and the image encoding features of its image, while the semantic understanding feature vector of the patent to be evaluated contains the contextual text semantic association features of the patent's text content. When the two are associatively encoded into the classification feature vector, that vector therefore mixes high-dimensional image and text semantic features of different modalities and different scales, together with their association features, and it is desirable to promote the global feature association of the classification feature vector.
Therefore, the applicant multiplies the classification feature vector by its own transpose to obtain an association feature matrix M. Since the association feature matrix M expresses the position-wise associations of the classification feature vector at feature-value granularity, if the manifold expression of M in the high-dimensional feature space can be kept consistent between the full-space association dimension and the feature-value-wise association dimension, the global feature association expression of M can also be improved. Based on this, a manifold convex decomposition consistency factor of the feature matrix is introduced as a loss function for the association feature matrix M.
In one specific example of the application, calculating the manifold convex decomposition consistency loss function of the training classification feature vector includes: computing the position-wise association of the training classification feature vector with its own transpose to obtain an association feature matrix; and calculating the manifold convex decomposition consistency factor of the association feature matrix to obtain the manifold convex decomposition consistency loss function, with the diagonal vector defined as
V_c = (m_{1,1}, m_{2,2}, ..., m_{L,L})
where M denotes the association feature matrix, m_{i,j} denotes the feature value at the i-th row and j-th column of M, V_r and V_c denote, respectively, the mean vector of the row vectors of M and its diagonal vector, ||·||_1 denotes the 1-norm of a vector, ||·||_F denotes the Frobenius norm of a matrix, L is the length of the feature vector, w_1, w_2 and w_3 are weight hyperparameters, and sigmoid denotes the activation function applied to obtain the manifold convex decomposition consistency loss function.
That is, the row (or column) dimension of the association feature matrix M expresses the relevance of each feature value of the classification feature vector to the feature vector as a whole, while its diagonal expresses the self-relevance of each feature value. The manifold convex decomposition consistency factor keeps the distribution relevance of M consistent across the sub-dimensions represented by the row direction and the diagonal direction: by flattening the set of finite convex polytopes of the feature manifold represented by M and constraining its geometric convex decomposition with shape weights on the sub-dimension associations, it promotes consistency of the convex geometric representation of the feature manifold of M across the resolvable dimensions represented by the rows and the diagonal. In this way, the manifold representation of M in the high-dimensional feature space stays consistent between the full-space association dimension and the feature-value-wise association dimension, and when the model is trained by back-propagating gradients through the association feature matrix M, the global feature association expression of the classification feature vector is improved, which in turn improves the classification effect obtained from it.
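Because the closed-form factor is shown only as an image in the source, the sketch below implements one plausible reading of the recoverable definitions: it builds the association matrix from the classification feature vector, compares the mean row vector V_r with the diagonal vector V_c, normalizes by the Frobenius norm, and passes the combination through a sigmoid weighted by w_1, w_2 and w_3. The exact combination is an assumption, not the patented formula.

```python
import torch

def manifold_consistency_loss(v: torch.Tensor, w1=1.0, w2=1.0, w3=1.0) -> torch.Tensor:
    """Plausible reading of the manifold convex decomposition consistency factor (assumption).
    v: training classification feature vector of shape (B, L)."""
    m = torch.einsum('bi,bj->bij', v, v)                  # association feature matrix M = v v^T, (B, L, L)
    v_r = m.mean(dim=1)                                   # mean of the row vectors, (B, L)
    v_c = torch.diagonal(m, dim1=1, dim2=2)               # diagonal vector (m_11, ..., m_LL), (B, L)
    l = v.shape[-1]
    fro = torch.linalg.norm(m, ord='fro', dim=(1, 2))     # ||M||_F
    term = w1 * (v_r - v_c).abs().sum(dim=1) + w2 * fro / l   # row-vs-diagonal discrepancy + scale term
    return torch.sigmoid(w3 * term).mean()
```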
In summary, the automatic patent annuity reminding method according to the embodiment of the application has been described. It uses a deep-learning neural network model to mine the association between the semantic understanding features of the text content of the patent to be evaluated and the semantic understanding features of the enterprise's products on sale, so as to detect and assess the importance of the patent to be evaluated and generate a patent annuity maintenance reminder for patents of higher importance. In this way, the time and effort spent on manual review in the patent management process can be greatly reduced, and maintenance costs and risks are lowered.
Exemplary System
Fig. 8 is a block diagram of an automatic patent annuity reminder system in accordance with an embodiment of the application. As shown in fig. 8, an automatic patent annuity reminding system 300 according to an embodiment of the application includes: a patent text content collection module 310; an enterprise sales product information acquisition module 320; a cross-modality joint coding module 330; a context encoding module 340; an associated encoding module 350; a classification result generation module 360; and, a reminder module 370.
The patent text content acquisition module 310 is configured to acquire text content of a patent to be evaluated; the enterprise sales product information acquisition module 320 is configured to acquire an image and a text description of an enterprise sales product; the cross-mode joint coding module 330 is configured to pass the image and the text description of the product sold by the enterprise through a cross-mode joint coder including a text coder and an image coder to obtain a multi-mode feature matrix of the product sold by the enterprise; the context coding module 340 is configured to obtain a semantic understanding feature vector of the patent to be evaluated through a context encoder including an embedded layer after performing word segmentation processing on the text content of the patent to be evaluated; the association encoding module 350 is configured to perform association encoding on the multi-modal feature matrix of the enterprise product and the semantic understanding feature vector of the patent to be evaluated to obtain a classification feature vector; the classification result generating module 360 is configured to pass the classification feature vector through a classifier to obtain a classification result, where the classification result is used to represent an importance level label of the patent to be evaluated; and the reminding module 370 is configured to generate a patent annual fee maintenance reminder based on the classification result.
In one example, in the automatic patent annuity reminding system 300, the cross-modal joint encoding module 330 is configured to: pass the image of the enterprise's product through the image encoder of the cross-modal joint encoder to obtain an image feature vector; pass the text description of the enterprise's product through the text encoder of the cross-modal joint encoder to obtain a sequence feature vector; and jointly encode the image feature vector and the sequence feature vector to obtain the enterprise product multi-modal feature matrix. Passing the image through the image encoder includes: each layer of the convolutional neural network model serving as the filter performs, in its forward pass, a convolution of the input data to obtain a convolution feature map, pooling of the convolution feature map along the feature matrix to obtain a pooled feature map, and nonlinear activation of the pooled feature map to obtain an activated feature map; the output of the last layer of the convolutional neural network is the image feature vector, and the input of its first layer is the image of the enterprise's product. More specifically, passing the text description through the text encoder includes: performing word segmentation on the text description of the product to convert it into a word sequence; mapping each word in the word sequence to a word embedding vector using the embedding layer of the text encoder to obtain a sequence of word embedding vectors; performing global context semantic encoding on the sequence of word embedding vectors with the transformer of the text encoder to obtain a plurality of global context semantic feature vectors; and concatenating the plurality of global context semantic feature vectors to obtain the sequence feature vector.
In one example, in the above patent annual fee automatic reminding system 300, the context encoding module 340 is configured to: perform word segmentation on the text content of the patent to be evaluated to convert the text content of the patent to be evaluated into a word sequence composed of a plurality of words; map each word in the word sequence into a word embedding vector using the embedding layer of the context encoder comprising an embedding layer to obtain a sequence of word embedding vectors; perform global context semantic encoding on the sequence of word embedding vectors, based on the Transformer concept, using a Transformer of the context encoder comprising an embedding layer to obtain a plurality of global context semantic feature vectors; and concatenate the plurality of global context semantic feature vectors to obtain the semantic understanding feature vector of the patent to be evaluated.
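A corresponding sketch of the context encoder for the patent text follows. It is again a PyTorch-based assumption: the vocabulary size, dimensions, and random token indices in the usage example are placeholders, and the per-word context vectors are concatenated ("cascaded") into a single feature vector as described above.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, vocab_size=30000, dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)          # the embedding layer
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids):                               # (B, T) segmented-word indices
        ctx = self.transformer(self.embedding(token_ids))       # (B, T, dim) global context vectors
        return ctx.flatten(start_dim=1)                         # cascade (concatenate) into (B, T*dim)

# Usage (illustrative): indices would come from a vocabulary built after
# word segmentation of the patent text.
encoder = ContextEncoder()
ids = torch.randint(0, 30000, (1, 64))       # one patent text of 64 segmented words
patent_vector = encoder(ids)                  # semantic understanding feature vector, shape (1, 64*128)
```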
In one example, in the above patent annual fee automatic reminding system 300, the association coding module 350 is configured to: perform association coding on the enterprise product multi-modal feature matrix and the semantic understanding feature vector of the patent to be evaluated using the following formula to obtain the classification feature vector; wherein, in the formula, M represents the enterprise product multi-modal feature matrix, M^T represents a transpose matrix of the enterprise product multi-modal feature matrix, V_n represents the semantic understanding feature vector of the patent to be evaluated, V represents the classification feature vector, and the product operator denotes vector multiplication.
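Because the association-coding formula itself is not reproduced in this text, the following is only one plausible realisation: it assumes the classification feature vector is obtained by multiplying the enterprise product multi-modal feature matrix with the patent semantic understanding feature vector. Formulations involving the transpose M^T would fit the stated symbols equally well.

```python
import torch

def association_encode(M: torch.Tensor, v_n: torch.Tensor) -> torch.Tensor:
    """M: (L, d) enterprise product multi-modal feature matrix.
    v_n: (d,) patent semantic understanding feature vector.
    Returns one plausible classification feature vector V of shape (L,)."""
    return M @ v_n
```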
In summary, the patent annual fee automatic reminding system 300 according to the embodiment of the present application has been illustrated. It uses a deep-learning-based neural network model to mine the association between the semantic understanding features of the text content of the patent to be evaluated and the multi-modal features of the products sold by the enterprise, so as to detect and evaluate the importance of the patent to be evaluated and to generate a patent annual fee maintenance reminder for patents of higher importance. In this way, the time and effort spent on manual review in the patent management workflow can be greatly reduced, and maintenance costs and risks can be lowered.
As described above, the patent annual fee automatic reminding system according to the embodiment of the present application can be implemented in various terminal devices. In one example, the patent annual fee automatic reminding system 300 according to the embodiment of the present application may be integrated into a terminal device as a software module and/or a hardware module. For example, the patent annual fee automatic reminding system 300 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the patent annual fee automatic reminding system 300 may equally be one of the many hardware modules of the terminal device.
Alternatively, in another example, the patent annual fee automatic reminding system 300 and the terminal device may be separate devices, and the patent annual fee automatic reminding system 300 may be connected to the terminal device through a wired and/or wireless network and exchange interactive information in an agreed data format.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 9.
Fig. 9 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 9, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
The memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the functions of the patent annual fee automatic reminding method of the various embodiments of the present application described above and/or other desired functions. Various content, such as the training classification feature vector, may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 may output various information, including the classification result, to the outside. The output device 14 may include, for example, a display, a speaker, a printer, a communication network, and the remote output devices connected thereto.
Of course, for simplicity, only some of the components of the electronic device 10 that are relevant to the present application are shown in fig. 9; components such as buses and input/output interfaces are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage medium
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the patent annual fee automatic reminding method according to the various embodiments of the present application described in the "exemplary methods" section of the present specification.
Program code for performing the operations of the embodiments of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps of the patent annual fee automatic reminding method according to the various embodiments of the present application described in the above-mentioned "exemplary method" section of the present specification.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in the present application are merely illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, such devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words meaning "including but not limited to" and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to".
It should also be noted that, in the apparatuses, devices, and methods of the present application, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. An automatic patent annual fee reminding method, characterized by comprising the following steps:
acquiring text content of a patent to be evaluated;
acquiring an image and a text description of an enterprise sales product;
passing the image and the text description of the enterprise sales product through a cross-modal joint encoder comprising a text encoder and an image encoder to obtain an enterprise product multi-modal feature matrix;
performing word segmentation on the text content of the patent to be evaluated and then passing it through a context encoder comprising an embedding layer to obtain a semantic understanding feature vector of the patent to be evaluated;
performing association coding on the enterprise product multi-modal feature matrix and the semantic understanding feature vector of the patent to be evaluated to obtain a classification feature vector;
passing the classification feature vector through a classifier to obtain a classification result, wherein the classification result is used to represent an importance level label of the patent to be evaluated; and
generating a patent annual fee maintenance reminder based on the classification result.
2. The automatic patent annual fee reminding method according to claim 1, wherein passing the image and the text description of the enterprise sales product through a cross-modal joint encoder comprising a text encoder and an image encoder to obtain an enterprise product multi-modal feature matrix comprises:
passing the image of the enterprise sales product through an image encoder of the cross-modal joint encoder to obtain an image feature vector;
passing the text description of the enterprise sales product through a text encoder of the cross-modal joint encoder to obtain a sequence feature vector; and
performing joint encoding on the image feature vector and the sequence feature vector to obtain the enterprise product multi-modal feature matrix.
3. The automatic patent annual fee reminding method according to claim 2, wherein passing the image of the enterprise sales product through the image encoder of the cross-modal joint encoder to obtain the image feature vector comprises: using each layer of a convolutional neural network model serving as a filter to perform, in the forward pass of that layer, the following operations on input data:
performing convolution processing on the input data to obtain a convolution feature map;
pooling the convolution feature map based on a feature matrix to obtain a pooled feature map; and
performing nonlinear activation on the pooled feature map to obtain an activated feature map;
wherein the output of the last layer of the convolutional neural network serving as the filter is the image feature vector, and the input of the first layer of the convolutional neural network serving as the filter is the image of the enterprise sales product.
4. The automatic patent annual fee reminding method according to claim 3, wherein passing the text description of the enterprise sales product through the text encoder of the cross-modal joint encoder to obtain the sequence feature vector comprises:
performing word segmentation on the text description of the enterprise sales product to convert the text description of the enterprise sales product into a word sequence composed of a plurality of words;
mapping each word in the word sequence into a word embedding vector using an embedding layer of the text encoder of the cross-modal joint encoder to obtain a sequence of word embedding vectors;
performing global context semantic encoding on the sequence of word embedding vectors, based on the Transformer concept, using a Transformer of the text encoder of the cross-modal joint encoder to obtain a plurality of global context semantic feature vectors; and
concatenating the plurality of global context semantic feature vectors to obtain the sequence feature vector.
5. The automatic patent annual fee reminding method according to claim 4, wherein performing word segmentation on the text content of the patent to be evaluated and then passing it through the context encoder comprising an embedding layer to obtain the semantic understanding feature vector of the patent to be evaluated comprises:
performing word segmentation on the text content of the patent to be evaluated to convert the text content of the patent to be evaluated into a word sequence composed of a plurality of words;
mapping each word in the word sequence into a word embedding vector using the embedding layer of the context encoder to obtain a sequence of word embedding vectors;
performing global context semantic encoding on the sequence of word embedding vectors, based on the Transformer concept, using a Transformer of the context encoder to obtain a plurality of global context semantic feature vectors; and
concatenating the plurality of global context semantic feature vectors to obtain the semantic understanding feature vector of the patent to be evaluated.
6. The automatic patent annual fee reminding method according to claim 5, wherein performing association coding on the enterprise product multi-modal feature matrix and the semantic understanding feature vector of the patent to be evaluated to obtain a classification feature vector comprises: performing association coding on the enterprise product multi-modal feature matrix and the semantic understanding feature vector of the patent to be evaluated using the following formula to obtain the classification feature vector;
wherein, in the formula, M represents the enterprise product multi-modal feature matrix, M^T represents a transpose matrix of the enterprise product multi-modal feature matrix, V_n represents the semantic understanding feature vector of the patent to be evaluated, V represents the classification feature vector, and the product operator denotes vector multiplication.
7. The automatic patent annual fee reminding method according to claim 6, further comprising a training step: training the cross-modal joint encoder comprising a text encoder and an image encoder, the context encoder comprising an embedding layer, and the classifier;
wherein the training step comprises:
acquiring training data, wherein the training data comprises training text content of the patent to be evaluated, a training image and a training text description of the enterprise sales product, and a true value of the importance level label of the patent to be evaluated;
passing the training image and the training text description of the enterprise sales product through the cross-modal joint encoder comprising a text encoder and an image encoder to obtain a training enterprise product multi-modal feature matrix;
performing word segmentation on the training text content of the patent to be evaluated and then passing it through the context encoder comprising an embedding layer to obtain a training semantic understanding feature vector of the patent to be evaluated;
performing association coding on the training enterprise product multi-modal feature matrix and the training semantic understanding feature vector of the patent to be evaluated to obtain a training classification feature vector;
passing the training classification feature vector through the classifier to obtain a classification loss function value;
calculating a manifold convex decomposition consistency loss function of the training classification feature vector; and
training the cross-modal joint encoder comprising a text encoder and an image encoder, the context encoder comprising an embedding layer, and the classifier based on a weighted sum of the classification loss function value and the manifold convex decomposition consistency loss function, through back propagation of gradient descent (an illustrative sketch of this training step is given after the claims).
8. The automatic patent annual fee reminding method according to claim 7, wherein calculating a manifold convex decomposition consistency loss function of the training classification feature vector comprises:
calculating a position-by-position association of the training classification feature vector with a transpose vector of the training classification feature vector to obtain an association feature matrix; and
calculating a manifold convex decomposition consistency factor of the association feature matrix according to the following formula to obtain the manifold convex decomposition consistency loss function;
wherein, the formula is:
V_c = (m_1,1, m_2,2, ..., m_L,L)
wherein M represents the association feature matrix, m_i,j represents the feature value at the i-th row and j-th column of the association feature matrix, V_r and V_c respectively represent the row-wise mean vector and the diagonal vector of the matrix composed of m_i,j ∈ M, ||·||_1 represents the 1-norm of a vector, ||·||_F represents the Frobenius norm of a matrix, L is the length of the feature vector, w_1, w_2 and w_3 are weight hyperparameters, Sigmoid represents the activation function, and the resulting value is the manifold convex decomposition consistency loss function.
9. The automatic patent annual fee reminding method according to claim 8, wherein passing the training classification feature vector through the classifier to obtain a classification loss function value comprises:
processing the training classification feature vector using the classifier to obtain a training classification result; and
calculating a cross entropy loss function value between the training classification result and the true value of the importance level label of the patent to be evaluated as the classification loss function value.
10. An automatic patent annual fee reminding system, comprising:
a patent text content acquisition module, configured to acquire text content of a patent to be evaluated;
an enterprise sales product information acquisition module, configured to acquire an image and a text description of an enterprise sales product;
a cross-modal joint coding module, configured to pass the image and the text description of the enterprise sales product through a cross-modal joint encoder comprising a text encoder and an image encoder to obtain an enterprise product multi-modal feature matrix;
a context coding module, configured to perform word segmentation on the text content of the patent to be evaluated and then pass it through a context encoder comprising an embedding layer to obtain a semantic understanding feature vector of the patent to be evaluated;
an association coding module, configured to perform association coding on the enterprise product multi-modal feature matrix and the semantic understanding feature vector of the patent to be evaluated to obtain a classification feature vector;
a classification result generation module, configured to pass the classification feature vector through a classifier to obtain a classification result, wherein the classification result is used to represent an importance level label of the patent to be evaluated; and
a reminding module, configured to generate a patent annual fee maintenance reminder based on the classification result.
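As flagged at the end of claim 7, the following is an illustrative sketch of the training step described in claims 7 to 9. It computes the position-wise association matrix of the training classification feature vector, its row-wise mean and diagonal vectors, a weighted, sigmoid-squashed norm term standing in for the manifold convex decomposition consistency loss (whose exact formula is not reproduced in the text), and a cross-entropy classification loss, then back-propagates their weighted sum. PyTorch is assumed, and all weights, shapes, and the precise combination of loss terms are assumptions rather than the disclosed formulas.

```python
# Illustrative training-step sketch for claims 7-9 (PyTorch assumed; the exact
# manifold convex decomposition consistency formula is not reproduced in the
# text, so the combination of terms below is an assumption).
import torch
import torch.nn.functional as F

def manifold_consistency_loss(v, w1=1.0, w2=1.0, w3=1.0):
    """v: (B, L) training classification feature vectors."""
    losses = []
    for vec in v:
        A = torch.outer(vec, vec)             # position-wise association with its transpose
        V_r = A.mean(dim=1)                   # row-wise mean vector
        V_c = torch.diagonal(A)               # diagonal vector (m_1,1, m_2,2, ..., m_L,L)
        L = vec.numel()
        term = (w1 * V_r.abs().sum()          # 1-norm of V_r
                + w2 * V_c.abs().sum()        # 1-norm of V_c
                + w3 * torch.norm(A)) / L     # Frobenius norm of A
        losses.append(torch.sigmoid(term))    # activation function named in claim 8
    return torch.stack(losses).mean()

def train_step(joint_encoder, context_encoder, classifier, optimizer,
               product_image, product_text_ids, patent_text_ids, label,
               w_cls=1.0, w_manifold=0.1):
    M = joint_encoder(product_image, product_text_ids)         # (B, L, d) feature matrix
    v_n = context_encoder(patent_text_ids)                      # (B, d) semantic vector
    v = torch.einsum('bld,bd->bl', M, v_n)                      # association coding (assumed form)
    logits = classifier(v)                                       # training classification result
    loss = (w_cls * F.cross_entropy(logits, label)               # classification loss vs. importance label
            + w_manifold * manifold_consistency_loss(v))         # weighted sum of the two losses
    optimizer.zero_grad()
    loss.backward()                                               # back propagation of gradient descent
    optimizer.step()
    return float(loss)
```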
CN202311565644.0A 2023-11-22 2023-11-22 Automatic reminding system and method for patent annual fee Pending CN117474498A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311565644.0A CN117474498A (en) 2023-11-22 2023-11-22 Automatic reminding system and method for patent annual fee

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311565644.0A CN117474498A (en) 2023-11-22 2023-11-22 Automatic reminding system and method for patent annual fee

Publications (1)

Publication Number Publication Date
CN117474498A true CN117474498A (en) 2024-01-30

Family

ID=89632887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311565644.0A Pending CN117474498A (en) 2023-11-22 2023-11-22 Automatic reminding system and method for patent annual fee

Country Status (1)

Country Link
CN (1) CN117474498A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118037423A (en) * 2024-02-18 2024-05-14 北京佳格天地科技有限公司 Method and system for evaluating repayment willingness of farmers after agricultural loans
CN118278048A (en) * 2024-05-31 2024-07-02 福建中信网安信息科技有限公司 Cloud computing-based data asset security monitoring system and method
CN118278048B (en) * 2024-05-31 2024-09-24 福建中信网安信息科技有限公司 Cloud computing-based data asset security monitoring system and method

Similar Documents

Publication Publication Date Title
CN115203380B (en) Text processing system and method based on multi-mode data fusion
US11636147B2 (en) Training neural networks to perform tag-based font recognition utilizing font classification
CN110555469B (en) Method and device for processing interactive sequence data
CN108959482B (en) Single-round dialogue data classification method and device based on deep learning and electronic equipment
US20240013005A1 (en) Method and system for identifying citations within regulatory content
CN117474498A (en) Automatic reminding system and method for patent annual fee
CN115796173A (en) Data processing method and system for supervision submission requirements
CN109471944B (en) Training method, device and readable storage medium for text classification model
CN103268317A (en) System and method for semantically annotating images
CN115860271A (en) System and method for managing art design scheme
CN116665086A (en) Teaching method and system based on intelligent analysis of learning behaviors
CN116089648B (en) File management system and method based on artificial intelligence
CN116821195B (en) Method for automatically generating application based on database
CN116285481A (en) Method and system for producing and processing paint
CN115205788A (en) Food material quality monitoring system
CN116759053A (en) Medical system prevention and control method and system based on Internet of things system
CN116993446A (en) Logistics distribution management system and method for electronic commerce
JP7005045B2 (en) Limit attack method against Naive Bayes classifier
EP4064038B1 (en) Automated generation and integration of an optimized regular expression
US20230376692A1 (en) Technical document issues scanner
CN117408596A (en) Intelligent logistics distribution management system and method thereof
CN114970775B (en) Clustering-based military industry group personnel information labeling method
CN117522449A (en) Feature fusion-based intelligent business data processing method and system
CN116304890A (en) Electronic book management method and system for intelligently identifying book category
CN115238645A (en) Asset data identification method and device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20240130