CN113868370A - Text recommendation method and device, electronic equipment and computer-readable storage medium - Google Patents

Text recommendation method and device, electronic equipment and computer-readable storage medium

Info

Publication number
CN113868370A
CN113868370A
Authority
CN
China
Prior art keywords
text, information, similarity, layer, commodity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110963194.5A
Other languages
Chinese (zh)
Inventor
陈海波 (Chen Haibo)
罗志鹏 (Luo Zhipeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyan Technology Beijing Co ltd
Original Assignee
Shenyan Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyan Technology Beijing Co ltd
Priority to CN202110963194.5A
Publication of CN113868370A


Classifications

    • G06F 16/3344: Query execution using natural language analysis (information retrieval of unstructured textual data)
    • G06F 16/335: Filtering based on additional data, e.g. user or group profiles (information retrieval of unstructured textual data)
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting (pattern recognition)
    • G06F 18/22: Matching criteria, e.g. proximity measures (pattern recognition)


Abstract

The application provides a text recommendation method, a text recommendation apparatus, an electronic device, and a computer-readable storage medium. The method comprises the following steps: acquiring data to be matched and the data type of the data to be matched, where the data type is text, audio, or image; determining a query statement corresponding to the data to be matched based on the data type; acquiring the similarity between the query statement and each commodity text; determining at least one commodity text as a text to be recommended based on the similarity between the query statement and each commodity text; and ranking all the texts to be recommended according to the similarity. The method and apparatus can automatically match the corresponding texts to be recommended against the data to be matched input by the user and automatically rank all the texts to be recommended by similarity; they have a wide application range, meet practical application requirements, and offer a high level of intelligence.

Description

Text recommendation method and device, electronic equipment and computer-readable storage medium
Technical Field
The present application relates to the field of electronic commerce technologies, and in particular, to a text recommendation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Network-based electronic commerce is gaining broad acceptance and will become one of the main directions of future economic activity. Because electronic commerce products involve multimedia information, the new generation of information retrieval places greater emphasis on intelligent, personalized, and distributed processing. Whether users can accurately and quickly find the information they need bears directly on whether electronic commerce can develop healthily, so information retrieval has become a research focus and hotspot for information workers, and an intelligent text recommendation algorithm is urgently needed to meet practical application requirements.
Disclosure of Invention
The present application provides a text recommendation method and apparatus, an electronic device, and a computer-readable storage medium, which automatically match the corresponding texts to be recommended against the data to be matched input by the user and automatically rank all the texts to be recommended by similarity, meeting practical application requirements with a high level of intelligence.
The purpose of the application is achieved by the following technical solutions:
In a first aspect, the present application provides a text recommendation method, including: acquiring data to be matched and the data type of the data to be matched, where the data type is text, audio, or image; determining a query statement corresponding to the data to be matched based on the data type; acquiring the similarity between the query statement and each commodity text; determining at least one commodity text as a text to be recommended based on the similarity between the query statement and each commodity text; and ranking all the texts to be recommended according to the similarity. This scheme is advantageous in that the corresponding query statement is determined based on the data type of the data to be matched, the similarity between the query statement and each commodity text is obtained, the texts to be recommended are selected from the commodity texts based on that similarity, and the texts to be recommended are ranked; the corresponding texts to be recommended can therefore be matched automatically against the data input by the user and ranked automatically by similarity, giving a wide application range, meeting practical application requirements, and offering a high level of intelligence.
In some optional embodiments, the determining, based on the data type, a query statement corresponding to the data to be matched includes: when the data type is text, taking the data to be matched as the query statement; and when the data type is not text, inputting the data to be matched into a text conversion model corresponding to the data type to obtain the query statement. This scheme is advantageous in that, when the data type is not text, the text conversion model converts the data to be matched into text data from which the query statement is obtained; a corresponding query statement can thus be obtained whether or not the data to be matched is text, giving a wide application range.
In some optional embodiments, the obtaining the similarity between the query statement and each commodity text includes: for each commodity text, inputting the query statement and the commodity text into a text similarity model to obtain the similarity between the query statement and the commodity text. This scheme is advantageous in that the similarity between the query statement and the commodity text is obtained automatically by the text similarity model, with high data processing efficiency and a high level of intelligence.
In some optional embodiments, the text similarity model includes a similarity module, a threshold module, and an output module, and the inputting the query statement and the commodity text into the text similarity model to obtain the similarity between the query statement and the commodity text includes: inputting the query statement and the commodity text into the similarity module to obtain similarity information; inputting the query statement and the commodity text into the threshold module to obtain threshold information; and inputting the similarity information and the threshold information into the output module to obtain the similarity between the query statement and the commodity text. This scheme is advantageous in that the similarity module produces the similarity information, the threshold module produces the threshold information and controls the flow of information, and the output module combines the similarity information and the threshold information to output the similarity between the query statement and the commodity text. In short, the text similarity model superimposes a gating (threshold) mechanism on top of the similarity module and lets the threshold module control information flow, which improves the performance of the text similarity model.
In some optional embodiments, the similarity module includes an embedding layer, a first batch normalization layer, a first context layer, an attention alignment layer, a second context layer, a pooling layer, a splicing layer, a second batch normalization layer, and a first full connection layer, and the inputting the query statement and the commodity text into the similarity module to obtain the similarity information includes: inputting the query statement and the commodity text into the embedding layer to obtain embedded information; inputting the embedded information into the first batch normalization layer to obtain first batch-normalized information; inputting the first batch-normalized information into the first context layer to obtain a first intermediate feature of the query statement and a first intermediate feature of the commodity text; inputting the first intermediate feature of the query statement and the first intermediate feature of the commodity text into the attention alignment layer to obtain alignment information; constructing a query statement vector from the first intermediate feature of the query statement and the alignment information, and constructing a commodity text vector from the first intermediate feature of the commodity text and the alignment information; inputting the query statement vector and the commodity text vector into the second context layer to obtain a second intermediate feature of the query statement and a second intermediate feature of the commodity text; inputting the second intermediate feature of the query statement and the second intermediate feature of the commodity text into the pooling layer, respectively, to obtain query statement pooling information and commodity text pooling information; inputting the query statement pooling information and the commodity text pooling information into the splicing layer to obtain splicing information; inputting the splicing information into the second batch normalization layer to obtain second batch-normalized information; and inputting the second batch-normalized information into the first full connection layer to obtain the similarity information. This scheme is advantageous in that the similarity module combines the analysis capability of the first context layer and the second context layer with the attention mechanism of the attention alignment layer, making it highly effective for text matching.
In some optional embodiments, the threshold module includes a feature extraction layer, a normalization layer, a second full connection layer, and an activation layer, and the inputting the query statement and the commodity text into the threshold module to obtain the threshold information includes: inputting the query statement and the commodity text into the feature extraction layer to obtain feature extraction information; inputting the feature extraction information into the normalization layer to obtain normalized information; inputting the normalized information into the second full connection layer to obtain second full connection information; and inputting the second full connection information into the activation layer to obtain the threshold information. This scheme is advantageous in that the threshold information is obtained by inputting the query statement and the commodity text into the threshold module and processing them sequentially through the feature extraction layer, the normalization layer, the second full connection layer, and the activation layer, with a high level of intelligence.
In some optional embodiments, the output module includes a multiplication layer, a third full connection layer, a random inactivation regularization layer, and an output layer, and the inputting the similarity information and the threshold information into the output module to obtain the similarity between the query statement and the commodity text includes: inputting the similarity information and the threshold information into the multiplication layer to obtain a multiplication result; inputting the multiplication result into the third full connection layer to obtain third full connection information; inputting the third full connection information into the random inactivation regularization layer to obtain regularization information; and outputting the regularization information through the output layer as the similarity between the query statement and the commodity text. This scheme is advantageous in that the similarity information is not output directly through the output layer; instead, it is first multiplied with the threshold information in the multiplication layer, then processed sequentially by the third full connection layer, the random inactivation regularization layer, and the output layer, which finally outputs the similarity of the commodity text, with a high level of intelligence.
In a second aspect, the present application provides a text recommendation apparatus, the apparatus comprising: a data acquisition module, used for acquiring data to be matched and the data type of the data to be matched, where the data type is text, audio, or image; a query statement module, used for determining a query statement corresponding to the data to be matched based on the data type; a similarity obtaining module, used for obtaining the similarity between the query statement and each commodity text; a text-to-be-recommended module, used for determining at least one commodity text as a text to be recommended based on the similarity between the query statement and each commodity text; and a text sorting module, used for ranking all the texts to be recommended according to the similarity.
In some optional embodiments, the query statement module comprises: the first statement unit is used for taking the data to be matched as the query statement when the data type is a text; and the second statement unit is used for inputting the data to be matched into a text conversion model corresponding to the data type to obtain the query statement when the data type is not a text.
In some optional embodiments, the similarity obtaining module is configured to: and inputting the query sentence and the commodity text into a text similarity model aiming at each commodity text to obtain the similarity of the query sentence and the commodity text.
In some optional embodiments, the text similarity model includes a similarity module, a threshold module, and an output module, and the similarity obtaining module includes: the similarity information unit is used for inputting the query sentence and the commodity text into the similarity module to obtain similarity information; the threshold information unit is used for inputting the query sentence and the commodity text into the threshold module to obtain threshold information; and the similarity output unit is used for inputting the similarity information and the threshold information into the output module to obtain the similarity between the query sentence and the commodity text.
In some optional embodiments, the similarity module includes an embedding layer, a first batch normalization layer, a first context layer, an attention alignment layer, a second context layer, a pooling layer, a splicing layer, a second batch normalization layer, and a first full connection layer, and the similarity information unit includes: the embedded information subunit is used for inputting the query sentence and the commodity text into the embedding layer to obtain embedded information; the first information subunit is used for inputting the embedded information into the first batch normalization layer to obtain first batch-normalized information; the first feature subunit is used for inputting the first batch-normalized information into the first context layer to obtain a first intermediate feature of a query statement and a first intermediate feature of a commodity text; the alignment information subunit is used for inputting the first intermediate feature of the query statement and the first intermediate feature of the commodity text into the attention alignment layer to obtain alignment information; the text vector subunit is used for constructing a query statement vector from the first intermediate feature of the query statement and the alignment information and constructing a commodity text vector from the first intermediate feature of the commodity text and the alignment information; the second feature subunit is used for inputting the query statement vector and the commodity text vector into the second context layer to obtain a second intermediate feature of the query statement and a second intermediate feature of the commodity text; the pooling information subunit is used for respectively inputting the second intermediate feature of the query statement and the second intermediate feature of the commodity text into the pooling layer to obtain query statement pooling information and commodity text pooling information; the splicing information subunit is used for inputting the query statement pooling information and the commodity text pooling information into the splicing layer to obtain splicing information; the second information subunit is used for inputting the splicing information into the second batch normalization layer to obtain second batch-normalized information; and the similarity acquisition subunit is used for inputting the second batch-normalized information into the first full connection layer to obtain the similarity information.
In some optional embodiments, the threshold module includes a feature extraction layer, a normalization layer, a second full connection layer, and an activation layer, and the threshold information unit includes: the feature extraction subunit is used for inputting the query sentence and the commodity text into the feature extraction layer to obtain feature extraction information; the standardized information subunit is used for inputting the feature extraction information into the normalization layer to obtain normalized information; the second full connection subunit is used for inputting the normalized information into the second full connection layer to obtain second full connection information; and the threshold acquisition subunit is used for inputting the second full connection information into the activation layer to obtain the threshold information.
In some optional embodiments, the output module includes a multiplication layer, a third full connection layer, a random inactivation regularization layer, and an output layer, and the similarity output unit includes: a multiplication result subunit, configured to input the similarity information and the threshold information into the multiplication layer to obtain a multiplication result; a third full-connection subunit, configured to input the multiplication result to the third full-connection layer, so as to obtain third full-connection information; the regularization subunit is configured to input the third full-connection information into the random inactivation regularization layer to obtain regularization information; and the similarity determining subunit is used for outputting the regularization information as the similarity between the query statement and the commodity text through the output layer.
In a third aspect, the present application provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of any one of the text recommendation methods when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the text recommendation methods described above.
Drawings
The present application is further described below with reference to the drawings and examples.
Fig. 1 is a schematic flowchart of a text recommendation method provided in an embodiment of the present application;
Fig. 2 is a schematic flowchart of determining a query statement according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of obtaining similarity according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a text similarity model provided in an embodiment of the present application;
Fig. 5 is a schematic flowchart of obtaining similarity information according to an embodiment of the present application;
Fig. 6 is a schematic flowchart of obtaining threshold information according to an embodiment of the present application;
Fig. 7 is a schematic flowchart of obtaining similarity via the output module according to an embodiment of the present application;
Fig. 8 is a schematic flowchart of 10-fold cross-validation training provided by an embodiment of the present application;
Fig. 9 is a schematic diagram of test effects provided by an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a text recommendation apparatus according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a query statement module according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a similarity obtaining module according to an embodiment of the present application;
Fig. 13 is a schematic structural diagram of a similarity information unit according to an embodiment of the present application;
Fig. 14 is a schematic structural diagram of a threshold information unit provided in an embodiment of the present application;
Fig. 15 is a schematic structural diagram of a similarity output unit according to an embodiment of the present application;
Fig. 16 is a block diagram of an electronic device according to an embodiment of the present application;
Fig. 17 is a schematic structural diagram of a program product for implementing a text recommendation method according to an embodiment of the present application.
Detailed Description
The present application is further described with reference to the accompanying drawings and the following detailed description. It should be noted that, in the present application, the embodiments or technical features described below may be combined arbitrarily to form new embodiments, provided there is no conflict.
Referring to fig. 1, an embodiment of the present application provides a text recommendation method, which includes steps S101 to S105.
Step S101: acquiring data to be matched and the data type of the data to be matched, wherein the data type of the data to be matched is a text, an audio or an image.
Step S102: determining a query statement corresponding to the data to be matched based on the data type. The query statement may be denoted query, for example.
Step S103: acquiring the similarity between the query statement and each commodity text. A commodity text may be denoted doc, for example, and may include at least one of the following: the title of the commodity, which may be denoted title; the price of the commodity; the commodity classification catalog; and the URL corresponding to the commodity image.
Step S104: and determining at least one commodity text as a text to be recommended based on the similarity between the query sentence and each commodity text.
Step S105: and sequencing all the texts to be recommended according to the similarity.
In this way, the corresponding query statement is determined based on the data type of the data to be matched, the similarity between the query statement and each commodity text is obtained, the texts to be recommended are selected from the commodity texts based on that similarity, and the texts to be recommended are ranked. The corresponding texts to be recommended can therefore be matched automatically against the data input by the user and ranked automatically by similarity, giving a wide application range, meeting practical application requirements, and offering a high level of intelligence.
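Pulled together, steps S101 to S105 amount to a convert-score-rank pipeline. The following is a minimal sketch under stated assumptions: the conversion and similarity models are passed in as callables, and the toy overlap scorer stands in for the models described in this application.

```python
# Minimal sketch of steps S101-S105. `to_text_models` and `similarity_model`
# are hypothetical stand-ins for the text conversion and text similarity models.
def recommend(data, data_type, commodity_texts, to_text_models, similarity_model, top_k=10):
    # S101-S102: derive the query statement from the data type.
    query = data if data_type == "text" else to_text_models[data_type](data)
    # S103: similarity between the query statement and each commodity text (doc).
    scored = [(doc, similarity_model(query, doc)) for doc in commodity_texts]
    # S104-S105: keep candidate texts and rank them by similarity.
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy usage: Jaccard word overlap in place of the Gate-ESIM similarity model.
docs = ["red running shoes", "wireless mouse", "running socks"]
overlap = lambda q, d: len(set(q.split()) & set(d.split())) / len(set(q.split()) | set(d.split()))
print(recommend("red shoes for running", "text", docs, {}, overlap, top_k=2))
```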
Referring to fig. 2, in some embodiments, the step S102 may include steps S201 to S202.
Step S201: and when the data type is a text, taking the data to be matched as the query statement.
Step S202: and when the data type is not a text, inputting the data to be matched into a text conversion model corresponding to the data type to obtain the query statement.
In a specific application, when the data type is audio, the corresponding text conversion model may be a speech-to-text model; the speech-to-text method may, for example, be the one disclosed in prior art CN 109754808A.
When the data type is an image, the corresponding text conversion model may be an image-to-text model; the image-to-text conversion may, for example, be implemented with the artificial-intelligence-based medical image classification processing system disclosed in prior art CN 109754808A.
In this way, when the data type is not text, the text conversion model converts the data to be matched into text data from which the query statement is obtained. A corresponding query statement can thus be obtained whether or not the data type of the data to be matched is text, giving a wide application range.
In some embodiments, the step S103 may include: for each commodity text, inputting the query statement and the commodity text into a text similarity model to obtain the similarity between the query statement and the commodity text. The text similarity model may be, for example, a Gate-ESIM (Gate-Enhanced Sequential Inference Model).
In this way, the similarity between the query statement and the commodity text is obtained automatically by the text similarity model, with high data processing efficiency and a high level of intelligence.
Referring to fig. 3, in some embodiments, the text similarity model may include a similarity module, a threshold module, and an output module, and the step S103 may include steps S301 to S303. The similarity module may be an ESIM (Enhanced Sequential Inference Model).
Step S301: and inputting the query sentence and the commodity text into the similarity module to obtain similarity information.
Step S302: and inputting the query sentence and the commodity text into the threshold module to obtain threshold information.
Step S303: and inputting the similarity information and the threshold information into the output module to obtain the similarity between the query sentence and the commodity text.
In this way, the text similarity model comprises a similarity module, a threshold module, and an output module: the similarity module produces the similarity information, the threshold module produces the threshold information and controls the flow of information, and the output module combines the similarity information and the threshold information to output the similarity between the query statement and the commodity text.
In one specific application, the structure of the text similarity model is shown in FIG. 4.
Referring to fig. 5, in some embodiments, the similarity module may include an embedding layer, a first batch normalization layer, a first context layer, an attention alignment layer, a second context layer, a pooling layer, a splicing layer, a second batch normalization layer, and a first full connection layer, and the step S301 may include steps S401 to S410. The embedding layer may be an Embedding layer, the first batch normalization layer may be a BatchNormalization layer, the first context layer may be a Bi-LSTM layer, the attention alignment layer may be a soft alignment layer, the second context layer may be a Bi-LSTM layer, the pooling layer may combine an Average Pooling layer and a Max Pooling layer, the splicing layer may be a Concat layer, and the second batch normalization layer may be a BN layer.
Step S401: and inputting the query sentence and the commodity text into the embedding layer to obtain embedded information.
Step S402: inputting the embedded information into the first batch normalization layer to obtain first batch-normalized information.
Step S403: inputting the first batch-normalized information into the first context layer to obtain a first intermediate feature of the query statement and a first intermediate feature of the commodity text.
Step S404: and inputting the first intermediate feature of the query sentence and the first intermediate feature of the commodity text into the attention alignment layer to obtain alignment information.
Step S405: and constructing a query statement vector by using the first intermediate feature of the query statement and the alignment information, and constructing a commodity text vector by using the first intermediate feature of the commodity text and the alignment information. The query statement vector may be represented by query vec, for example, and the product text vector may be represented by doc vec, for example.
Step S406: and inputting the query statement vector and the commodity text vector into the second context layer to obtain a query statement second intermediate feature and a commodity text second intermediate feature.
Step S407: and respectively inputting the second intermediate characteristic of the query statement and the second intermediate characteristic of the commodity text into the pooling layer to obtain query statement pooling information and commodity text pooling information.
Step S408: and inputting the query sentence pooling information and the commodity text pooling information into the splicing layer to obtain splicing information.
Step S409: inputting the splicing information into the second batch normalization layer to obtain second batch-normalized information.
Step S410: inputting the second batch-normalized information into the first full connection layer to obtain the similarity information.
In this way, the similarity module combines the analysis capability of the first context layer and the second context layer with the attention mechanism of the attention alignment layer, making it highly effective for text matching.
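For concreteness, the following is a minimal Keras sketch of an ESIM-style similarity module following steps S401 to S410. The vocabulary size, sequence length, and hidden dimensions are illustrative assumptions, not values given in this application.

```python
# Hypothetical sketch of the similarity module (steps S401-S410); sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_similarity_module(vocab_size=30000, seq_len=32, dim=128):
    query_ids = layers.Input(shape=(seq_len,), name="query_ids")
    doc_ids = layers.Input(shape=(seq_len,), name="doc_ids")

    # Embedding layer shared by query and commodity text, then batch normalization.
    embed = layers.Embedding(vocab_size, dim)
    bn1 = layers.BatchNormalization()
    q, d = bn1(embed(query_ids)), bn1(embed(doc_ids))

    # First context layer (Bi-LSTM): first intermediate features.
    ctx1 = layers.Bidirectional(layers.LSTM(dim, return_sequences=True))
    q1, d1 = ctx1(q), ctx1(d)

    # Attention alignment layer (soft alignment): cross-attention between the two sides.
    e = layers.Dot(axes=2)([q1, d1])                                   # (batch, seq_q, seq_d)
    q_align = layers.Dot(axes=(2, 1))([layers.Softmax(axis=-1)(e), d1])
    d_align = layers.Dot(axes=(2, 1))(
        [layers.Softmax(axis=-1)(layers.Permute((2, 1))(e)), q1])

    # Build the query statement vector and commodity text vector from the first
    # intermediate features and the alignment information (ESIM-style enhancement).
    def enhance(x, x_align):
        return layers.Concatenate()([x, x_align,
                                     layers.Subtract()([x, x_align]),
                                     layers.Multiply()([x, x_align])])
    q_vec, d_vec = enhance(q1, q_align), enhance(d1, d_align)

    # Second context layer (Bi-LSTM): second intermediate features.
    ctx2 = layers.Bidirectional(layers.LSTM(dim, return_sequences=True))
    q2, d2 = ctx2(q_vec), ctx2(d_vec)

    # Pooling layer (average + max) per side, then the splicing (Concat) layer.
    def pool(t):
        return layers.Concatenate()([layers.GlobalAveragePooling1D()(t),
                                     layers.GlobalMaxPooling1D()(t)])
    spliced = layers.Concatenate()([pool(q2), pool(d2)])

    # Second batch normalization layer and first full connection layer.
    sim_info = layers.Dense(dim, activation="relu")(layers.BatchNormalization()(spliced))
    return tf.keras.Model([query_ids, doc_ids], sim_info, name="similarity_module")
```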
Referring to fig. 6, in some embodiments, the threshold module may include a feature extraction layer, a normalization layer, a second full connection layer, and an activation layer, and the step S302 may include steps S501 to S504. The normalization layer may be a Normalization layer, and the activation layer may be a sigmoid layer.
Step S501: and inputting the query sentence and the commodity text into the feature extraction layer to obtain feature extraction information.
Step S502: and inputting the feature extraction information into the normalization layer to obtain normalization information.
Step S503: and inputting the standardized information into the second full-connection layer to obtain second full-connection information.
Step S504: and inputting the second full-connection information into the activation layer to obtain the threshold information.
In this way, the query statement and the commodity text are input into the threshold module and processed sequentially by the feature extraction layer, the normalization layer, the second full connection layer, and the activation layer to obtain the threshold information, with a high level of intelligence.
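A minimal sketch of such a threshold (gate) module follows. The Dense stand-in for the feature extraction layer and the use of LayerNormalization are assumptions; in this application the extracted features are the hand-crafted statistics described below.

```python
# Hypothetical sketch of the threshold module (steps S501-S504).
import tensorflow as tf
from tensorflow.keras import layers

def build_threshold_module(feature_dim=64, gate_dim=128):
    # Hand-crafted query/commodity pair features (counts, ratios, distances, ...).
    features = layers.Input(shape=(feature_dim,), name="pair_features")
    extracted = layers.Dense(gate_dim, activation="relu")(features)  # feature extraction layer
    normalized = layers.LayerNormalization()(extracted)              # normalization layer
    fc2 = layers.Dense(gate_dim)(normalized)                         # second full connection layer
    gate = layers.Activation("sigmoid")(fc2)                         # activation layer: values in [0, 1]
    return tf.keras.Model(features, gate, name="threshold_module")
```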
In a specific application, the feature extraction information may include query statement feature information, commodity feature information, and query statement and commodity combination feature information.
The query statement feature information may include at least one of:
the number of words in the query statement;
the number of digits in the query statement;
the adjective ratio and the noun ratio of the query statement;
a semantic vector of the query statement.
The number of words in the query statement may be represented by, for example, query word static features, and the number of digits in the query statement by query digits static features.
The commodity feature information may include at least one of:
the number of words in the commodity title;
the number of digits in the commodity title;
the adjective ratio and the noun ratio of the commodity title;
hash values of the commodity categories (5 columns), the categories narrowing progressively over up to 5 levels, each category being a subset of the previous one;
the hash value of the last column of commodity categories;
the value_counts of the last column of commodity categories;
price binning.
The number of words in the commodity title may be represented by, for example, title word static features, the number of digits in the commodity title by title digits static features, the hash value of the commodity category by cate hash, and the price bin by price bin.
Generally speaking, a single query may retrieve two commodities whose prices differ greatly, and price binning can reflect that difference. Compared with using the price directly as a feature, price binning places commodities with similar prices into the same bin, so that small differences are smoothed out after binning; this suits the price distribution, which is highly concentrated.
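As an illustration, quantile-based binning of the kind described here could look as follows; the bin count and the pandas qcut call are assumptions, not details given in this application.

```python
# Illustrative price binning with quantile bins (pandas qcut).
import pandas as pd

prices = pd.Series([12.9, 13.5, 99.0, 101.0, 15.0, 98.5])
# Commodities with similar prices fall into the same bin even though the
# overall price distribution is highly concentrated.
price_bin = pd.qcut(prices, q=3, labels=False, duplicates="drop")
print(price_bin.tolist())  # [0, 0, 2, 2, 1, 1]
```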
The query statement and commodity combination feature information may include at least one of:
the difference and the dot product of the commodity title sentence vector and the query statement sentence vector;
the similarity between the commodity title sentence vector and the query statement sentence vector, where the similarity measures include the cosine distance, the Manhattan distance, and the Canberra distance;
the number of identical words between the query statement and the commodity category;
the number of identical words between the query statement and the commodity title;
the character-level edit distance, calculated using fuzzywuzzy (a fuzzy string matching toolkit);
the word-level edit distance, calculated using fuzzywuzzy;
the difference and the ratio between the number of commodity title words and the number of query statement words;
price binning per query statement: compared with direct global binning, binning separately within each query statement keeps the price bins of different query statements from interfering with each other;
the BM25 relevance score of the query statement and the commodity title.
The cosine distance may be represented by cosine, the Manhattan distance by cityblock, the Canberra distance by canberra, and the per-query price bin by price bin & group by query. Several of these features are illustrated in the sketch after this list.
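A minimal sketch of computing some of the vector-distance and edit-distance features follows; the sentence-vector dimension and the specific fuzzywuzzy calls are assumptions.

```python
# Illustrative query-commodity combination features.
import numpy as np
from scipy.spatial.distance import canberra, cityblock, cosine
from fuzzywuzzy import fuzz

query_vec = np.random.rand(300)  # sentence vector of the query statement
title_vec = np.random.rand(300)  # sentence vector of the commodity title
query, title = "red running shoes", "men red shoes for running"

features = {
    "diff": query_vec - title_vec,                 # vector difference
    "dot": float(np.dot(query_vec, title_vec)),    # dot product
    "cosine": cosine(query_vec, title_vec),
    "cityblock": cityblock(query_vec, title_vec),
    "canberra": canberra(query_vec, title_vec),
    "same_words": len(set(query.split()) & set(title.split())),
    "char_edit": fuzz.ratio(query, title),              # character-level edit score
    "word_edit": fuzz.token_sort_ratio(query, title),   # word-level edit score
    "len_diff": len(title.split()) - len(query.split()),
}
```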
The BM25 algorithm, proposed on the basis of the probabilistic retrieval model, evaluates the relevance between search terms and documents. The main idea of BM25 is: perform morpheme analysis on the query statement Q to generate morphemes q_i; then, for each commodity title d, calculate the relevance score of each morpheme with respect to the title; finally, take the weighted sum of the morpheme scores to obtain the relevance score of the query statement and the commodity title. The general formula of the BM25 algorithm is:
Score(Q, d) = Σ_i W_i · R(q_i, d)
where W_i denotes the weight corresponding to q_i and R(q_i, d) is the relevance score of morpheme q_i with respect to the commodity title d.
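A minimal sketch of this general form follows, taking W_i as the usual IDF weight and a standard saturation function for R(q_i, d); the k1 and b values are conventional assumptions, not values given in this application.

```python
# BM25 sketch: Score(Q, d) = sum_i W_i * R(q_i, d), with W_i as an IDF weight.
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, n_docs, avg_len, k1=1.5, b=0.75):
    tf_d = Counter(doc_terms)
    score = 0.0
    for q in query_terms:
        df = doc_freq.get(q, 0)
        if df == 0:
            continue  # morpheme never appears in the corpus
        w = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)  # W_i
        tf = tf_d[q]
        r = tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc_terms) / avg_len))  # R(q_i, d)
        score += w * r  # weighted sum over morphemes
    return score
```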
In a specific application, the text preprocessing method may include at least one of the following (illustrated in the sketch after this list):
removing punctuation and separators from the sentence and converting it to lowercase;
removing punctuation and separators from the sentence, extracting word stems, and converting to lowercase;
keeping special characters, simply splitting on spaces and converting to lowercase.
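The following sketch shows one way to implement the three variants; the NLTK Porter stemmer is an assumption, as this application does not name a stemmer.

```python
# Illustrative implementations of the three preprocessing variants.
import re
import string
from nltk.stem import PorterStemmer

_PUNCT = re.compile(r"[\s{}]+".format(re.escape(string.punctuation)))

def clean_lower(text):
    # Variant 1: strip punctuation/separators and lowercase.
    return _PUNCT.sub(" ", text).strip().lower()

def clean_stem_lower(text, stemmer=PorterStemmer()):
    # Variant 2: additionally reduce each word to its stem.
    return " ".join(stemmer.stem(w) for w in clean_lower(text).split())

def split_keep_special(text):
    # Variant 3: keep special characters, split on spaces, lowercase.
    return text.lower().split()
```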
In a specific application, the sentence vector may be generated in any one of the following manners:
word2vec (a family of related models for generating word vectors) is trained on the preprocessed query statements and commodity titles to generate word vectors, and sentence vectors are then generated from those word vectors; although the total corpus available to word2vec is small, the word vectors obtained by training better reflect the characteristics of the data and cover more of its words;
sentence vectors are generated using a word vector model that has been pre-trained by Google.
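A minimal sketch of the first approach follows, averaging word vectors into a sentence vector; the corpus, vector size, and averaging scheme are illustrative assumptions.

```python
# Sketch: train word2vec on preprocessed queries and titles, then average
# word vectors into sentence vectors.
import numpy as np
from gensim.models import Word2Vec

corpus = [["red", "running", "shoes"], ["wireless", "mouse"]]  # preprocessed texts
w2v = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1)

def sentence_vector(tokens, model):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

print(sentence_vector(["red", "shoes"], w2v).shape)  # (100,)
```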
Referring to fig. 7, in some embodiments, the output module may include a multiplication layer, a third full connection layer, a random inactivation regularization layer, and an output layer, and the step S303 may include steps S601 to S604. The random inactivation regularization layer may be a Dropout + BN layer, and the output layer may be an Output layer.
Step S601: and inputting the similarity information and the threshold information into the multiplication layer to obtain a multiplication result.
Step S602: and inputting the multiplication result into the third full-connection layer to obtain third full-connection information.
Step S603: and inputting the third full-connection information into the random inactivation regularization layer to obtain regularization information.
Step S604: and outputting the regularization information through the output layer to serve as the similarity of the query statement and the commodity text.
In this way, the similarity information is not output directly through the output layer; instead, it is first input into the multiplication layer together with the threshold information to obtain a multiplication result, which is then processed sequentially by the third full connection layer, the random inactivation regularization layer, and the output layer, finally yielding the similarity of the commodity text, with a high level of intelligence.
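A minimal sketch of this output module follows; the hidden dimension and dropout rate are assumptions. The sigmoid gate from the threshold module multiplicatively modulates the similarity information before the final layers.

```python
# Hypothetical sketch of the output module (steps S601-S604).
import tensorflow as tf
from tensorflow.keras import layers

def build_output_module(dim=128, drop_rate=0.3):
    sim_info = layers.Input(shape=(dim,), name="similarity_info")
    gate_info = layers.Input(shape=(dim,), name="threshold_info")
    gated = layers.Multiply()([sim_info, gate_info])                   # multiplication layer
    fc3 = layers.Dense(dim, activation="relu")(gated)                  # third full connection layer
    reg = layers.BatchNormalization()(layers.Dropout(drop_rate)(fc3))  # Dropout + BN
    out = layers.Dense(1, activation="sigmoid")(reg)                   # output layer: similarity
    return tf.keras.Model([sim_info, gate_info], out, name="output_module")
```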
Typically, the results of a user's search on an e-commerce platform are ranked by dimensions such as popularity, review score, price, or distance rather than by relevance, which differs in many ways from traditional, information-oriented search. In traditional search, documents appear in relevance order, a property that many retrieval methods exploit; non-relevance ranking orders, by contrast, have received less research.
In a specific application, the text similarity model may be trained on a training set, and predictions obtained on a test set. 10-fold cross-validation is performed on the training set, and a label division threshold for each search keyword is found by averaging across folds and seeds. This approach greatly improves the matching metrics for non-relevance user search and achieves a very good result on the Ave-F1 (Average F1-Score) metric.
In a specific application, considering that the amount and distribution of training data differ across query statements, 0.5 is not necessarily the best threshold for dividing positive and negative examples, so the positive/negative division threshold applied to the predicted probability of each query statement can be adjusted to optimize Ave-F1.
Referring to fig. 8, to make the results after threshold adjustment more stable, 10-fold cross-validation may be performed on the training set; for each fold, 10 models are trained with 10 different seeds, and their predictions are averaged. Similarly, when predicting on the test set, 10 models are trained on the full training set with 10 different seeds, and their predictions are averaged.
When searching for thresholds, the goal is to optimize the score on the 10-fold cross-validation results; the thresholds found there are then applied to the test set. Considering that some query statements have few samples in the training set, so that fine-tuning the binary classification threshold would strongly affect the validation-set score, thresholds are adjusted only for query statements with more than 120 training samples, which account for about 81.3 percent of the whole. This optimization gives the test set a large boost, although slight overfitting may occur on the local data set.
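A minimal sketch of the per-query threshold search follows; the search grid is an assumption, while the F1 objective and the 120-sample cutoff follow the text above.

```python
# Illustrative per-query threshold search over (averaged) out-of-fold predictions.
import numpy as np
from sklearn.metrics import f1_score

def search_threshold(y_true, y_prob, default=0.5):
    if len(y_true) <= 120:  # too few samples: keep the default threshold
        return default
    grid = np.arange(0.30, 0.71, 0.01)
    scores = [f1_score(y_true, y_prob >= t) for t in grid]
    return float(grid[int(np.argmax(scores))])

# Toy usage:
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_prob = np.clip(y_true * 0.6 + rng.random(200) * 0.5, 0, 1)
print(search_threshold(y_true, y_prob))
```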
Referring to fig. 9, in the experimental results, the LightGBM model achieved a score of 0.7521 on the Ave-F1 metric, and the parameter-optimized ESIM model achieved 0.7610. Initializing with pre-trained word vectors and adopting the threshold mechanism further improved the text similarity model to a score of 0.7667 on Ave-F1. Optimizing Ave-F1 with a local threshold search using the multi-seed 10-fold cross-validation method raised the score to 0.7731, a very clear improvement.
Referring to fig. 10, an embodiment of the present application provides a text recommendation device, and a specific implementation manner of the text recommendation device is consistent with the implementation manner and the achieved technical effect described in the embodiment of the text recommendation method, and a part of the contents are not repeated.
The device comprises: the data acquisition module 101 is configured to acquire data to be matched and a data type of the data, where the data type of the data to be matched is a text, an audio, or an image; the query statement module 102 is configured to determine, based on the data type, a query statement corresponding to the data to be matched; a similarity obtaining module 103, configured to obtain similarities between the query statement and each commodity text; a text to be recommended module 104, configured to determine, based on the similarity between the query statement and each of the commodity texts, at least one of the commodity texts as a text to be recommended; and the text sorting module 105 is configured to sort all the texts to be recommended according to the similarity.
Referring to FIG. 11, in some embodiments, the query statement module 102 may include: a first statement unit 201, configured to, when the data type is a text, take the data to be matched as the query statement; a second statement unit 202, configured to, when the data type is not a text, input the data to be matched into a text conversion model corresponding to the data type, so as to obtain the query statement.
In some embodiments, the similarity obtaining module 103 may be configured to: and inputting the query sentence and the commodity text into a text similarity model aiming at each commodity text to obtain the similarity of the query sentence and the commodity text.
Referring to fig. 12, in some embodiments, the text similarity model may include a similarity module, a threshold module, and an output module, and the similarity obtaining module 103 may include: a similarity information unit 301, configured to input the query statement and the commodity text into the similarity module to obtain similarity information; a threshold information unit 302, configured to input the query statement and the commodity text into the threshold module to obtain threshold information; a similarity output unit 303, configured to input the similarity information and the threshold information into the output module, so as to obtain a similarity between the query statement and the commodity text.
Referring to fig. 13, in some embodiments, the similarity module may include an embedding layer, a first batch normalization layer, a first context layer, an attention alignment layer, a second context layer, a pooling layer, a splicing layer, a second batch normalization layer, and a first full connection layer, and the similarity information unit 301 may include: an embedded information subunit 401, configured to input the query statement and the commodity text into the embedding layer to obtain embedded information; a first information subunit 402, configured to input the embedded information into the first batch normalization layer to obtain first batch-normalized information; a first feature subunit 403, configured to input the first batch-normalized information into the first context layer to obtain a first intermediate feature of the query statement and a first intermediate feature of the commodity text; an alignment information subunit 404, configured to input the first intermediate feature of the query statement and the first intermediate feature of the commodity text into the attention alignment layer to obtain alignment information; a text vector subunit 405, configured to construct a query statement vector from the first intermediate feature of the query statement and the alignment information, and construct a commodity text vector from the first intermediate feature of the commodity text and the alignment information; a second feature subunit 406, configured to input the query statement vector and the commodity text vector into the second context layer to obtain a second intermediate feature of the query statement and a second intermediate feature of the commodity text; a pooling information subunit 407, configured to input the second intermediate feature of the query statement and the second intermediate feature of the commodity text into the pooling layer, respectively, to obtain query statement pooling information and commodity text pooling information; a splicing information subunit 408, configured to input the query statement pooling information and the commodity text pooling information into the splicing layer to obtain splicing information; a second information subunit 409, configured to input the splicing information into the second batch normalization layer to obtain second batch-normalized information; and a similarity obtaining subunit 410, configured to input the second batch-normalized information into the first full connection layer to obtain the similarity information.
Referring to fig. 14, in some embodiments, the threshold module may include a feature extraction layer, a normalization layer, a second full connection layer, and an activation layer, and the threshold information unit 302 may include: a feature extraction subunit 501, configured to input the query statement and the commodity text into the feature extraction layer to obtain feature extraction information; a standardized information subunit 502, configured to input the feature extraction information into the normalization layer to obtain normalized information; a second full connection subunit 503, configured to input the normalized information into the second full connection layer to obtain second full connection information; and a threshold obtaining subunit 504, configured to input the second full connection information into the activation layer to obtain the threshold information.
Referring to fig. 15, in some embodiments, the output module may include a multiplication layer, a third full connection layer, a random inactivation regularization layer, and an output layer, and the similarity output unit 303 may include: a multiplication result subunit 601, configured to input the similarity information and the threshold information into the multiplication layer to obtain a multiplication result; a third full connection subunit 602, configured to input the multiplication result into the third full connection layer to obtain third full connection information; a regularization subunit 603, configured to input the third full connection information into the random inactivation regularization layer to obtain regularization information; and a similarity determining subunit 604, configured to output the regularization information through the output layer as the similarity between the query statement and the commodity text.
Referring to fig. 16, an embodiment of the present application further provides an electronic device 200, where the electronic device 200 includes at least one memory 210, at least one processor 220, and a bus 230 connecting different platform systems.
The memory 210 may include readable media in the form of volatile memory, such as random access memory (RAM) 211 and/or cache memory 212, and may further include read-only memory (ROM) 213.
The memory 210 further stores a computer program, and the computer program can be executed by the processor 220, so that the processor 220 executes the steps of the text recommendation method in the embodiment of the present application, and a specific implementation manner of the method is consistent with the implementation manner and the achieved technical effect described in the embodiment of the text recommendation method, and some contents are not described again.
Memory 210 may also include a utility 214 having at least one program module 215, such program modules 215 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Accordingly, the processor 220 may execute the computer programs described above, and may execute the utility 214.
Bus 230 may represent one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The electronic device 200 may also communicate with one or more external devices 240, such as a keyboard, pointing device, bluetooth device, etc., and may also communicate with one or more devices capable of interacting with the electronic device 200, and/or with any devices (e.g., routers, modems, etc.) that enable the electronic device 200 to communicate with one or more other computing devices. Such communication may be through input-output interface 250. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 260. The network adapter 260 may communicate with other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
The embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium is used for storing a computer program, and when the computer program is executed, the steps of the text recommendation method in the embodiment of the present application are implemented, and a specific implementation manner of the steps is consistent with the implementation manner and the achieved technical effect described in the embodiment of the text recommendation method, and some contents are not described again.
Fig. 17 shows a program product 300 for implementing the text recommendation method according to the present embodiment, which may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be executed on a terminal device, such as a personal computer. However, the program product 300 of the present invention is not so limited, and in this application, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Program product 300 may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable storage medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable storage medium may also be any readable medium that can communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages, such as the C language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
While the present application has been described in terms of various aspects, including exemplary embodiments, the principles of the invention are not limited to the disclosed embodiments; they are also intended to cover various modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for text recommendation, the method comprising:
acquiring data to be matched and a data type of the data to be matched, wherein the data type of the data to be matched is text, audio, or image;
determining a query sentence corresponding to the data to be matched based on the data type;
acquiring the similarity between the query sentence and each commodity text;
determining at least one commodity text as a text to be recommended based on the similarity between the query sentence and each commodity text;
and sorting all the texts to be recommended according to the similarity.
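For orientation, the claimed flow can be sketched in a few lines of Python. Here `to_query` and `similarity_fn` are hypothetical stand-ins for the text conversion model and the text similarity model developed in the later claims; nothing in this sketch is fixed by the claim itself:

```python
# Minimal, illustrative sketch of the claimed recommendation flow.
def recommend(data, data_type, product_texts, to_query, similarity_fn, top_k=10):
    # Derive a query sentence: text is used as-is, audio/image is converted.
    query = data if data_type == "text" else to_query(data, data_type)
    # Score the query sentence against every commodity (product) text.
    scored = [(text, similarity_fn(query, text)) for text in product_texts]
    # Keep the best matches and sort them by descending similarity.
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```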
2. The text recommendation method according to claim 1, wherein the determining, based on the data type, of the query sentence corresponding to the data to be matched comprises:
when the data type is text, taking the data to be matched as the query sentence;
and when the data type is not text, inputting the data to be matched into a text conversion model corresponding to the data type to obtain the query sentence.
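The claim leaves the conversion models open. As a hypothetical illustration only, audio could pass through a speech recognizer and an image through a captioning or OCR model:

```python
# Hypothetical dispatch from data type to a text conversion model.
# transcribe_audio and caption_image are illustrative placeholders,
# not models named by the patent.
def transcribe_audio(data): ...   # e.g. a speech-recognition model
def caption_image(data): ...      # e.g. an image-captioning / OCR model

CONVERTERS = {"audio": transcribe_audio, "image": caption_image}

def to_query(data, data_type):
    if data_type == "text":
        return data                     # the text itself is the query sentence
    return CONVERTERS[data_type](data)  # convert audio/image to a query sentence
```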
3. The text recommendation method according to claim 1, wherein the acquiring of the similarity between the query sentence and each commodity text comprises:
for each commodity text, inputting the query sentence and the commodity text into a text similarity model to obtain the similarity between the query sentence and the commodity text.
4. The text recommendation method according to claim 3, wherein the text similarity model comprises a similarity module, a threshold module and an output module, and the inputting of the query sentence and the commodity text into the text similarity model to obtain the similarity between the query sentence and the commodity text comprises:
inputting the query sentence and the commodity text into the similarity module to obtain similarity information;
inputting the query sentence and the commodity text into the threshold module to obtain threshold information;
and inputting the similarity information and the threshold information into the output module to obtain the similarity between the query sentence and the commodity text.
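Read as a network, claim 4 composes three sub-networks. A minimal PyTorch sketch of that composition follows; the sub-modules are sketched after claims 5-7 below, and every class name and dimension is an illustrative assumption rather than the patent's implementation:

```python
import torch.nn as nn

class TextSimilarityModel(nn.Module):
    """Composition of the three modules described in claim 4 (sketch)."""
    def __init__(self, similarity_module, threshold_module, output_module):
        super().__init__()
        self.similarity = similarity_module  # claim 5
        self.threshold = threshold_module    # claim 6
        self.output = output_module          # claim 7

    def forward(self, query_ids, product_ids):
        sim_info = self.similarity(query_ids, product_ids)  # similarity information
        gate_info = self.threshold(query_ids, product_ids)  # threshold information
        return self.output(sim_info, gate_info)             # final similarity score
```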
5. The text recommendation method according to claim 4, wherein the similarity module comprises an embedding layer, a first batch normalization layer, a first context layer, an attention alignment layer, a second context layer, a pooling layer, a concatenation layer, a second batch normalization layer and a first fully connected layer, and the inputting of the query sentence and the commodity text into the similarity module to obtain the similarity information comprises:
inputting the query sentence and the commodity text into the embedding layer to obtain embedding information;
inputting the embedding information into the first batch normalization layer to obtain first batch-normalized information;
inputting the first batch-normalized information into the first context layer to obtain a first intermediate feature of the query sentence and a first intermediate feature of the commodity text;
inputting the first intermediate feature of the query sentence and the first intermediate feature of the commodity text into the attention alignment layer to obtain alignment information;
constructing a query sentence vector from the first intermediate feature of the query sentence and the alignment information, and constructing a commodity text vector from the first intermediate feature of the commodity text and the alignment information;
inputting the query sentence vector and the commodity text vector into the second context layer to obtain a second intermediate feature of the query sentence and a second intermediate feature of the commodity text;
respectively inputting the second intermediate feature of the query sentence and the second intermediate feature of the commodity text into the pooling layer to obtain query sentence pooling information and commodity text pooling information;
inputting the query sentence pooling information and the commodity text pooling information into the concatenation layer to obtain concatenation information;
inputting the concatenation information into the second batch normalization layer to obtain second batch-normalized information;
and inputting the second batch-normalized information into the first fully connected layer to obtain the similarity information.
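This layer sequence matches an ESIM-style matching network. The sketch below assumes BiLSTM context layers and the common `[a; a'; a - a'; a * a']` enhancement for constructing the query and commodity vectors; the claim itself fixes neither choice:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityModule(nn.Module):
    """ESIM-style sketch of claim 5; all dimensions are assumptions."""
    def __init__(self, vocab_size, emb_dim=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # embedding layer
        self.bn1 = nn.BatchNorm1d(emb_dim)               # first batch normalization
        self.ctx1 = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.ctx2 = nn.LSTM(8 * hidden, hidden, batch_first=True, bidirectional=True)
        self.bn2 = nn.BatchNorm1d(8 * hidden)            # second batch normalization
        self.fc1 = nn.Linear(8 * hidden, hidden)         # first fully connected layer

    def _align(self, a, b):
        # Attention alignment: soft-align each sequence to the other.
        e = torch.bmm(a, b.transpose(1, 2))
        a_hat = torch.bmm(F.softmax(e, dim=2), b)
        b_hat = torch.bmm(F.softmax(e, dim=1).transpose(1, 2), a)
        return a_hat, b_hat

    def _enhance(self, x, x_hat):
        # Construct the enhanced vector [x; x_hat; x - x_hat; x * x_hat].
        return torch.cat([x, x_hat, x - x_hat, x * x_hat], dim=-1)

    def _pool(self, x):
        # Pooling layer: mean- and max-pool over the token dimension.
        return torch.cat([x.mean(dim=1), x.max(dim=1).values], dim=-1)

    def forward(self, q_ids, p_ids):
        q = self.bn1(self.embed(q_ids).transpose(1, 2)).transpose(1, 2)
        p = self.bn1(self.embed(p_ids).transpose(1, 2)).transpose(1, 2)
        q1, _ = self.ctx1(q)                         # first intermediate features
        p1, _ = self.ctx1(p)
        q_hat, p_hat = self._align(q1, p1)           # alignment information
        q2, _ = self.ctx2(self._enhance(q1, q_hat))  # second intermediate features
        p2, _ = self.ctx2(self._enhance(p1, p_hat))
        v = torch.cat([self._pool(q2), self._pool(p2)], dim=-1)  # concatenation
        return self.fc1(self.bn2(v))                 # similarity information
```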
6. The text recommendation method according to claim 4, wherein the threshold module comprises a feature extraction layer, a normalization layer, a second fully connected layer and an activation layer, and the inputting of the query sentence and the commodity text into the threshold module to obtain the threshold information comprises:
inputting the query sentence and the commodity text into the feature extraction layer to obtain feature extraction information;
inputting the feature extraction information into the normalization layer to obtain normalized information;
inputting the normalized information into the second fully connected layer to obtain second fully connected information;
and inputting the second fully connected information into the activation layer to obtain the threshold information.
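A companion sketch for the threshold module, under the assumptions that the feature extraction layer is an embedding with mean pooling, the normalization layer is a LayerNorm, and the activation is a sigmoid so the module behaves as a gate in (0, 1); the claim fixes none of these:

```python
import torch
import torch.nn as nn

class ThresholdModule(nn.Module):
    """Sketch of claim 6; every concrete layer choice is an assumption."""
    def __init__(self, vocab_size, emb_dim=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # feature extraction layer
        self.norm = nn.LayerNorm(2 * emb_dim)           # normalization layer
        self.fc2 = nn.Linear(2 * emb_dim, hidden)       # second fully connected layer
        self.act = nn.Sigmoid()                         # activation layer

    def forward(self, q_ids, p_ids):
        feats = torch.cat([self.embed(q_ids).mean(dim=1),   # feature extraction info
                           self.embed(p_ids).mean(dim=1)], dim=-1)
        return self.act(self.fc2(self.norm(feats)))         # threshold information
```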
7. The text recommendation method according to claim 4, wherein the output module comprises a multiplication layer, a third fully connected layer, a random inactivation (dropout) regularization layer and an output layer, and the inputting of the similarity information and the threshold information into the output module to obtain the similarity between the query sentence and the commodity text comprises:
inputting the similarity information and the threshold information into the multiplication layer to obtain a multiplication result;
inputting the multiplication result into the third fully connected layer to obtain third fully connected information;
inputting the third fully connected information into the random inactivation regularization layer to obtain regularization information;
and outputting the regularization information through the output layer as the similarity between the query sentence and the commodity text.
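Closing the set, a sketch of the output module under the same assumptions; `nn.Dropout` plays the role of the random inactivation regularization layer, and a sigmoid output layer squashing the score to [0, 1] is an added assumption:

```python
import torch.nn as nn

class OutputModule(nn.Module):
    """Sketch of claim 7; the sigmoid output layer is an assumption."""
    def __init__(self, hidden=128, p_drop=0.2):
        super().__init__()
        self.fc3 = nn.Linear(hidden, hidden)  # third fully connected layer
        self.drop = nn.Dropout(p_drop)        # random inactivation (dropout) layer
        self.out = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())  # output layer

    def forward(self, sim_info, gate_info):
        x = sim_info * gate_info              # multiplication layer: gated features
        x = self.drop(self.fc3(x))            # regularization information
        return self.out(x).squeeze(-1)        # similarity of query and commodity text
```

Multiplying the similarity information by the threshold information lets the learned gate suppress feature dimensions it drives toward zero, which appears to be the role of the threshold module in the claimed design.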
8. A text recommendation apparatus, characterized in that the apparatus comprises:
a data acquisition module, configured to acquire data to be matched and a data type of the data to be matched, wherein the data type of the data to be matched is text, audio, or image;
a query sentence module, configured to determine a query sentence corresponding to the data to be matched based on the data type;
a similarity acquisition module, configured to acquire the similarity between the query sentence and each commodity text;
a to-be-recommended-text module, configured to determine at least one commodity text as a text to be recommended based on the similarity between the query sentence and each commodity text;
and a text sorting module, configured to sort all the texts to be recommended according to the similarity.
9. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory storing a computer program, the processor implementing the steps of the text recommendation method according to any one of claims 1-7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the text recommendation method according to any one of claims 1-7.
CN202110963194.5A 2021-08-20 2021-08-20 Text recommendation method and device, electronic equipment and computer-readable storage medium Pending CN113868370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110963194.5A CN113868370A (en) 2021-08-20 2021-08-20 Text recommendation method and device, electronic equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN113868370A true CN113868370A (en) 2021-12-31

Family

ID=78987977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110963194.5A Pending CN113868370A (en) 2021-08-20 2021-08-20 Text recommendation method and device, electronic equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113868370A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109542929A * 2018-11-28 2019-03-29 Shandong Technology and Business University Voice query method and device, and electronic equipment
CN109977292A * 2019-03-21 2019-07-05 Tencent Technology (Shenzhen) Co., Ltd. Search method and device, computing device, and computer-readable storage medium
CN110737687A * 2019-09-06 2020-01-31 Ping An Puhui Enterprise Management Co., Ltd. Data query method, device, equipment and storage medium
CN111078842A * 2019-12-31 2020-04-28 Beijing Missfresh E-Commerce Co., Ltd. Method, device, server and storage medium for determining query result
CN111581229A * 2020-03-25 2020-08-25 Ping An Technology (Shenzhen) Co., Ltd. SQL statement generation method and device, computer equipment and storage medium
CN111488426A * 2020-04-17 2020-08-04 Alipay (Hangzhou) Information Technology Co., Ltd. Query intention determining method and device and processing equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jiemian News: "This Chinese AI company took two world firsts in one month", pages 1-6. Retrieved from the Internet: <URL:https://www.jiemian.com/article/3409481.html> *

Similar Documents

Publication Publication Date Title
CN112131350B (en) Text label determining method, device, terminal and readable storage medium
CN109829104B (en) Semantic similarity based pseudo-correlation feedback model information retrieval method and system
CN110457708B (en) Vocabulary mining method and device based on artificial intelligence, server and storage medium
CN111753167B (en) Search processing method, device, computer equipment and medium
CN112256860A (en) Semantic retrieval method, system, equipment and storage medium for customer service conversation content
US10528662B2 (en) Automated discovery using textual analysis
US11023503B2 (en) Suggesting text in an electronic document
CN110347908B (en) Voice shopping method, device, medium and electronic equipment
WO2010014082A1 (en) Method and apparatus for relating datasets by using semantic vectors and keyword analyses
WO2018056423A1 (en) Scenario passage classifier, scenario classifier, and computer program therefor
CN111414763A (en) Semantic disambiguation method, device, equipment and storage device for sign language calculation
CN112926308B (en) Method, device, equipment, storage medium and program product for matching text
CN109977292B (en) Search method, search device, computing equipment and computer-readable storage medium
Banik et al. GRU-based named entity recognition system for Bangla online newspapers
CN111611452A (en) Method, system, device and storage medium for ambiguity recognition of search text
US20090327877A1 (en) System and method for disambiguating text labeling content objects
US20190197184A1 (en) Constructing content based on multi-sentence compression of source content
CN111737607B (en) Data processing method, device, electronic equipment and storage medium
CN114742062B (en) Text keyword extraction processing method and system
US20230282018A1 (en) Generating weighted contextual themes to guide unsupervised keyphrase relevance models
CN115455152A (en) Writing material recommendation method and device, electronic equipment and storage medium
CN114818727A (en) Key sentence extraction method and device
CN111368036B (en) Method and device for searching information
CN114328860A (en) Interactive consultation method and device based on multi-model matching and electronic equipment
CN114255067A (en) Data pricing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination