CN115730215A - Universal person portrait generation method and device - Google Patents

Universal person portrait generation method and device

Info

Publication number
CN115730215A
Authority
CN
China
Prior art keywords
person
character
candidate
document
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211489143.4A
Other languages
Chinese (zh)
Inventor
王路路
刘佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhipu Huazhang Technology Co ltd
Original Assignee
Beijing Zhipu Huazhang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhipu Huazhang Technology Co ltd filed Critical Beijing Zhipu Huazhang Technology Co ltd
Priority to CN202211489143.4A priority Critical patent/CN115730215A/en
Publication of CN115730215A publication Critical patent/CN115730215A/en
Pending legal-status Critical Current

Abstract

In the universal person portrait generation method, device, and storage medium provided herein, person information to be queried is acquired, the person information including the name and organization of a person; a corresponding person-related document is obtained based on the name and organization; the document is input into a target person portrait model, which outputs a target sequence containing the person portrait; and the target person portrait corresponding to the person information is generated from that sequence. In this way, a corresponding target person portrait can be generated accurately from only the name and organization of a person, without formulating a different solution for each portrait label. This improves portrait generation efficiency, avoids repeated computation and wasted resources, and gives the method a wide range of application.

Description

Universal person portrait generation method and device
Technical Field
The present application relates to the field of user portrait generation, and in particular to a method, an apparatus, and a storage medium for generating a universal person portrait.
Background
Talent is an important driving force of development. Effectively mining talent information as needed to build a talent service system helps address how to cultivate and retain talent; to this end, an automatic portrait engine can extract or distill the most representative labels for each person.
In the related art, person portraits are mainly mined by divide and conquer, i.e., a different solution is designed for each portrait label. Such methods are not universal, and their range of application is relatively limited.
Disclosure of Invention
The present application provides a universal person portrait generation method, apparatus, and storage medium to solve the above technical problems in the related art.
An embodiment of the first aspect of the present application provides a universal person portrait generation method, including:
acquiring person information to be queried, wherein the person information includes the name and organization of a person;
obtaining a corresponding person-related document based on the name and organization of the person;
inputting the person-related document into a target person portrait model and outputting a target sequence containing the person portrait;
and generating a target person portrait corresponding to the person information based on the target sequence.
Optionally, before inputting the person-related document into the target person portrait model and outputting the target sequence containing the person portrait, the method further includes:
constructing a preset person portrait model;
acquiring a training data set, wherein each training datum in the training data set comprises a person-related document and the real sequence of the person portrait corresponding to that document;
and training the preset person portrait model with the training data set to obtain the target person portrait model.
Optionally, training the preset person portrait model with the training data set to obtain the target person portrait model includes:
inputting the person-related document in each training datum into the preset person portrait model and outputting a prediction sequence containing the person portrait;
obtaining a loss value of a loss function based on the prediction sequence and the corresponding real sequence;
and adjusting the preset person portrait model based on the loss value until convergence, so as to obtain the target person portrait model.
Optionally, obtaining the corresponding person-related document based on the name and organization of the person includes:
obtaining a corresponding first candidate document set based on the name and organization of the person, wherein the first candidate document set comprises a plurality of first candidate documents, each containing a title and an abstract;
extracting the title and abstract of each first candidate document as the content of a corresponding second candidate document, and generating from the plurality of second candidate documents a second candidate document set together with an index relation between the first and second document sets;
and obtaining the corresponding person-related document based on the name and organization of the person, the first candidate document set, and the second candidate document set.
Optionally, obtaining the corresponding person-related document based on the name and organization of the person, the first candidate document set, and the second candidate document set includes:
tokenizing the name and organization of the person to obtain a first token set;
tokenizing each second candidate document to obtain a second token set corresponding to each second candidate document;
calculating a similarity score between the first token set and each second token set, and ranking by similarity score to obtain a preset number of second candidate documents;
obtaining the corresponding preset number of first candidate documents from the preset number of second candidate documents and the index relation;
and splicing the contents of the preset number of first candidate documents to obtain the corresponding person-related document.
Optionally, each first candidate document further includes a url address and a body text; obtaining the corresponding first candidate document set based on the name and organization of the person includes:
acquiring the search engine page corresponding to the name and organization of the person;
parsing the search engine result page based on the name and organization of the person to obtain a plurality of search engine records, and extracting the url address, title, and abstract of each record;
accessing the web page at each url address and extracting its content as the body text corresponding to that url address;
and obtaining the corresponding first candidate document set from the url addresses, titles, abstracts, and body texts of the plurality of search engine records, wherein each first candidate document corresponds to one search engine record and comprises its url address, title, abstract, and body text.
An embodiment of the second aspect of the present application provides a universal person portrait generation apparatus, including:
an acquisition module, configured to acquire person information to be queried, the person information including the name and organization of a person;
a processing module, configured to obtain a corresponding person-related document based on the name and organization of the person;
an output module, configured to input the person-related document into a target person portrait model and output a target sequence containing the person portrait;
and a generation module, configured to generate a target person portrait corresponding to the person information based on the target sequence.
A computer device according to an embodiment of the third aspect of the present application comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method of the first aspect is implemented.
A computer storage medium according to an embodiment of the fourth aspect of the present application stores computer-executable instructions; when executed by a processor, the computer-executable instructions implement the method of the first aspect.
The technical solution provided by the embodiments of the present application has at least the following beneficial effects:
in the universal person portrait generation method, device, and storage medium provided by the present application, person information to be queried is acquired, the person information including the name and organization of a person; a corresponding person-related document is obtained based on the name and organization; the document is input into a target person portrait model, which outputs a target sequence containing the person portrait; and the target person portrait corresponding to the person information is generated from that sequence.
Thus a corresponding target person portrait can be generated accurately from only the name and organization of a person, without formulating a different solution for each portrait label. This improves portrait generation efficiency, avoids repeated computation and wasted resources, and gives the method a wide range of application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flowchart of a person portrait generation method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a person portrait sequence and the corresponding person portrait according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a universal person portrait generation apparatus according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The following describes the universal person portrait generation method and apparatus of the embodiments of the present application with reference to the drawings.
Example one
FIG. 1 is a flowchart of a person portrait generation method according to an embodiment of the present application. As shown in FIG. 1, the method may include the following steps:
Step 101, acquiring the person information to be queried.
In one embodiment of the present application, the person information may include the name and organization of a person.
Step 102, obtaining a corresponding person-related document based on the name and organization of the person.
In an embodiment of the present application, obtaining a corresponding person-related document based on the name and organization of a person may include the following steps:
Step 1021, obtaining a corresponding first candidate document set based on the name and organization of the person.
In an embodiment of the present application, the first candidate document set may include a plurality of first candidate documents, and each first candidate document may include a title, an abstract, a url (uniform resource locator) address, and a body text.
In an embodiment of the present application, obtaining the corresponding first candidate document set based on the name and organization of the person may include the following steps:
Step 1, acquiring the search engine page corresponding to the name and organization of the person.
In an embodiment of the present application, the name and organization of the person may be entered into a search engine to obtain the corresponding search engine result page.
Step 2, parsing the search engine result page based on the name and organization of the person to obtain a plurality of search engine records, and extracting the url address, title, and abstract of each record.
In one embodiment of the present application, the search engine result page is parsed with xpath, based on the name and organization of the person, to obtain the plurality of search engine records.
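The patent only names xpath as the parsing mechanism; a minimal sketch of step 2 might look like the following, where the result-page markup (class names, element layout) is entirely hypothetical, and where real search engines would require their own selectors and an HTML-tolerant parser such as lxml (the stdlib `ElementTree` used here needs well-formed markup and supports only a subset of XPath):

```python
from xml.etree import ElementTree

# Hypothetical, well-formed snippet of a search-result page; real search
# engines use their own markup, so the class names below are assumptions.
SAMPLE_PAGE = """
<div>
  <div class="result">
    <a class="title" href="http://example.org/a">Prof. Wang - Example University</a>
    <span class="abstract">Research interests and biography...</span>
  </div>
  <div class="result">
    <a class="title" href="http://example.org/b">Wang Lu - publications</a>
    <span class="abstract">Selected publications...</span>
  </div>
</div>
"""

def parse_results(page_xml):
    """Extract (url, title, abstract) records from one result page."""
    root = ElementTree.fromstring(page_xml)
    records = []
    # ElementTree supports a small XPath subset; lxml would allow full XPath.
    for result in root.findall(".//div[@class='result']"):
        link = result.find("a[@class='title']")
        abstract = result.find("span[@class='abstract']")
        records.append({
            "url": link.get("href"),
            "title": link.text,
            "abstract": abstract.text,
        })
    return records

records = parse_results(SAMPLE_PAGE)
```

Each record then carries the url/title/abstract triple that steps 3 and 4 below assemble into first candidate documents.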
Step 3, accessing the web page at each url address and extracting its content as the body text corresponding to that url address.
In an embodiment of the present application, the trafilatura toolkit may be used to extract the content of each web page as the body text of the corresponding url address; it removes most of the useless noise, so that the extracted body text is more accurate.
Step 4, obtaining the corresponding first candidate document set from the url addresses, titles, abstracts, and body texts of the plurality of search engine records.
Specifically, in an embodiment of the present application, each first candidate document corresponds to one search engine record and comprises its url address, title, abstract, and body text; the first candidate documents of the plurality of search engine records together constitute the first candidate document set.
Step 1022, extracting the title and abstract of each first candidate document as the content of a corresponding second candidate document, and generating from the plurality of second candidate documents a second candidate document set together with an index relation between the first and second document sets.
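Step 1022 can be sketched as follows. The dictionary field names are assumptions, and the index relation here is the identity map because the second candidate documents are produced one per first document, in order:

```python
def build_second_set(first_docs):
    """Condense each first candidate document (url/title/abstract/body) into a
    second candidate document holding only title + abstract, and record the
    index mapping back to the first set."""
    second_docs = []
    index_map = {}  # second-set index -> first-set index
    for i, doc in enumerate(first_docs):
        second_docs.append({"content": doc["title"] + " " + doc["abstract"]})
        index_map[len(second_docs) - 1] = i
    return second_docs, index_map

first_docs = [
    {"url": "u0", "title": "t0", "abstract": "a0", "body": "b0"},
    {"url": "u1", "title": "t1", "abstract": "a1", "body": "b1"},
]
second_docs, index_map = build_second_set(first_docs)
```

Ranking then runs over the short title+abstract texts, while the index map recovers the full documents (with body text) afterwards, which is the point of keeping two sets.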
Step 1023, obtaining the corresponding person-related document based on the name and organization of the person, the first candidate document set, and the second candidate document set.
In an embodiment of the present application, this may include the following steps:
Step a, tokenizing the name and organization of the person to obtain a first token set.
Step b, tokenizing each second candidate document to obtain a second token set corresponding to each second candidate document.
In one embodiment of the present disclosure, steps a and b may be performed with a tokenizer.
Step c, calculating a similarity score between the first token set and each second token set, and ranking by similarity score to obtain a preset number of second candidate documents.
In an embodiment of the present application, the BM25 retrieval algorithm may be used to calculate the similarity score between the first token set and each second token set.
In an embodiment of the present application, ranking by similarity score to obtain the preset number of second candidate documents may include: sorting in descending order of similarity score and taking the top preset number of second candidate documents.
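The patent names BM25 but gives no parameters; the sketch below implements textbook Okapi BM25 with the common defaults k1 = 1.5 and b = 0.75 (both assumptions) and performs the descending-order selection of step c:

```python
import math
from collections import Counter

def bm25_rank(query_tokens, docs_tokens, k1=1.5, b=0.75, top_n=2):
    """Score each tokenized document against the query with Okapi BM25 and
    return the indices of the top_n documents, best first."""
    n = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / n
    # document frequency of each distinct query term
    df = {t: sum(1 for d in docs_tokens if t in d) for t in set(query_tokens)}
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return sorted(range(n), key=lambda i: scores[i], reverse=True)[:top_n]

ranked = bm25_rank(["wang", "university"],
                   [["wang", "lu", "university", "professor"],
                    ["weather", "report", "today"],
                    ["wang", "publications"]])   # -> [0, 2]
```

Here the document matching both query terms ranks first, the partial match second, and the irrelevant document is dropped by `top_n`.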
Step d, obtaining the corresponding preset number of first candidate documents from the preset number of second candidate documents and the index relation.
Step e, splicing the contents of the preset number of first candidate documents to obtain the corresponding person-related document.
Step 103, inputting the person-related document into the target person portrait model and outputting a target sequence containing the person portrait.
In an embodiment of the present application, before inputting the person-related document into the target person portrait model and outputting the target sequence, the method may further include the following steps:
Step 1031, constructing a preset person portrait model.
In an embodiment of the present application, the preset person portrait model may be an MBart model with a multilingual encoder-decoder. In one embodiment, the maximum input length of the MBart model is set to 4096, so that overly long text is not truncated and valid information is not lost.
In one embodiment of the present application, constrained decoding is employed during model decoding, i.e., knowledge of the portrait structure is injected into the decoder as hints to ensure that a valid portrait structure sequence is generated. The portrait structure knowledge can be set according to the required portrait labels, so that the preset person portrait model can be applied to different portrait labels.
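The patent does not specify how the structure knowledge is encoded; one common realization of constrained decoding is to mask, at each step, every token the structure grammar forbids (e.g. via a prefix-conditioned allowed-token function). The toy grammar below, for flat bracketed label-value pairs, is an illustrative assumption rather than the patent's actual scheme:

```python
def allowed_next(prefix, labels, values):
    """Return the set of tokens that may legally follow the current prefix
    when generating a bracketed portrait sequence such as
    ['(', 'gender', 'male', ')'] -- a toy grammar for illustration."""
    if not prefix or prefix[-1] == ")":
        # start a new label-value pair, or (once something exists) stop
        return {"("} | ({"<eos>"} if prefix else set())
    if prefix[-1] == "(":
        return set(labels)   # a portrait label must follow an opening bracket
    if prefix[-1] in labels:
        return set(values)   # an attribute value must follow a label
    return {")"}             # after the value, close the bracket
```

During beam search, the decoder's logits for every token outside this set would be set to negative infinity, so only valid portrait-structure sequences can be emitted.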
Step 1032, acquiring a training data set, wherein each training datum comprises a person-related document and the real sequence of the person portrait corresponding to that document.
In an embodiment of the present application, the real sequence of a person portrait may be obtained by storing the real portrait as a linear tag tree and using that tree to represent the sequence.
Specifically, in an embodiment of the present application, a real person portrait may be converted into a tag tree as follows: the name and organization of the person form the root node; the portrait labels are linked to the root node; and the attribute values of each portrait label are attached to that label's node as leaf nodes. The tag tree is then converted into a linear string by depth-first traversal, with "(" and ")" marking the structure. For an input example containing multiple portrait labels, every label is linked directly to the root node, and when one label has multiple values, the values are linked directly to that label's node. If the value of a label is empty, it is replaced with "unknown". Fig. 2 is a schematic diagram of a person portrait sequence and the corresponding person portrait according to an embodiment of the disclosure; as shown in FIG. 2, the real sequence is obtained by storing the real portrait as this linear tag tree and serializing it with depth-first traversal.
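The linearization described above can be sketched as follows, under the stated conventions: root = name + organization, "(" and ")" as structure markers, and "unknown" for empty values. The exact token layout within the brackets is an assumption, since the patent does not fix a precise serialization:

```python
def portrait_to_sequence(name, org, portrait):
    """Serialize a person portrait into the linear bracketed string described
    above: the root is the name plus organization, each portrait label is a
    child of the root, attribute values are leaves, and empty value lists
    become 'unknown'."""
    parts = ["(", name + " " + org]
    for label, values in portrait.items():   # dict order is insertion order
        parts += ["(", label]
        if not values:
            values = ["unknown"]
        parts += values
        parts.append(")")
    parts.append(")")
    return " ".join(parts)

seq = portrait_to_sequence("Wang Lu", "Example University",
                           {"gender": ["male"], "title": []})
# -> "( Wang Lu Example University ( gender male ) ( title unknown ) )"
```

This flat depth-first traversal matches a tree of depth two; deeper label hierarchies would need a recursive variant.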
In an embodiment of the present application, the person-related documents in the training data may be obtained through step 102 above.
Step 1033, training the preset person portrait model with the training data set to obtain the target person portrait model.
In an embodiment of the present application, this training may include the following steps:
Step one, inputting the person-related document of each training datum into the preset person portrait model and outputting a prediction sequence containing the person portrait;
Step two, obtaining a loss value of the loss function based on the prediction sequence and the corresponding real sequence;
Step three, adjusting the preset person portrait model based on the loss value until convergence, so as to obtain the target person portrait model.
Step 104, generating the target person portrait corresponding to the person information based on the target sequence.
In an embodiment of the present application, this may include: extracting from the target sequence the attribute values corresponding to each portrait label, and associating the values with their labels to generate the corresponding target person portrait. In particular, the target sequence may be converted into the target person portrait as shown in FIG. 2.
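Step 104's inverse mapping, from a generated sequence back to label-value pairs, can be sketched as below; the sequence format follows the linearization described in step 1032 and is an assumption to that extent:

```python
def sequence_to_portrait(seq):
    """Invert the linearization: recover {label: values} from a generated
    sequence like '( Wang Lu Example University ( gender male ) )'."""
    tokens = seq.split()
    portrait = {}
    i = 0
    # skip the root: the first '(' and everything up to the first nested '('
    while i < len(tokens) and not (tokens[i] == "(" and i > 0):
        i += 1
    while i < len(tokens):
        if tokens[i] == "(":
            label = tokens[i + 1]
            j = i + 2
            values = []
            while tokens[j] != ")":
                values.append(tokens[j])
                j += 1
            portrait[label] = values
            i = j + 1
        else:
            i += 1
    return portrait

portrait = sequence_to_portrait(
    "( Wang Lu Example University ( gender male ) ( title unknown ) )")
# -> {"gender": ["male"], "title": ["unknown"]}
```

A production version would also validate bracket balance, since even constrained decoding benefits from a defensive parser.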
In one embodiment of the present application, the target person portrait may be the basic portrait of a person, which may include basic attributes such as gender, job title, email address, and homepage address.
In the universal person portrait generation method provided by the present application, person information to be queried is acquired, the person information including the name and organization of a person; a corresponding person-related document is obtained based on the name and organization; the document is input into the target person portrait model, which outputs a target sequence containing the person portrait; and the target person portrait corresponding to the person information is generated from that sequence. Thus a corresponding target person portrait can be generated accurately from only the name and organization of a person, without formulating a different solution for each portrait label, which improves portrait generation efficiency, avoids repeated computation and wasted resources, and gives the method a wide range of application.
FIG. 3 is a schematic structural diagram of a universal person portrait generation apparatus according to an embodiment of the present application. As shown in FIG. 3, the apparatus may include:
an acquisition module 301, configured to acquire person information to be queried, the person information including the name and organization of a person;
a processing module 302, configured to obtain a corresponding person-related document based on the name and organization of the person;
an output module 303, configured to input the person-related document into a target person portrait model and output a target sequence containing the person portrait;
a generation module 304, configured to generate a target person portrait corresponding to the person information based on the target sequence.
In an embodiment of the present application, the apparatus is further configured to:
construct a preset person portrait model;
acquire a training data set, wherein each training datum comprises a person-related document and the real sequence of the person portrait corresponding to that document;
and train the preset person portrait model with the training data set to obtain the target person portrait model.
In the universal person portrait generation apparatus provided by the present application, person information to be queried is acquired, the person information including the name and organization of a person; a corresponding person-related document is obtained based on the name and organization; the document is input into the target person portrait model, which outputs a target sequence containing the person portrait; and the target person portrait corresponding to the person information is generated from that sequence. Thus a corresponding target person portrait can be generated accurately from only the name and organization of a person, without formulating a different solution for each portrait label, which improves portrait generation efficiency, avoids repeated computation and wasted resources, and gives the method a wide range of application.
To implement the above embodiments, the present application also provides a computer device.
The computer device provided by the embodiments of the present application comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the method shown in FIG. 1 can be implemented.
To implement the above embodiments, the present application further provides a computer storage medium.
The computer storage medium provided by the embodiments of the present application stores computer-executable instructions; when executed by a processor, the computer-executable instructions implement the method shown in FIG. 1.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are exemplary and should not be construed as limiting the present application and that changes, modifications, substitutions and alterations in the above embodiments may be made by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A universal person portrait generation method, comprising:
acquiring person information to be queried, wherein the person information includes the name and organization of a person;
obtaining a corresponding person-related document based on the name and organization of the person;
inputting the person-related document into a target person portrait model and outputting a target sequence containing the person portrait;
and generating a target person portrait corresponding to the person information based on the target sequence.
2. The method of claim 1, wherein, before inputting the person-related document into the target person portrait model and outputting the target sequence containing the person portrait, the method further comprises:
constructing a preset person portrait model;
acquiring a training data set, wherein each training datum in the training data set comprises a person-related document and the real sequence of the person portrait corresponding to that document;
and training the preset person portrait model with the training data set to obtain the target person portrait model.
3. The method of claim 2, wherein training the preset person portrait model with the training data set to obtain the target person portrait model comprises:
inputting the person-related document in each training datum into the preset person portrait model and outputting a prediction sequence containing the person portrait;
obtaining a loss value of a loss function based on the prediction sequence and the corresponding real sequence;
and adjusting the preset person portrait model based on the loss value until convergence to obtain the target person portrait model.
4. The method of claim 1, wherein obtaining the corresponding person-related document based on the name and organization of the person comprises:
obtaining a corresponding first candidate document set based on the name and organization of the person, wherein the first candidate document set comprises a plurality of first candidate documents, each containing a title and an abstract;
extracting the title and abstract of each first candidate document as the content of a corresponding second candidate document, and generating from the plurality of second candidate documents a second candidate document set together with an index relation between the first and second document sets;
and obtaining the corresponding person-related document based on the name and organization of the person, the first candidate document set, and the second candidate document set.
5. The method of claim 4, wherein obtaining the corresponding person-related document based on the name and organization of the person, the first candidate document set, and the second candidate document set comprises:
tokenizing the name and organization of the person to obtain a first token set;
tokenizing each second candidate document to obtain a second token set corresponding to each second candidate document;
calculating a similarity score between the first token set and each second token set, and ranking by similarity score to obtain a preset number of second candidate documents;
obtaining the corresponding preset number of first candidate documents from the preset number of second candidate documents and the index relation;
and splicing the contents of the preset number of first candidate documents to obtain the corresponding person-related document.
6. The method of claim 4, wherein each first candidate document further comprises a uniform resource locator (URL) address and a body text; and obtaining the corresponding first candidate document set based on the name and organization of the person comprises:
acquiring a search engine page corresponding to the name and organization of the person;
parsing the search engine page based on the name and organization of the person to obtain a plurality of search engine records, and extracting the URL address, title, and abstract of each search engine record;
accessing the web page corresponding to each URL address, and extracting its content as the body text corresponding to that URL address;
and obtaining the corresponding first candidate document set based on the URL address, title, abstract, and body text corresponding to each of the plurality of search engine records, wherein each first candidate document corresponds to one search engine record and comprises that record's URL address, title, abstract, and body text.
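The assembly step of claim 6 can be sketched as follows. The record tuple shape and the injected `fetch` callable, which would wrap an HTTP client and a page-content extractor in a real system, are illustrative assumptions rather than the patent's implementation.

```python
# Sketch of claim 6: assemble the first candidate set from search engine
# records, one first candidate document per record.

def build_first_candidates(records, fetch):
    """records: iterable of (url, title, abstract) tuples parsed from a
    search engine page; fetch(url) returns the page's extracted body text."""
    first_candidates = []
    for url, title, abstract in records:
        first_candidates.append({
            "url": url,
            "title": title,
            "abstract": abstract,
            "body": fetch(url),  # body text extracted from the web page
        })
    return first_candidates
```

Passing `fetch` in as a parameter keeps the assembly logic testable without network access, which is a design choice of this sketch, not something the claim specifies.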
7. A universal person portrait generation apparatus, comprising:
an acquisition module, configured to acquire person information to be queried, the person information comprising a name and an organization of a person;
a processing module, configured to obtain a corresponding person-related document based on the name and organization of the person;
an output module, configured to input the person-related document into a target person portrait model and output a target sequence containing a person portrait;
and a generation module, configured to generate a target person portrait corresponding to the person information based on the target sequence.
8. The apparatus of claim 7, wherein the apparatus is further configured to:
constructing a preset person portrait model;
acquiring a training data set, wherein each training sample in the training data set comprises a person-related document and a ground-truth sequence of the person portrait corresponding to that document;
and training the preset person portrait model with the training data set to obtain the target person portrait model.
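The training described in claim 8, together with the loss-driven adjustment until convergence in the preceding claims, follows the standard pattern below. A one-parameter least-squares model stands in for the portrait model here as an illustrative assumption; the patent's model would be a neural sequence model, not this toy fit.

```python
# Sketch of claims 3/8: repeatedly compute a loss on the training data and
# adjust the preset model's parameter until the loss stops changing
# (convergence), yielding the trained target model.

def train_until_convergence(data, lr=0.1, tol=1e-9, max_steps=10_000):
    """data: list of (x, y) pairs; fits y = w * x by gradient descent."""
    w = 0.0  # preset (initial) parameter
    prev_loss = float("inf")
    for _ in range(max_steps):
        loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
        if abs(prev_loss - loss) < tol:  # convergence criterion
            break
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # adjust the model based on the loss value
        prev_loss = loss
    return w
```

The loop mirrors the claim's structure: loss computation, parameter adjustment based on the loss, and a convergence test that ends training.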
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1-6 when executing the program.
10. A computer storage medium, wherein the computer storage medium stores computer-executable instructions which, when executed by a processor, perform the method of any one of claims 1-6.
CN202211489143.4A 2022-11-25 2022-11-25 Universal character portrait generation method and device Pending CN115730215A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211489143.4A CN115730215A (en) 2022-11-25 2022-11-25 Universal character portrait generation method and device

Publications (1)

Publication Number Publication Date
CN115730215A true CN115730215A (en) 2023-03-03

Family

ID=85298300

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211489143.4A Pending CN115730215A (en) 2022-11-25 2022-11-25 Universal character portrait generation method and device

Country Status (1)

Country Link
CN (1) CN115730215A (en)

Similar Documents

Publication Publication Date Title
Mehler et al. Genres on the web: Computational models and empirical studies
JP4365074B2 (en) Document expansion system with user-definable personality
US8150859B2 (en) Semantic table of contents for search results
CN1815477B (en) Method and system for providing semantic subjects based on mark language
CN110443571A (en) The method, device and equipment of knowledge based map progress resume assessment
US8577887B2 (en) Content grouping systems and methods
CN103886020B (en) A kind of real estate information method for fast searching
JP2012532395A (en) Selective content extraction
CN104123269A (en) Semi-automatic publication generation method and system based on template
CN114238573B (en) Text countercheck sample-based information pushing method and device
Evert A Lightweight and Efficient Tool for Cleaning Web Pages.
US20200175268A1 (en) Systems and methods for extracting and implementing document text according to predetermined formats
US20050138079A1 (en) Processing, browsing and classifying an electronic document
Alami et al. Hybrid method for text summarization based on statistical and semantic treatment
JP2007047974A (en) Information extraction device and information extraction method
CN115438162A (en) Knowledge graph-based disease question-answering method, system, equipment and storage medium
Martín-Valdivia et al. Using information gain to improve multi-modal information retrieval systems
US20050138028A1 (en) Processing, browsing and searching an electronic document
Martínez-González et al. On the evaluation of thesaurus tools compatible with the Semantic Web
Bia et al. The Miguel de Cervantes digital library: the Hispanic voice on the web
JP2021064143A (en) Sentence generating device, sentence generating method, and sentence generating program
JP5679400B2 (en) Category theme phrase extracting device, hierarchical tagging device and method, program, and computer-readable recording medium
CN115730215A (en) Universal character portrait generation method and device
CN114328895A (en) News abstract generation method and device and computer equipment
JP2010282403A (en) Document retrieval method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination