CN112685534A - Method and apparatus for generating context information of authored content during authoring process - Google Patents


Info

Publication number
CN112685534A
CN112685534A
Authority
CN
China
Prior art keywords
information, target, timeline, intermediate work, timeline information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011545832.3A
Other languages
Chinese (zh)
Other versions
CN112685534B (en)
Inventor
程翰 (Cheng Han)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhangmen Science and Technology Co Ltd filed Critical Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN202011545832.3A priority Critical patent/CN112685534B/en
Publication of CN112685534A publication Critical patent/CN112685534A/en
Priority to PCT/CN2021/119605 priority patent/WO2022134683A1/en
Application granted granted Critical
Publication of CN112685534B publication Critical patent/CN112685534B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 — Information retrieval of unstructured textual data
    • G06F16/33 — Querying
    • G06F16/35 — Clustering; Classification
    • G06F40/00 — Handling natural language data
    • G06F40/20 — Natural language analysis
    • G06F40/205 — Parsing
    • G06F40/216 — Parsing using statistical methods
    • G06F40/30 — Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An object of the present application is to provide a method and apparatus for generating context information of authored content during an authoring process, the method comprising: processing an intermediate work in an authoring process to obtain one or more first terms of the intermediate work; screening, from the one or more first terms, one or more key terms corresponding to the intermediate work; and generating term timeline information of the key terms in the intermediate work according to the position information of the key terms in the intermediate work. The application helps users participating in work creation to quickly and comprehensively trace the threads of an intermediate work, makes it easier for them to look up information, and reduces the tedious operations otherwise required when reviewing and organizing the intermediate work.

Description

Method and apparatus for generating context information of authored content during authoring process
Technical Field
The present application relates to the field of communications, and more particularly, to a technique for generating context information of authored content during an authoring process.
Background
Currently, when creating a novel, an author usually needs to review earlier content, either to recover the writing thread or to keep earlier and later content consistent. Typically, authors organize an article by bookmarking passages or by keyword-searching the text for the relevant content.
Disclosure of Invention
An object of the present application is to provide a method and apparatus for generating context information of authored content during an authoring process.
According to an aspect of the present application, there is provided a method of generating context information of authored content in an authoring process, the method comprising:
processing an intermediate work in an authoring process to obtain one or more first terms of the intermediate work;
screening one or more key terms corresponding to the intermediate work from the one or more first terms;
generating word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information comprises one or more time point information of the key words in the intermediate work.
According to an aspect of the present application, there is provided an apparatus for generating context information of authored content in an authoring process, the apparatus comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
processing an intermediate work in an authoring process to obtain one or more first terms of the intermediate work;
screening one or more key terms corresponding to the intermediate work from the one or more first terms;
generating word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information comprises one or more time point information of the key words in the intermediate work.
According to one aspect of the application, there is provided a computer-readable medium storing instructions that, when executed, cause a system to:
processing an intermediate work in an authoring process to obtain one or more first terms of the intermediate work;
screening one or more key terms corresponding to the intermediate work from the one or more first terms;
generating word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information comprises one or more time point information of the key words in the intermediate work.
According to an aspect of the present application, there is provided an apparatus for generating context information of authored content in an authoring process, the apparatus comprising:
a first module, configured to process the intermediate work in the authoring process to obtain one or more first terms of the intermediate work;
a second module, configured to screen, from the one or more first terms, one or more key terms corresponding to the intermediate work;
and a third module, configured to generate term timeline information of the key terms in the intermediate work according to the position information of the key terms in the intermediate work, where the term timeline information includes one or more time point information of the key terms in the intermediate work.
Compared with the prior art, the present application performs word segmentation on an intermediate work during the authoring process to obtain one or more first terms of the intermediate work, screens out one or more key terms corresponding to the intermediate work, and generates term timeline information of the key terms in the intermediate work according to the chapter position information of the key terms. This helps users participating in work creation to quickly and comprehensively trace the threads of the intermediate work, makes it easier for them to look up information, and reduces the tedious operations otherwise required when reviewing and organizing the intermediate work.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a method of generating context information for authored content during an authoring process according to one embodiment of the present application;
FIG. 2 illustrates a block diagram of an apparatus for generating context information of authored content during an authoring process according to one embodiment of the present application;
FIG. 3 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in this application includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smartphone or a tablet computer; the mobile electronic product may run any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, whose hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which one virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device and the network device, the touch terminal, or the network device and the touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 shows a flowchart of a method for generating context information of authored content during an authoring process according to an embodiment of the present application; the method comprises steps S11, S12, and S13. In step S11, the device 1 processes the intermediate work in the authoring process to obtain one or more first terms of the intermediate work; in step S12, the device 1 screens, from the one or more first terms, one or more key terms corresponding to the intermediate work; in step S13, the device 1 generates term timeline information of the key terms in the intermediate work according to the position information of the key terms in the intermediate work, where the term timeline information includes one or more time point information of the key terms in the intermediate work.
In step S11, the device 1 processes the intermediate work in the authoring process to obtain one or more first words of the intermediate work.
In some embodiments, works include, but are not limited to, textual works such as novels and scripts, and audio-visual works such as radio dramas and movies. An intermediate work is a work whose creation has not yet been completed; for an unfinished novel, for example, this includes the text of completed chapters and the text of the chapter the user is currently writing. In the following embodiments, unless otherwise specified, the present application explains each embodiment by taking a textual work such as a novel as an example; those skilled in the art will appreciate that the following embodiments may also be applied to other types of works.
In some embodiments, the device 1 may obtain an intermediate work uploaded by the user, or query a work database according to the work identification information corresponding to the work the user is creating to obtain the corresponding intermediate work. In some embodiments, if the intermediate work is a text-type intermediate work, the processing includes performing word segmentation on the intermediate work to obtain the one or more first terms. Further, the processing may also include part-of-speech analysis of the word segmentation result, so that the obtained first terms are nouns, improving the efficiency of generating the context information of the intermediate work; or the processing may include character-name recognition of the word segmentation result to obtain first terms containing character objects, making it convenient to trace the context information of the characters in the intermediate work. In some embodiments, if the intermediate work is an audio-visual intermediate work, the processing includes: converting the audio information included in the intermediate work into text information, and processing the text information to obtain the one or more first terms; the processing of the text information is the same as or similar to the processing of a text-type intermediate work in the foregoing embodiments, and is therefore not repeated but included herein by reference.
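As an illustration of the word-segmentation step above (the patent does not prescribe a particular segmenter; a production system would use a proper one, such as a Chinese word segmenter with part-of-speech filtering), a minimal sketch with a naive tokenizer — the function name and length cutoff are illustrative assumptions:

```python
import re

def extract_first_terms(text):
    """Naive stand-in for the word-segmentation step: split the
    intermediate work's text into candidate first terms. A real
    system would use a proper segmenter plus part-of-speech
    filtering to keep nouns only."""
    tokens = re.findall(r"\w+", text.lower())
    # drop very short tokens as a crude noise filter
    return [t for t in tokens if len(t) >= 2]

terms = extract_first_terms("The sword master drew the sword at dawn.")
print(terms)  # every occurrence is kept; frequencies are counted later
```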
In step S12, the device 1 filters one or more key terms corresponding to the intermediate work from the one or more first terms.
In some embodiments, the device 1 obtains the one or more key terms corresponding to the intermediate work by filtering according to the cumulative word frequency of each first term in the intermediate work, or according to the scene tag information to which each first term belongs. For example, the device 1 takes the first terms whose cumulative word frequency is greater than or equal to a predetermined threshold as the key terms corresponding to the intermediate work. Alternatively, the device 1 filters, according to the scene tag information to which each first term belongs, one or more key terms matching the work tag information corresponding to the intermediate work. The scene tag information to which a first term belongs can be determined by matching the term's features in a tag information base. The tag information base stores a mapping between term features and scene tag information; for example, the scene tag information corresponding to terms containing "knife" is "weapon", and the scene tag information corresponding to terms containing "car" is "traffic". The work tag information corresponding to the intermediate work can be determined according to the type of the work. Different work tag information corresponds to different scene tag information; for example, the work tag of novel A is "martial arts", and its matched scene tags are "weapon" and "martial art".
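The cumulative-word-frequency screening described above can be sketched as follows; the function name and threshold value are illustrative, not taken from the patent:

```python
from collections import Counter

def filter_key_terms(first_terms, min_freq=2):
    """Keep first terms whose cumulative frequency in the intermediate
    work meets a predetermined threshold (illustrative value)."""
    freq = Counter(first_terms)
    return {term for term, count in freq.items() if count >= min_freq}

print(filter_key_terms(["sword", "horse", "sword", "inn"], min_freq=2))
# → {'sword'}
```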
In step S13, the device 1 generates term timeline information of the key term in the intermediate work according to the position information of the key term in the intermediate work, wherein the term timeline information includes one or more time point information of the key term in the intermediate work.
In some embodiments, the position information includes, but is not limited to, chapter position information or time progress information. The chapter position information includes the in-chapter position of the key term within its chapter, for example, the key term is located at the 50% position of chapter 8; or the position, within the intermediate work, of the chapter where the key term is located, for example, if the key term is in chapter 8 and the novel currently has 800 chapters, the key term is at the 1% position of the work. The time progress information includes the playing time of the audio information corresponding to the key term in the intermediate work, for example, if the audio information corresponding to a key term is played at the 3'15" mark of a radio drama, the position information of the key term can be recorded as 3'15"; alternatively, it includes the playing progress of the audio information corresponding to the key term, for example, if a key term corresponds to audio information played at the 5% point of a radio drama, its position information can be recorded as 5%.
In some embodiments, the device 1 generates the term timeline information of a key term in the intermediate work according to the chapter position information and the chapter order in the intermediate work. For example, if the key term w in novel A appears at 23% of chapter 1, 85% of chapter 75, 10% of chapter 366, and 77% of chapter 366 of the novel text, the term timeline corresponding to w is generated in chapter order: [chap1: 0.23, chap75: 0.85, chap366: 0.10, chap366: 0.77].
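The chapter-order timeline in the example above can be reproduced by a short sketch; the tuple representation and function name are assumptions for illustration:

```python
def build_term_timeline(occurrences):
    """occurrences: (chapter_number, in_chapter_fraction) pairs for one
    key term. Returns the term timeline sorted in chapter order, with
    repeated chapters kept in position order."""
    return ["chap%d: %.2f" % (ch, pos) for ch, pos in sorted(occurrences)]

timeline = build_term_timeline([(366, 0.10), (1, 0.23), (366, 0.77), (75, 0.85)])
print(timeline)
# → ['chap1: 0.23', 'chap75: 0.85', 'chap366: 0.10', 'chap366: 0.77']
```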
In some embodiments, the device 1 generates the term timeline information of a key term in the intermediate work according to the chapter position information and the completion time order of the chapters in the intermediate work. For example, if the key term w in novel A appears at 23% of chapter 1, 85% of chapter 75, 10% of chapter 366, and 77% of chapter 366 of the novel text, and chapter 75 was completed later than chapter 366, the term timeline corresponding to w is generated in chapter completion order: [chap1: 0.23, chap366: 0.10, chap366: 0.77, chap75: 0.85].
In some embodiments, the term timeline information further includes assignment information for each time point of the key term in the intermediate work. The assignment information includes the frequency with which the key term appears at that time point. For example, if the device 1 determines that the chapters of novel A in which the key term w appears are chapters 1, 5, …, n, with respective frequencies 3, 1, …, m, the term timeline corresponding to w may be: [chap1: 3, chap5: 1, …, chapn: m].
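The per-time-point assignment (frequency) information can be derived directly from the occurrence list, as in this sketch with illustrative names:

```python
from collections import Counter

def timeline_with_frequencies(chapters):
    """chapters: one chapter number per occurrence of the key term.
    Returns (chapter, frequency) pairs in chapter order, matching the
    [chap1: 3, chap5: 1, ...] form in the text."""
    counts = Counter(chapters)
    return [("chap%d" % ch, counts[ch]) for ch in sorted(counts)]

print(timeline_with_frequencies([1, 1, 1, 5]))  # → [('chap1', 3), ('chap5', 1)]
```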
In some embodiments, the method further comprises: step S14 (not shown), in which the device 1 acquires the point-of-interest information of a user participating in the authoring process regarding the intermediate work; step S15 (not shown), in which the device 1 determines, from the one or more key terms, one or more target key terms matching the point-of-interest information; and step S16 (not shown), in which the device 1 generates content timeline information of the point-of-interest information in the intermediate work according to the term timeline information corresponding to each of the one or more target key terms.
In step S14, the device 1 acquires the point-of-interest information of the user participating in the authoring process regarding the intermediate work. In some embodiments, the point-of-interest information includes content that the creator is interested in during the creation process and whose context within the work the creator wishes to trace, for example, a certain character or a certain item in the work. By acquiring the point-of-interest information, the device 1 can help the user organize the work content of interest, making it convenient for the user to review earlier text and organize the creative thread. In some embodiments, the point-of-interest information may be obtained through a trigger operation by the user. For example, the device 1 determines the point-of-interest information from text selected by an operation such as a click, long press, or gesture; as another example, the device 1 collects the user's voice through a microphone and determines the corresponding point-of-interest information through speech recognition. In some embodiments, the point-of-interest information may also be determined by the device 1 from the user's current authored content.
For example, the device 1 determines the corresponding point-of-interest information from the user's current input content, or from the chapter information (e.g., chapter name) the user is currently writing, or from one or more recently completed chapters of the novel: in the last case, the device 1 obtains a plurality of second terms from those chapters through word segmentation, and determines the point-of-interest information from the second terms whose frequency in those chapters exceeds a predetermined threshold, or from the most frequent second terms.
It should be understood by those skilled in the art that the above-mentioned method for obtaining point of interest information is only an example, and other existing or subsequent methods that may be used for obtaining point of interest information of a user on an intermediate work may be applicable to this embodiment, and should be included in the scope of protection of this embodiment, and are incorporated herein by reference.
In step S15, the device 1 determines, from the one or more key terms, one or more target key terms matching the point-of-interest information. In some embodiments, the device 1 may obtain one or more focus terms corresponding to the point-of-interest information through word segmentation, and determine, from the one or more key terms, the target key terms matching those focus terms. In some embodiments, the device 1 may determine the target scene information corresponding to the point-of-interest information according to the focus terms, or by semantic analysis of the point-of-interest information, and further determine the target key terms matching that target scene information.
In some embodiments, the step S15 includes: determining the target scene information corresponding to the point-of-interest information; and determining, from the one or more key terms according to the scene tag information corresponding to each key term, one or more target key terms matching the point-of-interest information, where the scene tag information corresponding to each target key term matches the target scene information. In some embodiments, the device 1 performs semantic analysis on a key term, or on the paragraph where it occurs, and determines by matching in a tag information base the scene tag information corresponding to that key term. The device 1 then determines, according to the target scene information corresponding to the point-of-interest information, which scene tag information matches it, and from that scene tag information determines the corresponding target key terms.
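Matching target key terms through scene tag information, as described above, might look like the following sketch; the tag vocabulary and the dict standing in for the tag information base are invented for illustration:

```python
def match_target_key_terms(key_terms, term_scene_tags, target_scenes):
    """Select the key terms whose scene tag (looked up in a tag
    information base, here a plain dict) matches the target scene
    information derived from the point-of-interest information."""
    return [t for t in key_terms if term_scene_tags.get(t) in target_scenes]

scene_tags = {"broadsword": "weapon", "carriage": "traffic", "palm strike": "martial art"}
matched = match_target_key_terms(["broadsword", "carriage", "palm strike"],
                                 scene_tags, {"weapon", "martial art"})
print(matched)  # → ['broadsword', 'palm strike']
```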
In step S16, the device 1 generates the content timeline information of the point-of-interest information in the intermediate work according to the term timeline information corresponding to each of the one or more target key terms. In some embodiments, the device 1 queries, for the one or more target key terms, the target term timeline information corresponding to each target key term in the intermediate work, and combines the target term timeline information to generate the content timeline information corresponding to the point-of-interest information. For example, the target term timeline information corresponding to the point-of-interest information acquired by the device 1 is shown in Table 1. The device 1 combines identical time points in the target term timeline information to generate the corresponding content timeline information: [chap1: 3, chap10: 4, chap17: 1, chap188: 6, chap256: 2, chap344: 4, chap598: 4, chap660: 1].
TABLE 1. Example target term timeline information

Target key term | Time point 1 | Frequency 1 | Time point 2 | Frequency 2 | Time point 3 | Frequency 3
w1 | chap1 | 2 | chap17 | 1 | chap598 | 1
w2 | chap10 | 4 | chap188 | 1 | chap598 | 3
w3 | chap1 | 1 | chap344 | 1 | chap660 | 1
w4 | chap188 | 5 | chap256 | 2 | chap344 | 3
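The combination step applied to Table 1 — summing the frequencies of identical time points across the target term timelines — can be sketched as below; it reproduces the content timeline given in the text:

```python
from collections import Counter

def merge_timelines(term_timelines):
    """Sum the frequencies of identical time points (here, chapters)
    across all target term timelines."""
    merged = Counter()
    for timeline in term_timelines:
        for chapter, freq in timeline:
            merged[chapter] += freq
    # sort by chapter number for display
    return sorted(merged.items(), key=lambda kv: int(kv[0][len("chap"):]))

table1 = [
    [("chap1", 2), ("chap17", 1), ("chap598", 1)],     # w1
    [("chap10", 4), ("chap188", 1), ("chap598", 3)],   # w2
    [("chap1", 1), ("chap344", 1), ("chap660", 1)],    # w3
    [("chap188", 5), ("chap256", 2), ("chap344", 3)],  # w4
]
print(merge_timelines(table1))
# → [('chap1', 3), ('chap10', 4), ('chap17', 1), ('chap188', 6),
#    ('chap256', 2), ('chap344', 4), ('chap598', 4), ('chap660', 1)]
```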
In some embodiments, the step S16 includes: step S161 (not shown), in which the device 1 obtains one or more target term timeline information corresponding to the one or more target key terms, where each target term timeline information corresponds to one of the target key terms; and step S162 (not shown), in which the device 1 merges the one or more target term timeline information in the time dimension to obtain the content timeline information of the point-of-interest information in the intermediate work, where the content timeline information includes one or more time segment information, and each time segment information includes at least one time point of at least one target term timeline. In some embodiments, the device 1 may look up the one or more target term timeline information corresponding to the point-of-interest information in a term timeline information base corresponding to the intermediate work. In some embodiments, the device 1 merges identical or similar time points (e.g., different progress points of the same chapter) in the target term timeline information to obtain the content timeline information corresponding to the point-of-interest information. For example, the device 1 determines that the timeline information of the target key term w1 corresponding to the point-of-interest information is [chap1: 0.1: 3, chap17: 0.7: 1, chap256: 0.4: 2, chap598: 0.2: 1] and that of the target key term w2 is [chap1: 0.8: 1, chap256: 0.4: 1, chap660: 0.4: 1]. The device 1 can merge "chap256: 0.4: 2" with the identical time point "chap256: 0.4: 1" in the timeline information of w2, obtaining the content timeline information corresponding to the point-of-interest information: [chap1: 0.1: 3, chap1: 0.8: 1, chap17: 0.7: 1, chap256: 0.4: 3, chap598: 0.2: 1, chap660: 0.4: 1].
Further, the device 1 may also merge "chap1: 0.1: 3" with the similar time point (e.g., same chapter) "chap1: 0.8: 1" in the timeline information of w2, obtaining the content timeline information corresponding to the point-of-interest information: [chap1: 4, chap17: 1, chap256: 3, chap598: 1, chap660: 1].
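Merging "similar" time points at chapter granularity, as in the w1/w2 example above, can be sketched with chapter-level aggregation; the (chapter, fraction, frequency) triple representation is an assumption for illustration:

```python
from collections import defaultdict

def merge_by_chapter(timelines):
    """Merge time points that fall in the same chapter, summing their
    frequencies; the in-chapter fraction is dropped after merging."""
    merged = defaultdict(int)
    for timeline in timelines:
        for chapter, _fraction, freq in timeline:
            merged[chapter] += freq
    return sorted(merged.items(), key=lambda kv: int(kv[0][len("chap"):]))

w1 = [("chap1", 0.1, 3), ("chap17", 0.7, 1), ("chap256", 0.4, 2), ("chap598", 0.2, 1)]
w2 = [("chap1", 0.8, 1), ("chap256", 0.4, 1), ("chap660", 0.4, 1)]
print(merge_by_chapter([w1, w2]))
# → [('chap1', 4), ('chap17', 1), ('chap256', 3), ('chap598', 1), ('chap660', 1)]
```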
In some embodiments, the content timeline information further includes assignment information of the point-of-interest information for each time segment. In some embodiments, the assignment information of a time segment is the sum of the assignment information of all time points falling in that segment. For example, from the timeline information of the target key term w1, [chap1: 0.1: 3, chap17: 0.7: 1, chap256: 0.4: 2, chap598: 0.2: 1], and the timeline information of the target key term w2, [chap1: 0.8: 1, chap256: 0.4: 1, chap660: 0.4: 1], the content timeline information corresponding to the point-of-interest information is determined as [chap1: 4, chap17: 1, chap256: 3, chap598: 1, chap660: 1], where the frequency information "4" corresponding to the time segment "chap1: 4" is the sum of the frequencies of occurrence of the target key terms w1 and w2 in chapter 1. In some embodiments, the device 1 may determine important time segments in the content timeline information according to the assignment information, for example, determining the time segments whose assignment information is higher than a preset threshold as important, and then preferentially display the corresponding work content to the user, or display only the work content corresponding to the important time segments, helping the user quickly obtain the important work content corresponding to the content of interest.
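Selecting important time segments by an assignment-information threshold, as described above, reduces to a one-line filter; the threshold value is illustrative:

```python
def important_periods(content_timeline, threshold):
    """Keep time segments whose summed frequency exceeds a preset
    assignment-information threshold, so that their work content can
    be shown to the user preferentially."""
    return [(seg, freq) for seg, freq in content_timeline if freq > threshold]

content = [("chap1", 4), ("chap17", 1), ("chap256", 3), ("chap598", 1), ("chap660", 1)]
print(important_periods(content, threshold=2))  # → [('chap1', 4), ('chap256', 3)]
```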
In some embodiments, the step S162 includes: the device 1 performs clustering processing on a plurality of time point information in the one or more target term timeline information to obtain one or more clusters, where each cluster contains at least one time point information from the one or more target term timeline information; and generates content timeline information of the point of interest information in the intermediate work according to the one or more clusters, where each cluster corresponds to one time segment information in the content timeline information. In some embodiments, the device 1 determines the time segment corresponding to a cluster according to the time point information at the cluster's boundary. For example, if the device 1 determines that the boundary of one cluster consists of the time point information "chap1:0.5" of the target key term w1 and the time point information "chap2:0.5" of the target key term w3, it may determine that the time point information of all target key terms in the cluster lies between chapter 1 and chapter 2, and the time segment information corresponding to the cluster is "chap1:0.5 to chap2:0.5". The content timeline information of the point of interest information in the intermediate work is then generated according to the time segment information corresponding to all the clusters.
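One simple way to realize this clustering is one-dimensional gap-based grouping over a scalar time coordinate. The sketch below assumes chapter + relative position as that coordinate and a hypothetical max_gap parameter; the embodiment does not prescribe any particular clustering algorithm.

```python
def cluster_time_points(points, max_gap=1.0):
    """Group time points (chapter, position, count) into clusters:
    a new cluster starts whenever the gap between consecutive points,
    measured as chapter + position, exceeds max_gap."""
    ordered = sorted(points, key=lambda p: p[0] + p[1])
    clusters = []
    for point in ordered:
        t = point[0] + point[1]
        if clusters:
            last = clusters[-1][-1]
            if t - (last[0] + last[1]) <= max_gap:
                clusters[-1].append(point)
                continue
        clusters.append([point])
    return clusters

def segment_of(cluster):
    """A cluster's time segment runs from its first to its last point."""
    first, last = cluster[0], cluster[-1]
    return (first[0], first[1], last[0], last[1])

# Time points of w1 and w3 from the example, plus a distant point.
points = [(1, 0.5, 1), (2, 0.1, 2), (2, 0.5, 1), (17, 0.7, 1)]
print([segment_of(c) for c in cluster_time_points(points)])
# [(1, 0.5, 2, 0.5), (17, 0.7, 17, 0.7)]
```

The first cluster's segment, chap1:0.5 to chap2:0.5, matches the boundary example in the text.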
In some embodiments, the assignment information of the point of interest information in each time segment information of the content timeline information is determined based on the cluster corresponding to that time segment information and the target key terms corresponding to that time segment information. In some embodiments, the assignment information of the time segment information includes frequency information of the occurrence, within that time segment, of the target key terms corresponding to the point of interest information. The assignment information of the time segment information may be determined according to the time point information of all the target key terms contained in each cluster. For example, the device 1 determines that a cluster contains two time point information of the target key term w1, "chap1:0.5:1" and "chap2:0.1:2", and one time point information of the target key term w3, "chap2:0.5:1"; according to the assignment information "1", "2", and "1" corresponding to these three time point information, the assignment information of the corresponding time segment information is determined to be "4".
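The per-segment sum can be sketched directly, again assuming the (chapter, position, count) triple representation used above:

```python
def segment_assignment(cluster):
    """Assignment information of a time segment: the sum of the counts
    of every time point (chapter, position, count) in its cluster."""
    return sum(count for _chapter, _position, count in cluster)

# The cluster from the example: two points of w1 and one point of w3.
cluster = [(1, 0.5, 1), (2, 0.1, 2), (2, 0.5, 1)]
print(segment_assignment(cluster))  # 1 + 2 + 1 = 4
```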
In some embodiments, the key terms comprise a character object in the intermediate work, and the term timeline information corresponding to the character object comprises character timeline information of the character object in the intermediate work, where the character timeline information comprises one or more time point information of the character object in the intermediate work. In some embodiments, the device 1 performs person name recognition on the intermediate work and determines one or more character objects in the intermediate work. For example, the probability of each word serving as a name component is trained on a corpus of person names and used to compute the probability that a candidate field in the intermediate work is a person name, and the fields whose probability exceeds a predetermined probability threshold are taken as the recognized person names. It should be understood by those skilled in the art that the above method for recognizing person names is merely an example; other existing or future methods for recognizing person names, if applicable to the present application, are also included in the scope of the present application and are herein incorporated by reference. In some embodiments, the device 1 generates the character timeline information of the character object in the intermediate work according to the determined character object and the position information of the character object in the intermediate work.
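A toy version of the name-component scoring could look like the following. The component probabilities here are made-up values, not learned from a real name corpus, and the averaging scheme is only one possible way of combining component probabilities.

```python
def name_probability(field, component_prob):
    """Score a candidate field (a tuple of tokens) as a person name by
    averaging each token's probability of being a name component.
    Unknown tokens get a small default probability."""
    probs = [component_prob.get(token, 0.01) for token in field]
    return sum(probs) / len(probs)

def recognize_names(candidates, component_prob, threshold=0.5):
    """Keep candidate fields whose name probability exceeds the
    predetermined probability threshold."""
    return [" ".join(field) for field in candidates
            if name_probability(field, component_prob) >= threshold]

# Hypothetical component probabilities and candidate fields.
component_prob = {"zhang": 0.9, "san": 0.8, "train": 0.05}
candidates = [("zhang", "san"), ("train", "station")]
print(recognize_names(candidates, component_prob))  # ['zhang san']
```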
In some embodiments, the method further comprises step S17 (not shown): the device 1 obtains a plurality of target character timeline information corresponding to a plurality of target character objects in the intermediate work, where each target character timeline information corresponds to one of the plurality of target character objects; and generates character association timeline information of the plurality of target character objects in the intermediate work according to the plurality of target character timeline information, where the character association timeline information comprises one or more time segment information, and each time segment information comprises at least one time point information from at least one target character timeline information. In some embodiments, the device 1 determines the target character objects the user is interested in according to text information input by the user or selected by the user through clicking, long-pressing, or a gesture; alternatively, the device 1 collects the user's voice information through a microphone and determines the target character objects corresponding to the voice information through speech recognition. In some embodiments, the target character objects may also be determined by the device 1 based on the user's current authored content. In some embodiments, the device 1 determines the target character timeline information corresponding to the target character objects in the term timeline information base corresponding to the intermediate work, and merges the plurality of target character timeline information in the time dimension to generate the character association timeline information of the plurality of target character objects in the intermediate work; alternatively, it clusters a plurality of time point information in the plurality of target character timeline information to determine the character association timeline information of the plurality of target character objects in the intermediate work.
Here, the method of determining the target person object is the same as or substantially the same as the method of determining the point of interest information in the aforementioned step S14, and the manner of generating the person-associated timeline information is the same as or substantially the same as the method of generating the content timeline information in the aforementioned step S16, and therefore, the description thereof is omitted, and the method is incorporated herein by reference.
In some embodiments, the method further comprises step S18 (not shown): the device 1 generates character relationship graph information of the plurality of target character objects in the intermediate work according to the character association timeline information. In some embodiments, the device 1 determines character relationship information of the target character objects according to the work content information corresponding to the time segment information contained in the character association timeline information, and generates the character relationship graph information of the target character objects in the intermediate work according to the one or more character relationship information. For example, as shown in Table 2, the device 1 determines the character association timeline information [chap1, chap17, chap256, chap660] from the target character timeline information corresponding to the target character objects "Zhang San", "Li Si", and "Wang Wu", where "Zhang San" appears in chap1, chap17, and chap256, "Li Si" appears in chap17 and chap256, and "Wang Wu" appears in chap1 and chap660; the device 1 then obtains the work content information corresponding to the four time segment information in the character association timeline information. The device 1 may obtain text keywords of the work content information corresponding to each time segment information through an algorithm such as term frequency-inverse document frequency (TF-IDF) or TextRank, and then determine character relationship information according to the text keywords. For example, if the device 1 determines that a text keyword of chapter 1 is "Shanghai", it may determine that "Zhang San" and "Wang Wu" are associated through "Shanghai", and may add the text keyword "Shanghai" as the character relationship information of the target character objects "Zhang San" and "Wang Wu" to the character relationship graph information.
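The keyword extraction step can be sketched with a bare-bones TF-IDF implementation. The tokenization, the smoothed IDF formula, and the sample chapter texts are all illustrative assumptions; a real system would likely use a library implementation or TextRank as the text notes.

```python
import math
import re
from collections import Counter

def tfidf_keywords(docs, top_k=1):
    """Rank each document's words by term frequency times a smoothed
    inverse document frequency and return the top-k words per document."""
    tokenized = [re.findall(r"[a-z]+", doc.lower()) for doc in docs]
    n = len(docs)
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    keywords = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores = {w: (c / len(tokens)) * math.log((n + 1) / (df[w] + 1))
                  for w, c in tf.items()}
        keywords.append(sorted(scores, key=scores.get, reverse=True)[:top_k])
    return keywords

# Hypothetical work content for two time segments.
chapters = [
    "zhang san met wang wu in shanghai and shanghai was crowded",
    "zhang san waited for li si at the train station, a busy station",
]
print(tfidf_keywords(chapters, top_k=1))  # [['shanghai'], ['station']]
```

Words shared by every segment (here the character names) get zero IDF weight, so the segment-specific keywords such as "shanghai" and "station" surface as candidate character relationship information.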
Table 2: Example of target character objects and text keywords
Target character objects: "Zhang San", "Wang Wu"; text keyword: "Shanghai"
Target character objects: "Zhang San", "Li Si"; text keyword: "train station"
In some embodiments, the step S18 further includes: the device 1 predicts future character relationship graph information of the plurality of target character objects in the intermediate work according to the character relationship graph information and the character association timeline information. In some embodiments, the device 1 predicts the future character relationship graph information of the target character objects in the intermediate work according to a preset character development template, combined with the character relationship graph information and the character association timeline information. Alternatively, the device 1 generates future character relationship information for target character objects that do not yet contain certain character relationship information, according to the character relationship information of the target character objects in the character relationship graph information, and generates the future character relationship graph information of the plurality of target character objects in the intermediate work according to the future character relationship information. For example, referring to Table 2, the character relationship information of the target character objects "Zhang San" and "Li Si" includes "train station", while the target character objects "Zhang San" and "Wang Wu" do not contain the character relationship information "train station". The device 1 may use "train station" as future character relationship information of the target character objects "Zhang San" and "Wang Wu", and further generate future character relationship graph information, which may also include the text information corresponding to the time segment information to which the character relationship information "train station" belongs.
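The alternative prediction scheme, in which a pair of characters inherits relation keywords that other pairs already have, can be sketched as follows. Representing the graph as a mapping from character pairs to keyword sets is an assumption made for illustration.

```python
def predict_future_relations(relations):
    """For each character pair, propose as future character relationship
    information the keywords that other pairs already have but this
    pair still lacks."""
    all_keywords = set().union(*relations.values())
    return {pair: sorted(all_keywords - keywords)
            for pair, keywords in relations.items()
            if all_keywords - keywords}

# Character relationship graph built from the Table 2 example.
graph = {
    frozenset({"Zhang San", "Li Si"}): {"train station"},
    frozenset({"Zhang San", "Wang Wu"}): {"shanghai"},
}
future = predict_future_relations(graph)
print(future[frozenset({"Zhang San", "Wang Wu"})])  # ['train station']
```

As in the text, "train station" is proposed as future character relationship information for "Zhang San" and "Wang Wu" because the pair "Zhang San" and "Li Si" already shares it.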
In this embodiment, predicting the future character relationship graph information from the character relationship graph information and the character association timeline information can give the author a reference for the future direction of the work and improve the author's writing efficiency.
In some embodiments, the method further includes step S19 (not shown): the device 1 detects, during the authoring of the intermediate work, whether the intermediate work satisfies an authoring assistance trigger condition; if so, it determines one or more auxiliary key terms corresponding to the authoring assistance trigger condition, and provides authoring reference information corresponding to the intermediate work according to the term timeline information corresponding to the auxiliary key terms.
In some embodiments, the authoring assistance trigger condition includes at least any one of: the user's text or voice input rate is less than or equal to a preset text or voice input rate threshold; the time for which the user has not input text or voice is greater than or equal to a preset time threshold; the decrease in the user's text or voice input rate within a preset time period is greater than or equal to a preset rate decrease threshold; the user performs a search in the intermediate work; the number of words of text deleted by the user at one time is greater than or equal to a preset deleted word count threshold; the number of times the user deletes text within a preset time period is greater than or equal to a preset deletion count threshold; the duration of voice deleted by the user at one time is greater than or equal to a preset deletion duration threshold. For example, when it is detected that the user has not input text or voice for a long time, or that the text or voice input rate is less than or equal to the preset threshold, or that the input rate is decreasing too quickly, it may be assumed that the user lacks inspiration, which lowers authoring efficiency. As another example, when the user is detected to be deleting current authored content extensively or frequently, the user may be dissatisfied with the current content or may not know how to express accurately what they want to write. When these conditions are detected, the device 1 may provide the user with corresponding authoring reference information to help the user author and to improve authoring efficiency. Furthermore, the user may actively trigger this function by searching within the intermediate work, so that the device 1 can provide help in a timely manner.
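A trigger check over these conditions could be sketched as a simple disjunction. The metric names and threshold values below are illustrative assumptions; the embodiment only specifies the kinds of conditions, not their encoding.

```python
def should_assist(metrics, thresholds):
    """Return True if any authoring assistance trigger condition holds."""
    return (metrics["input_rate"] <= thresholds["min_input_rate"]
            or metrics["idle_seconds"] >= thresholds["max_idle_seconds"]
            or metrics["rate_drop"] >= thresholds["max_rate_drop"]
            or metrics["chars_deleted_at_once"] >= thresholds["max_deleted_chars"]
            or metrics["deletions_in_window"] >= thresholds["max_deletions"]
            or metrics["user_searched"])

# Hypothetical thresholds and a session where typing has slowed down.
thresholds = {"min_input_rate": 5, "max_idle_seconds": 120,
              "max_rate_drop": 10, "max_deleted_chars": 200,
              "max_deletions": 5}
stuck = {"input_rate": 2, "idle_seconds": 0, "rate_drop": 0,
         "chars_deleted_at_once": 0, "deletions_in_window": 0,
         "user_searched": False}
print(should_assist(stuck, thresholds))  # True: input rate below threshold
```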
In some embodiments, the auxiliary key terms include, but are not limited to, the terms currently being written by the user, key terms corresponding to one or more sentences of the user's most recently authored content, key terms used by the user when performing a search, or key terms corresponding to text or voice deleted by the user. The device 1 queries the term timeline information corresponding to the auxiliary key terms and provides the authoring reference information corresponding to the intermediate work according to that term timeline information.
In some embodiments, the providing authoring reference information corresponding to the intermediate work according to the term timeline information corresponding to the auxiliary key terms includes: the device 1 provides term timeline information corresponding to at least one of the one or more auxiliary key terms as authoring reference information corresponding to the intermediate work to the users participating in the authoring process; or merges, in the time dimension, term timeline information corresponding to at least two of the one or more auxiliary key terms and provides the merged timeline information as authoring reference information corresponding to the intermediate work to the users participating in the authoring process. For example, the term timeline information corresponding to the one or more auxiliary key terms may be provided to the user for reference either individually or merged as authoring reference information. As another example, the device 1 may use, as authoring reference information corresponding to the intermediate work, the term timeline information whose assignment information is greater than or equal to a first assignment information threshold, merge the remaining term timeline information in the time dimension as further authoring reference information, and provide both types of authoring reference information to the user for reference.
As another example, the device 1 may generate authoring reference timeline information from those time point information, among the term timeline information corresponding to all the auxiliary key terms, whose corresponding assignment information is greater than or equal to a second assignment information threshold, and provide it as authoring reference information corresponding to the intermediate work to the users participating in the authoring process.
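Filtering time points by the second assignment information threshold can be sketched as follows, again assuming the (chapter, position, count) triple representation; the threshold value is an illustrative assumption.

```python
def reference_timeline(term_timelines, threshold):
    """Build authoring reference timeline information from the time
    points, across all auxiliary key terms, whose assignment (count)
    meets the second assignment information threshold."""
    points = [(chapter, position, count)
              for timeline in term_timelines
              for chapter, position, count in timeline
              if count >= threshold]
    return sorted(points)

# Hypothetical term timelines for two auxiliary key terms.
w1 = [(1, 0.1, 3), (17, 0.7, 1), (256, 0.4, 2)]
w2 = [(1, 0.8, 1), (256, 0.4, 1)]
print(reference_timeline([w1, w2], 2))  # [(1, 0.1, 3), (256, 0.4, 2)]
```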
Here, the merging manner of the term timeline information is the same as or substantially the same as the method of generating the content timeline information in the aforementioned step S16, and the method of generating the authoring reference timeline information according to the time point information is the same as or substantially the same as the method of generating the term timeline information in the aforementioned step S13, so that the description thereof is omitted and is herein incorporated by reference.
Fig. 2 shows a block diagram of a device for generating context information of authored content during an authoring process according to an embodiment of the present application. The device 1 includes a one-one module 11, a one-two module 12, and a one-three module 13. The one-one module 11 processes an intermediate work in an authoring process to obtain one or more first terms of the intermediate work; the one-two module 12 screens one or more key terms corresponding to the intermediate work from the one or more first terms; the one-three module 13 generates term timeline information of the key terms in the intermediate work according to the position information of the key terms in the intermediate work, where the term timeline information includes one or more time point information of the key terms in the intermediate work. Here, the specific implementations of the one-one module 11, the one-two module 12, and the one-three module 13 shown in Fig. 2 are respectively the same as or similar to the embodiments of the foregoing step S11, step S12, and step S13, and are therefore not repeated here and are incorporated herein by reference.
In some embodiments, the device 1 further comprises a one-four module 14 (not shown), a one-five module 15 (not shown), and a one-six module 16 (not shown). The one-four module 14 acquires the point of interest information of the users participating in the authoring process with respect to the intermediate work; the one-five module 15 determines one or more target key terms matching the point of interest information from the one or more key terms; the one-six module 16 generates content timeline information of the point of interest information in the intermediate work according to the term timeline information corresponding to each of the one or more target key terms. Here, the embodiments of the one-four module 14, the one-five module 15, and the one-six module 16 are the same as or similar to the embodiments of the foregoing step S14, step S15, and step S16, respectively, and are therefore not repeated here and are incorporated herein by reference.
In some embodiments, the one-six module 16 includes a one-six-one unit 161 (not shown) and a one-six-two unit 162 (not shown). The one-six-one unit 161 obtains one or more target term timeline information corresponding to the one or more target key terms, where each target term timeline information corresponds to one of the one or more target key terms; the one-six-two unit 162 merges the one or more target term timeline information in the time dimension to obtain content timeline information of the point of interest information in the intermediate work, where the content timeline information includes one or more time segment information, and each time segment information includes at least one time point information from at least one target term timeline information. Here, the embodiments of the one-six-one unit 161 and the one-six-two unit 162 are the same as or similar to the embodiments of the foregoing step S161 and step S162, and are therefore not repeated here and are incorporated herein by reference.
In some embodiments, the device 1 further comprises a one-seven module 17 (not shown). The one-seven module 17 obtains a plurality of target character timeline information corresponding to a plurality of target character objects in the intermediate work, where each target character timeline information corresponds to one of the plurality of target character objects; and generates character association timeline information of the plurality of target character objects in the intermediate work according to the plurality of target character timeline information, where the character association timeline information includes one or more time segment information, and each time segment information includes at least one time point information from at least one target character timeline information. Here, the embodiment of the one-seven module 17 is the same as or similar to the embodiment of the foregoing step S17, and is therefore not repeated here and is incorporated herein by reference.
In some embodiments, the device 1 further comprises a one-eight module 18 (not shown). The one-eight module 18 generates the character relationship graph information of the plurality of target character objects in the intermediate work according to the character association timeline information. Here, the embodiment of the one-eight module 18 is the same as or similar to the embodiment of the foregoing step S18, and is therefore not repeated here and is incorporated herein by reference.
In some embodiments, the device 1 further comprises a one-nine module 19 (not shown). The one-nine module 19 detects, during the authoring of the intermediate work, whether the intermediate work satisfies an authoring assistance trigger condition; if so, it determines one or more auxiliary key terms corresponding to the authoring assistance trigger condition, and provides authoring reference information corresponding to the intermediate work according to the term timeline information corresponding to the auxiliary key terms. Here, the embodiment of the one-nine module 19 is the same as or similar to the embodiment of the foregoing step S19, and is therefore not repeated here and is incorporated herein by reference.
FIG. 3 illustrates an exemplary system that can be used to implement the various embodiments described herein. In some embodiments, as shown in FIG. 3, the system 300 can be implemented as any of the devices in the various embodiments described. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
In addition to the methods and apparatus described in the embodiments above, the present application also provides a computer readable storage medium storing computer code that, when executed, performs the method as described in any of the preceding claims.
The present application also provides a computer program product, which when executed by a computer device, performs the method of any of the preceding claims.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (15)

1. A method for generating context information of authored content during an authoring process, wherein the method comprises:
processing an intermediate work in an authoring process to obtain one or more first terms of the intermediate work;
screening one or more key terms corresponding to the intermediate work from the one or more first terms;
generating word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information comprises one or more time point information of the key words in the intermediate work.
2. The method of claim 1, wherein the method further comprises:
obtaining point of interest information of users participating in the authoring process with respect to the intermediate work;
determining, from the one or more key terms, one or more target key terms matching the point of interest information;
and generating content timeline information of the point of interest information in the intermediate work according to term timeline information corresponding to each target key term in the one or more target key terms.
3. The method of claim 2, wherein the generating content timeline information of the point of interest information in the intermediate work according to term timeline information corresponding to each target key term in the one or more target key terms comprises:
acquiring one or more target term timeline information corresponding to the one or more target key terms, wherein each target term timeline information corresponds to one target key term in the one or more target key terms;
and merging the one or more target term timeline information in the time dimension to obtain content timeline information of the point of interest information in the intermediate work, wherein the content timeline information comprises one or more pieces of time segment information, and each piece of time segment information comprises at least one piece of time point information of at least one target term timeline information.
4. The method of claim 3, wherein the content timeline information further comprises assignment information of the point of interest information at each piece of time segment information.
5. The method of claim 3 or 4, wherein the merging the one or more target term timeline information in the time dimension to obtain content timeline information of the point of interest information in the intermediate work, the content timeline information comprising one or more pieces of time segment information and each piece of time segment information comprising at least one piece of time point information of at least one target term timeline information, comprises:
performing clustering on a plurality of pieces of time point information in the one or more target term timeline information to obtain one or more clusters, wherein each cluster contains at least one piece of time point information in the one or more target term timeline information;
and generating content timeline information of the point of interest information in the intermediate work according to the one or more clusters, wherein each cluster corresponds to one piece of time segment information in the content timeline information.
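The claim leaves the clustering method open. One simple realization is one-dimensional gap-based clustering: pool the time points of all target term timelines, then start a new cluster whenever the gap to the previous point exceeds a threshold. The `max_gap` threshold and the segment representation below are illustrative assumptions:

```python
def cluster_time_points(target_timelines, max_gap=5):
    """Merge target term timelines in the time dimension: pool all time
    points, group points whose gap is <= max_gap into one cluster, and
    emit one time segment of the content timeline per cluster."""
    # Pool (time_point, term) pairs from every target term timeline.
    points = sorted((tp, term) for term, tps in target_timelines.items()
                    for tp in tps)
    clusters, current = [], [points[0]]
    for pt in points[1:]:
        if pt[0] - current[-1][0] <= max_gap:
            current.append(pt)       # same cluster: gap small enough
        else:
            clusters.append(current) # gap too large: close the cluster
            current = [pt]
    clusters.append(current)
    # Content timeline: one (start, end) segment per cluster.
    return [(c[0][0], c[-1][0]) for c in clusters]

segments = cluster_time_points({"hero": [1, 3, 40], "dragon": [5, 42]})
```

Any standard clustering over one dimension (e.g. density-based clustering) would satisfy the claim equally; the gap heuristic is chosen only for brevity.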
6. The method according to claim 5, wherein the assignment information of the point of interest information at each piece of time segment information in the content timeline information is determined based on each piece of time point information in the cluster corresponding to the time segment information and the target key term corresponding to the time point information.
7. The method of any of claims 2 to 6, wherein the determining, from the one or more key terms, one or more target key terms matching the point of interest information comprises:
determining target scene information corresponding to the point of interest information;
and determining, from the one or more key terms, one or more target key terms matching the point of interest information according to scene label information corresponding to the key terms, wherein the scene label information corresponding to each target key term matches the target scene information.
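The two-step matching of claim 7 (point of interest → target scene, then scene label → target key terms) can be sketched as a pair of lookups. The mapping tables and exact-equality match below are hypothetical; the claim only requires that each selected term's scene label "match" the target scene:

```python
def match_target_key_terms(point_of_interest, key_term_scene_labels,
                           poi_to_scene):
    """Map the point of interest to target scene information, then keep
    the key terms whose scene label matches that target scene."""
    target_scene = poi_to_scene[point_of_interest]
    return [term for term, scene in key_term_scene_labels.items()
            if scene == target_scene]

targets = match_target_key_terms(
    "battle scenes",
    {"sword": "combat", "dragon": "combat", "feast": "court"},
    {"battle scenes": "combat"})
```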
8. The method of any of claims 1 to 7, wherein the key terms comprise a character object in the intermediate work, and the term timeline information corresponding to the character object comprises character timeline information of the character object in the intermediate work, wherein the character timeline information comprises one or more pieces of time point information of the character object in the intermediate work.
9. The method of claim 8, wherein the method further comprises:
obtaining a plurality of target character timeline information corresponding to a plurality of target character objects in the intermediate work, wherein each target character timeline information corresponds to one target character object in the plurality of target character objects;
and generating character association timeline information of the plurality of target character objects in the intermediate work according to the plurality of target character timeline information, wherein the character association timeline information comprises one or more pieces of time segment information, and each piece of time segment information comprises at least one piece of time point information in at least one target character timeline information.
10. The method of claim 9, wherein the method further comprises:
generating character relationship graph information of the plurality of target character objects in the intermediate work according to the character association timeline information.
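The claim does not fix how association timeline segments become a relationship graph. One plausible reading, sketched below under the assumption that each segment lists the target character objects appearing in it, is a co-occurrence graph whose edge weight counts shared segments (the weighting scheme is illustrative):

```python
from itertools import combinations
from collections import Counter

def build_relationship_graph(association_segments):
    """Turn character association timeline segments into a relationship
    graph: for each segment, every pair of characters appearing together
    gains one unit of edge weight."""
    graph = Counter()
    for characters in association_segments:
        for a, b in combinations(sorted(set(characters)), 2):
            graph[(a, b)] += 1  # undirected edge, keyed in sorted order
    return dict(graph)

graph = build_relationship_graph([
    ["Alice", "Bob"], ["Alice", "Bob", "Carol"], ["Bob", "Carol"]])
```

For the prediction step of claim 11, the same weighted edges could feed a link-prediction heuristic over the graph, though the claim leaves that method open as well.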
11. The method of claim 10, wherein the generating character relationship graph information of the plurality of target character objects in the intermediate work according to the character association timeline information further comprises:
predicting future character relationship graph information of the plurality of target character objects in the intermediate work according to the character relationship graph information and the character association timeline information.
12. The method of any of claims 1 to 11, wherein the method further comprises:
detecting, during the authoring process of the intermediate work, whether the intermediate work meets an auxiliary authoring trigger condition;
if so, determining one or more auxiliary key terms corresponding to the auxiliary authoring trigger condition, and providing authoring reference information corresponding to the intermediate work according to term timeline information corresponding to the auxiliary key terms.
13. The method of claim 12, wherein the providing authoring reference information corresponding to the intermediate work according to term timeline information corresponding to the auxiliary key terms comprises:
taking term timeline information corresponding to at least one auxiliary key term of the one or more auxiliary key terms as authoring reference information corresponding to the intermediate work, and providing the authoring reference information to users participating in the authoring process; or
combining, in the time dimension, term timeline information corresponding to at least two auxiliary key terms of the one or more auxiliary key terms, and providing the combined timeline information as authoring reference information corresponding to the intermediate work to users participating in the authoring process.
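The second branch of claim 13, combining several auxiliary key terms' timelines in the time dimension, amounts to an ordered merge of per-term event streams. A minimal sketch, assuming each timeline is a list of time points and the combined reference is a chronologically ordered list of (time point, term) events:

```python
import heapq

def combine_auxiliary_timelines(aux_timelines):
    """Combine the term timelines of two or more auxiliary key terms
    along the time dimension into one ordered reference timeline."""
    # One sorted (time_point, term) stream per auxiliary key term.
    streams = [[(tp, term) for tp in sorted(tps)]
               for term, tps in aux_timelines.items()]
    # heapq.merge lazily interleaves the sorted streams in time order.
    return list(heapq.merge(*streams))

reference = combine_auxiliary_timelines({"castle": [2, 9], "king": [4]})
```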
14. An apparatus for generating context information of authored content in an authoring process, the apparatus comprising:
a processor, and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the method of any of claims 1 to 13.
15. A computer-readable medium storing instructions that, when executed by a computer, cause the computer to perform the method of any of claims 1 to 13.
CN202011545832.3A 2020-12-23 2020-12-23 Method and apparatus for generating context information of authored content during authoring process Active CN112685534B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011545832.3A CN112685534B (en) 2020-12-23 2020-12-23 Method and apparatus for generating context information of authored content during authoring process
PCT/CN2021/119605 WO2022134683A1 (en) 2020-12-23 2021-09-22 Method and device for generating context information of written content in writing process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011545832.3A CN112685534B (en) 2020-12-23 2020-12-23 Method and apparatus for generating context information of authored content during authoring process

Publications (2)

Publication Number Publication Date
CN112685534A true CN112685534A (en) 2021-04-20
CN112685534B CN112685534B (en) 2022-12-30

Family

ID=75451507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011545832.3A Active CN112685534B (en) 2020-12-23 2020-12-23 Method and apparatus for generating context information of authored content during authoring process

Country Status (2)

Country Link
CN (1) CN112685534B (en)
WO (1) WO2022134683A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420553A (en) * 2021-07-21 2021-09-21 北京小米移动软件有限公司 Text generation method and device, storage medium and electronic equipment
WO2022134683A1 (en) * 2020-12-23 2022-06-30 上海掌门科技有限公司 Method and device for generating context information of written content in writing process

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334784A (en) * 2008-07-30 2008-12-31 施章祖 Computer auxiliary report and knowledge base generation method
US20090055389A1 (en) * 2007-08-20 2009-02-26 Google Inc. Ranking similar passages
CN102572356A (en) * 2012-01-16 2012-07-11 华为技术有限公司 Conference recording method and conference system
CN103324718A (en) * 2013-06-25 2013-09-25 百度在线网络技术(北京)有限公司 Topic venation digging method and system based on massive searching logs
US20150302084A1 (en) * 2014-04-17 2015-10-22 Robert Stewart Data mining apparatus and method
CN107704572A (en) * 2017-09-30 2018-02-16 北京奇虎科技有限公司 The creation angle method for digging and device of people entities
CN108595403A (en) * 2018-04-28 2018-09-28 掌阅科技股份有限公司 Processing method, computing device for assisting writing and storage medium
CN109508448A (en) * 2018-07-17 2019-03-22 网易传媒科技(北京)有限公司 Short information method, medium, device are generated based on long article and calculate equipment
CN109522390A (en) * 2018-11-14 2019-03-26 山东大学 A kind of search result methods of exhibiting and device
CN109522402A (en) * 2018-10-22 2019-03-26 国家电网有限公司 A kind of abstract extraction method and storage medium based on power industry characteristic key words
CN110175220A (en) * 2019-05-16 2019-08-27 镇江市高等专科学校 A kind of file similarity measure method and system based on the distribution of keyword positional structure
CN110457439A (en) * 2019-08-06 2019-11-15 北京如优教育科技有限公司 One-stop intelligent writes householder method, device and system
CN110851538A (en) * 2020-01-15 2020-02-28 支付宝(杭州)信息技术有限公司 Block chain-based content generation method, device, equipment and storage medium
CN110851797A (en) * 2020-01-13 2020-02-28 支付宝(杭州)信息技术有限公司 Block chain-based work creation method and device and electronic equipment
CN111240673A (en) * 2020-01-08 2020-06-05 腾讯科技(深圳)有限公司 Interactive graphic work generation method, device, terminal and storage medium
CN111368063A (en) * 2020-03-06 2020-07-03 腾讯科技(深圳)有限公司 Information pushing method based on machine learning and related device
CN111680152A (en) * 2020-06-10 2020-09-18 创新奇智(成都)科技有限公司 Method and device for extracting abstract of target text, electronic equipment and storage medium
CN111753508A (en) * 2020-06-29 2020-10-09 网易(杭州)网络有限公司 Method and device for generating content of written works and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070005698A1 (en) * 2005-06-29 2007-01-04 Manish Kumar Method and apparatuses for locating an expert during a collaboration session
CN112685534B (en) * 2020-12-23 2022-12-30 上海掌门科技有限公司 Method and apparatus for generating context information of authored content during authoring process


Also Published As

Publication number Publication date
CN112685534B (en) 2022-12-30
WO2022134683A1 (en) 2022-06-30

Similar Documents

Publication Publication Date Title
US20200279002A1 (en) Method and system for processing unclear intent query in conversation system
CN107256267B (en) Query method and device
US9923860B2 (en) Annotating content with contextually relevant comments
CN107832433B (en) Information recommendation method, device, server and storage medium based on conversation interaction
JP2019511036A (en) System and method for linguistic feature generation across multiple layer word representations
US20110252018A1 (en) System and method for creating search index on cloud database
CN107948730B (en) Method, device and equipment for generating video based on picture and storage medium
JPWO2012147428A1 (en) Text clustering apparatus, text clustering method, and program
CN112685534B (en) Method and apparatus for generating context information of authored content during authoring process
US20140214402A1 (en) Implementation of unsupervised topic segmentation in a data communications environment
CN107861948B (en) Label extraction method, device, equipment and medium
CN114328996A (en) Method and device for publishing information
CN110472013A (en) A kind of hot topic update method, device and computer storage medium
CN111078849B (en) Method and device for outputting information
US20150339310A1 (en) System for recommending related-content analysis in an authoring environment
CN110768894B (en) Method and equipment for deleting session message
CN112784016A (en) Method and equipment for detecting speech information
US20210150270A1 (en) Mathematical function defined natural language annotation
CN111723235B (en) Music content identification method, device and equipment
CN115269889B (en) Clip template searching method and device
US20220189472A1 (en) Recognition and restructuring of previously presented materials
CN111488450A (en) Method and device for generating keyword library and electronic equipment
US9946762B2 (en) Building a domain knowledge and term identity using crowd sourcing
CN103870476A (en) Retrieval method and device
CN106959945B (en) Method and device for generating short titles for news based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant