WO2022134683A1 - Method and device for generating context information of created content during the authoring process - Google Patents

Method and device for generating context information of created content during the authoring process

Info

Publication number
WO2022134683A1
WO2022134683A1 (PCT/CN2021/119605)
Authority
WO
WIPO (PCT)
Prior art keywords
information
timeline
target
word
intermediate work
Prior art date
Application number
PCT/CN2021/119605
Other languages
English (en)
French (fr)
Inventor
程翰
Original Assignee
上海掌门科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海掌门科技有限公司
Publication of WO2022134683A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/33 Information retrieval of unstructured textual data; Querying
    • G06F16/35 Information retrieval of unstructured textual data; Clustering; Classification
    • G06F40/216 Natural language analysis; Parsing using statistical methods
    • G06F40/30 Handling natural language data; Semantic analysis

Definitions

  • the present application relates to the field of communications, and in particular, to a technology for generating context information of authored content in the authoring process.
  • An object of the present application is to provide a method and device for generating context information of created content during the authoring process.
  • a method for generating context information of created content during the authoring process, the method comprising:
  • processing the intermediate work in the creation process to obtain one or more first words of the intermediate work;
  • filtering the one or more first words to obtain one or more key words corresponding to the intermediate work;
  • generating word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information includes one or more time point information of the key words in the intermediate work.
  • a device for generating context information of created content during the authoring process, the device comprising: a processor; and
  • a memory arranged to store computer-executable instructions which, when executed, cause the processor to:
  • process the intermediate work in the creation process to obtain one or more first words of the intermediate work;
  • filter the one or more first words to obtain one or more key words corresponding to the intermediate work;
  • generate word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information includes one or more time point information of the key words in the intermediate work.
  • a computer-readable medium storing instructions that, when executed, cause a system to:
  • process the intermediate work in the creation process to obtain one or more first words of the intermediate work;
  • filter the one or more first words to obtain one or more key words corresponding to the intermediate work;
  • generate word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information includes one or more time point information of the key words in the intermediate work.
  • a device for generating context information of created content during the authoring process, the device comprising:
  • a first module configured to process the intermediate work in the creation process to obtain one or more first words of the intermediate work;
  • a second module configured to filter the one or more first words to obtain one or more key words corresponding to the intermediate work;
  • a third module configured to generate word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information includes one or more time point information of the key words in the intermediate work.
  • compared with the prior art, the present application obtains one or more first words of the intermediate work by performing word segmentation on the intermediate work in the creation process, screens out one or more key words corresponding to the intermediate work from the first words, and generates word timeline information of each key word in the intermediate work according to the chapter position information of the key word in the intermediate work. This helps users participating in the creation of the work to quickly and comprehensively sort out the context of the intermediate work, facilitates information lookup by those users, and reduces the cumbersome operations required when reviewing and sorting out the intermediate work.
  • FIG. 1 shows a flowchart of a method for generating context information of created content in an authoring process according to an embodiment of the present application
  • FIG. 2 shows a structural diagram of a device for generating context information of created content in an authoring process according to an embodiment of the present application
  • FIG. 3 illustrates an exemplary system that may be used to implement various embodiments described in this application.
  • the terminal, the device serving the network, and the trusted party all include one or more processors (for example, a central processing unit (CPU)), an input/output interface, a network interface, and memory.
  • memory may include non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory.
  • Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • the equipment referred to in this application includes, but is not limited to, user equipment, network equipment, or equipment formed by integrating user equipment and network equipment through a network.
  • the user equipment includes, but is not limited to, any mobile electronic product that can perform human-computer interaction with the user (for example, through a touchpad), such as a smartphone or a tablet computer, and the mobile electronic product may use any operating system, such as the Android operating system or the iOS operating system.
  • the network device includes an electronic device that can automatically perform numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, and the like.
  • the network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a kind of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers.
  • the network includes but is not limited to the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, a wireless ad hoc network (Ad Hoc network), and the like.
  • the device may also be a program running on the user equipment, the network equipment, or a device formed by integrating user equipment with network equipment, or on a touch terminal, or on a device formed by integrating network equipment with a touch terminal through a network.
  • Fig. 1 shows a flow chart of a method for generating context information of created content in an authoring process according to an embodiment of the present application.
  • the method includes step S11, step S12, and step S13.
  • step S11, the device 1 processes the intermediate work in the creation process to obtain one or more first words of the intermediate work; in step S12, the device 1 filters the one or more first words to obtain one or more key words corresponding to the intermediate work; in step S13, the device 1 generates word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information includes one or more time point information of the key words in the intermediate work.
  • step S11 the device 1 processes the intermediate work in the creation process to obtain one or more first words of the intermediate work.
  • the works include but are not limited to text works such as novels and plays, and audio-visual works such as radio dramas and movies.
  • the intermediate works include works that are being created and have not yet been completed. Taking novels as an example, the intermediate works include but are not limited to the completed chapter texts in the unfinished novels, the completed texts in the chapters the user is writing, and the like.
  • the present application will take the creation of novels and other text works as examples to illustrate each embodiment; those skilled in the art should understand that the following embodiments can also be applied to other types of works.
  • the device 1 can obtain the intermediate works uploaded by the user, or query the work database according to the work identification information corresponding to the works being created by the user to obtain the corresponding intermediate works.
  • for a text-type intermediate work, the processing includes performing word segmentation on the intermediate work to obtain one or more first words. Further, the processing may also include performing part-of-speech analysis on the segmentation result, so as to ensure that the obtained first words are all nouns and to improve the efficiency of generating the context information of the intermediate work; or, the processing may also include performing character name recognition on the segmentation result to obtain first words containing character objects, which facilitates sorting out the context information of the character objects in the intermediate work.
  • for an audio-visual type intermediate work, the processing includes: first converting the audio information included in the intermediate work into text information, and then processing the text information to obtain one or more first words; the way the text information is processed is the same as or similar to the processing of a text-type intermediate work in the foregoing embodiment, so it is not repeated here and is incorporated herein by reference.
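As an illustration of this word-extraction step, the following is a minimal sketch that segments the text of an intermediate work and keeps noun or person-name tokens; it assumes the third-party jieba segmentation library, and the part-of-speech flags ("n*" for nouns, "nr" for person names) follow jieba's tag set rather than anything specified in the application.

```python
import jieba.posseg as pseg  # third-party segmenter, assumed here for illustration

def extract_first_words(intermediate_text, nouns_only=True):
    """Segment the intermediate work and keep noun tokens as candidate first words."""
    first_words = []
    for token in pseg.cut(intermediate_text):
        # token.flag is the part-of-speech tag; flags beginning with 'n' mark nouns,
        # and 'nr' specifically marks recognized person names.
        if not nouns_only or token.flag.startswith("n"):
            first_words.append(token.word)
    return first_words

def extract_character_names(intermediate_text):
    """Keep only tokens tagged as person names, for character-centric context sorting."""
    return [t.word for t in pseg.cut(intermediate_text) if t.flag == "nr"]
```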
  • step S12 the device 1 selects one or more key words corresponding to the intermediate work from the one or more first words.
  • the device 1 filters and obtains one or more key words corresponding to the intermediate work according to the cumulative word frequency of each first word in the intermediate work or the scene label information to which each first word belongs. For example, the device 1 takes the first words whose cumulative word frequency is greater than or equal to a predetermined threshold as the key words corresponding to the intermediate work. Or, the device 1 filters, according to the scene label information to which each first word belongs, one or more key words that match the work label information corresponding to the intermediate work. The scene label information to which a first word belongs may be determined by matching in the label information database according to the word features corresponding to the first word.
  • the label information library contains the mapping relationship between word features and scene label information.
  • for example, the scene label information corresponding to words containing "knife" is "weapon", and the scene label information corresponding to words containing "car" is "traffic".
  • the work tag information corresponding to the intermediate work may be determined according to the work type. Different work tag information corresponds to different scene tag information. For example, the tag information of Novel A is “Martial Arts”, and the matching scene tag information is “weapon”, “martial arts” and so on.
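A minimal sketch of this screening step is given below; the frequency threshold, the feature-to-scene-tag table, and the work-tag-to-scene mapping are illustrative assumptions rather than values taken from the application.

```python
from collections import Counter

# Illustrative stand-ins for the label information database and the work-tag mapping.
SCENE_TAG_DB = {"knife": "weapon", "car": "traffic"}               # word feature -> scene tag
WORK_TAG_TO_SCENES = {"martial arts": {"weapon", "martial arts"}}  # work tag -> matching scene tags

def filter_key_words(first_words, work_tag, freq_threshold=20):
    """first_words: list of segmented words; returns the key words of the intermediate work."""
    counts = Counter(first_words)
    # Rule 1: cumulative word frequency greater than or equal to a predetermined threshold.
    by_frequency = {w for w, c in counts.items() if c >= freq_threshold}
    # Rule 2: the word's scene tag matches the scene tags of the work's label information.
    wanted_scenes = WORK_TAG_TO_SCENES.get(work_tag, set())
    by_scene = {w for w in counts
                if any(feature in w and tag in wanted_scenes
                       for feature, tag in SCENE_TAG_DB.items())}
    return by_frequency | by_scene
```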
  • step S13, the device 1 generates the word timeline information of the key word in the intermediate work according to the position information of the key word in the intermediate work, wherein the word timeline information includes one or more time point information of the key word in the intermediate work.
  • the location information includes, but is not limited to, chapter location information or time progress information.
  • the chapter location information includes the intra-chapter location information of the chapter where the key word is located, for example, the key word is located at 50% of chapter 8; or the location information, within the intermediate work, of the chapter where the key word is located, for example, the key word is in chapter 8 and the novel currently has 800 chapters, so the key word is at the 1% mark of the novel.
  • the time progress information includes the playback time information of the audio information corresponding to the key word in the intermediate work.
  • for example, if the audio information corresponding to a key word is played at 3′15″ in a certain radio drama, the position information of the key word can be recorded as 3′15″; or, the time progress information includes the playback progress information of the audio information corresponding to the key word in the intermediate work, for example, if the audio information corresponding to a key word is played at the 5% mark of a certain radio drama, the position information of the key word can be recorded as 5%.
  • the device 1 generates word timeline information of the key words in the intermediate work according to the chapter position information and the order of the chapters in the intermediate work. For example, if the key word w in novel A appears at 23% of chapter 1, 85% of chapter 75, 10% of chapter 366, and 77% of chapter 366, the word timeline corresponding to w is generated in chapter order: [chap1:0.23, chap75:0.85, chap366:0.10, chap366:0.77].
  • the device 1 generates word timeline information of the key words in the intermediate work according to the chapter position information and the completion time sequence of each chapter in the intermediate work.
  • for example, the key word w in novel A appears at 23% of chapter 1, 85% of chapter 75, 10% of chapter 366, and 77% of chapter 366, and the text of chapter 75 was completed after the text of chapter 366;
  • the word timeline corresponding to w is generated according to the chapter completion time order: [chap1: 0.23, chap366: 0.10, chap366: 0.77, chap75: 0.85].
  • the word timeline information further includes assignment information of each time point information of the key word in the intermediate work.
  • the assignment information includes frequency information of the occurrences of the key word at this time point.
  • the device 1 determines that the chapters in which the keyword w appears in the novel A are chapters 1, 5...
  • the corresponding word timeline can be: [chap1:3,chap5:1,...chapn:m].
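The timeline construction just described can be sketched as follows; the (chapter, progress) occurrence format and the ordering key are assumptions chosen to mirror the examples above, and sorting by a chapter completion timestamp instead of the chapter number would give the completion-time variant.

```python
def build_word_timeline(occurrences):
    """occurrences: iterable of (chapter_index, in_chapter_progress) pairs, e.g. (1, 0.23)."""
    timeline = {}
    # Sorting by chapter index gives the chapter-order timeline; sorting by a completion
    # timestamp attached to each occurrence would give the completion-time-order variant.
    for chapter, progress in sorted(occurrences):
        key = f"chap{chapter}:{progress:.2f}"
        # The assignment information of a time point is how often the key word occurs there.
        timeline[key] = timeline.get(key, 0) + 1
    return timeline

# build_word_timeline([(1, 0.23), (75, 0.85), (366, 0.10), (366, 0.77)])
# -> {'chap1:0.23': 1, 'chap75:0.85': 1, 'chap366:0.10': 1, 'chap366:0.77': 1}
```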
  • the method further includes: step S14 (not shown), in which the device 1 obtains the point of interest information of the users participating in the creation process regarding the intermediate work; step S15 (not shown), in which the device 1 determines one or more target key words matching the point of interest information from the one or more key words; and step S16 (not shown), in which the device 1 generates content timeline information of the point of interest information in the intermediate work according to the word timeline information corresponding to each target key word in the one or more target key words.
  • the device 1 obtains the point of interest information of the users participating in the creation process regarding the intermediate work.
  • the point of interest information includes content that the creator of the work pays attention to during the creation process and wishes to sort out into context information within the work, for example, a certain character or a certain item in the work.
  • the device 1 can help the user sort out the content of the work that the user is concerned about, so as to facilitate reviewing the preceding text and organizing creative ideas.
  • the point of interest information may be acquired according to a user's triggering operation.
  • the device 1 determines the point of interest information based on the text information selected by the user's click, long press or gesture; for another example, the device 1 collects the user's voice information through a microphone, and determines the point of interest information corresponding to the voice information through voice recognition.
  • the point of interest information may also be determined by the device 1 according to the user's current creation content.
  • for example, the device 1 determines the corresponding point of interest information according to the user's currently input content; for another example, the device 1 determines the corresponding point of interest information according to the chapter information of the chapter the user is currently writing (for example, chapter name information); for another example, the device 1 determines the corresponding point of interest information according to the one or more most recently completed chapters of the novel.
  • for example, the device 1 obtains a plurality of second words corresponding to the one or more chapters through word segmentation, and determines the corresponding point of interest information according to one or more second words whose frequency of occurrence in the one or more chapters exceeds a predetermined frequency threshold, or according to the one or more second words with the highest frequency in the one or more chapters.
  • the device 1 determines one or more target key words matching the point of interest information from the one or more key words.
  • the device 1 may obtain one or more attention words corresponding to the point of interest information through word segmentation, and determine, from the one or more key words, one or more target key words that match the one or more attention words.
  • or, the device 1 may determine the target scene information corresponding to the point of interest information according to the attention words or by performing semantic analysis on the point of interest information, and then determine the target key words that match the target scene information.
  • the step S15 includes: determining target scene information corresponding to the point of interest information; and determining, from the one or more key words, one or more target key words matching the point of interest information, wherein the scene label information corresponding to each target key word matches the target scene information.
  • the device 1 performs semantic analysis on the keyword or the paragraph in which the keyword is located, and matches and determines the scene tag information corresponding to the keyword in the tag information database. The device 1 determines the scene tag information matching the target scene information according to the target scene information corresponding to the point of interest information, and then determines the corresponding target keyword according to the scene tag information matching the target scene information.
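A minimal sketch of this matching step (S15) is shown below; the scene-tag lookup is passed in as a helper function and is an assumption for illustration, not an interface defined in the application.

```python
def select_target_key_words(attention_words, key_words, scene_tag_of=None, target_scene=None):
    """Return key words that match the point of interest, directly or via scene tags."""
    targets = {k for k in key_words if k in set(attention_words)}
    if scene_tag_of is not None and target_scene is not None:
        # Also keep key words whose scene label matches the target scene of the focus info.
        targets |= {k for k in key_words if scene_tag_of(k) == target_scene}
    return targets

# e.g. select_target_key_words(["sword"], ["sword", "horse"],
#                              scene_tag_of=lambda w: "weapon" if w == "sword" else "traffic",
#                              target_scene="weapon")
```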
  • step S16 the device 1 generates the content timeline information of the point of interest information in the intermediate work according to the word timeline information corresponding to each target keyword in the one or more target keywords.
  • the device 1 queries, according to the one or more target key words, the target word timeline information corresponding to each target key word in the intermediate work, and merges the target word timeline information to generate the content timeline information corresponding to the point of interest information.
  • the target word timeline information corresponding to the point of interest information acquired by the device 1 is shown in Table 1.
  • Device 1 combines the information of the same time points according to the target word timeline information, and generates the corresponding content timeline information: [chap1:3, chap10:4, chap17:1, chap188:6, chap256:2, chap344:4, chap598:4, chap660:1].
  • Table 1 Example table of target word timeline
  • the step S16 includes: step S161 (not shown), in which the device 1 acquires one or more target word timeline information corresponding to the one or more target key words, wherein each target word timeline information corresponds to one target key word in the one or more target key words; and step S162 (not shown), in which the device 1 merges the one or more target word timeline information according to the time dimension to obtain the content timeline information of the point of interest information in the intermediate work, wherein the content timeline information includes one or more time period information, and each time period information includes at least one time point information in the target word timeline information.
  • the device 1 may determine, in the word timeline information database corresponding to the intermediate work, one or more target word timeline information corresponding to the point of interest information, wherein each target word timeline information corresponds to one target key word among the one or more target key words. In some embodiments, the device 1 merges the same or similar time point information (for example, different progress points within the same chapter) in the target word timeline information to obtain the content timeline information corresponding to the point of interest information.
  • for example, the device 1 determines that the timeline information of the target key word w1 corresponding to the point of interest information is [chap1:0.1:3, chap17:0.7:1, chap256:0.4:2, chap598:0.2:1], and that the timeline information of the target key word w2 is [chap1:0.8:1, chap256:0.4:1, chap660:0.4:1]; the device 1 can merge "chap256:0.4:2" in the timeline information of w1 with "chap256:0.4:1" in the timeline information of w2, which lie at the same time point, to obtain the content timeline information corresponding to the point of interest information: [chap1:0.1:3, chap1:0.8:1, chap17:0.7:1, chap256:0.4:3, chap598:0.2:1, chap660:0.4:1].
  • the device 1 may also merge "chap1:0.1:3" in the timeline information of w1 with "chap1:0.8:1" in the timeline information of w2, which lie at similar time points (for example, in the same chapter), to obtain the content timeline information corresponding to the point of interest information: [chap1:4, chap17:1, chap256:3, chap598:1, chap660:1].
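The two merge variants in the examples above can be sketched as follows; the "chapN:progress" key format mirrors those examples, and chapter-level grouping is used as the notion of "similar" time points.

```python
from collections import defaultdict

def merge_timelines(timelines, by_chapter=False):
    """timelines: list of dicts like {"chap1:0.1": 3, "chap17:0.7": 1} mapping time points to counts."""
    merged = defaultdict(int)
    for timeline in timelines:
        for point, count in timeline.items():
            # Exact merging keys on the full time point; chapter-level merging treats
            # points in the same chapter as similar and keys on the chapter only.
            key = point.split(":")[0] if by_chapter else point
            merged[key] += count

    def order(key):  # sort by chapter number, then by in-chapter progress
        parts = key.split(":")
        return (int(parts[0][4:]), float(parts[1]) if len(parts) > 1 else 0.0)

    return dict(sorted(merged.items(), key=order))

w1 = {"chap1:0.1": 3, "chap17:0.7": 1, "chap256:0.4": 2, "chap598:0.2": 1}
w2 = {"chap1:0.8": 1, "chap256:0.4": 1, "chap660:0.4": 1}
# merge_timelines([w1, w2])                  -> "chap256:0.4" becomes 3, other points kept as-is
# merge_timelines([w1, w2], by_chapter=True) -> {"chap1": 4, "chap17": 1, "chap256": 3, "chap598": 1, "chap660": 1}
```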
  • the content timeline information further includes assignment information of the point of interest information in each time period information.
  • the assignment information of the point of interest information in each time period information includes the sum of assignment information corresponding to all the time point information in the time period information.
  • for example, given the timeline information of the target key word w1, [chap1:0.1:3, chap17:0.7:1, chap256:0.4:2, chap598:0.2:1], and the timeline information of the target key word w2, [chap1:0.8:1, chap256:0.4:1, chap660:0.4:1], the frequency information "4" corresponding to the time period information "chap1:4" is the sum of the occurrence frequencies of the target key words w1 and w2 in chapter 1.
  • the device 1 may determine the key time period information in the content timeline information according to the assignment information, for example, determining the time period information whose corresponding assignment information is higher than a preset assignment information threshold as the key time period information, and preferentially displaying the corresponding work content to the user, or displaying to the user only the work content corresponding to the key time period information, so as to help the user quickly obtain the key work content corresponding to the content of interest.
  • the step S162 includes: the device 1 performs clustering on the multiple time point information in the one or more target word timeline information to obtain one or more clusters, wherein each cluster includes at least one time point information in the timeline information of the one or more target words; and generates the content timeline information of the point of interest information in the intermediate work according to the one or more clusters, wherein each cluster corresponds to one time period information in the content timeline information.
  • the device 1 determines the time period corresponding to the cluster according to the time point information corresponding to the cluster boundary of the cluster.
  • for example, if the device 1 determines that the cluster boundaries of a cluster are the time point information "chap1:0.5" of the target key word w1 and the time point information "chap2:0.5" of the target key word w3, it can determine that the time point information corresponding to all the target key words in the cluster lies between 50% of chapter 1 and 50% of chapter 2, and the time period information corresponding to this cluster is "chap1:0.5:chap2:0.5".
  • the content timeline information of the point of interest information in the intermediate work is generated according to the time period information corresponding to all the clusters.
  • the assignment information of each time period information of the point of interest information in the content timeline information is determined based on each time point information in the cluster corresponding to the time period information and its corresponding target keyword.
  • the assignment information of the time period information includes frequency information of the occurrence frequency of the target keyword corresponding to the point of interest information within the time period. The assignment information of the time period information may be determined according to the time point information of all target key words included in each cluster.
  • for example, if the device 1 determines that a cluster contains two time point information of the target key word w1, "chap1:0.5:1" and "chap2:0.1:2", and one time point information of the target key word w3, "chap2:0.5:1", then according to the assignment information "1", "2", and "1" corresponding to the three time point information, the assignment information of the corresponding time period information can be determined to be "4".
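One simple way to realise this clustering, sketched below, projects each time point onto a single scalar axis (assuming equally long chapters) and starts a new cluster whenever the gap to the previous point exceeds a threshold; both the projection and the gap threshold are illustrative assumptions.

```python
def cluster_time_points(points, total_chapters, gap=0.01):
    """points: list of (chapter, progress, count) tuples; returns one entry per cluster."""
    if not points:
        return []
    # Scalar position of a point within the whole work, assuming equal-length chapters.
    scalar = lambda p: (p[0] - 1 + p[1]) / total_chapters
    points = sorted(points, key=scalar)
    clusters, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        if scalar(cur) - scalar(prev) <= gap:
            current.append(cur)
        else:
            clusters.append(current)
            current = [cur]
    clusters.append(current)
    # Each cluster becomes one time period; its assignment information is the summed counts,
    # and its span runs between the cluster's boundary time points.
    return [{"span": (f"chap{c[0][0]}:{c[0][1]}", f"chap{c[-1][0]}:{c[-1][1]}"),
             "count": sum(p[2] for p in c)} for c in clusters]
```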
  • the key words include character objects in the intermediate work, the word timeline information corresponding to a character object includes character timeline information of the character object in the intermediate work, and the character timeline information includes one or more time point information of the character object in the intermediate work.
  • the device 1 performs character name recognition on the intermediate work and determines one or more character objects in the intermediate work. For example, the probability of a certain word being a name component is trained on a name corpus and used to calculate the probability that a candidate field in the intermediate work is a name, and a field whose probability is higher than a predetermined probability threshold is taken as a recognized person name.
  • the device 1 generates the character timeline information of the character object in the intermediate work according to the determined character object and the position information of the character object in the intermediate work.
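The name-recognition idea in the preceding paragraph can be sketched as follows; the name corpus, the surname/given-name split, and the probability threshold are all assumptions made for illustration.

```python
from collections import Counter

def train_name_component_probs(name_corpus):
    """name_corpus: list of known person names; returns surname and given-name character probabilities."""
    surnames = Counter(name[0] for name in name_corpus)
    given = Counter(ch for name in name_corpus for ch in name[1:])
    total_s, total_g = sum(surnames.values()) or 1, sum(given.values()) or 1
    return ({c: n / total_s for c, n in surnames.items()},
            {c: n / total_g for c, n in given.items()})

def name_probability(candidate, surname_probs, given_probs):
    # Score a candidate field as the product of its component probabilities.
    p = surname_probs.get(candidate[0], 0.0)
    for ch in candidate[1:]:
        p *= given_probs.get(ch, 0.0)
    return p

def recognize_names(candidates, surname_probs, given_probs, threshold=1e-4):
    # Fields whose probability exceeds the predetermined threshold are taken as person names.
    return [c for c in candidates
            if name_probability(c, surname_probs, given_probs) >= threshold]
```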
  • the method further includes step S17 (not shown), in which the device 1 acquires multiple target character timeline information corresponding to multiple target character objects in the intermediate work, wherein each target character timeline information corresponds to one target character object among the multiple target character objects; and generates character-related timeline information of the multiple target character objects in the intermediate work according to the multiple target character timeline information, wherein the character-related timeline information includes one or more time period information, and each time period information includes at least one time point information in at least one target character timeline information.
  • the device 1 determines the target character object that the user is concerned about according to user input or text information selected by the user through operations such as clicking, long pressing, or gestures; or the device 1 collects the user's voice information through a microphone and determines the target character object corresponding to the voice information through voice recognition.
  • the target character object may also be determined by the device 1 according to the user's current creation content.
  • the device 1 determines the target character timeline information corresponding to the target character objects in the word timeline information database corresponding to the intermediate work, and merges the multiple target character timeline information according to the time dimension to generate the character-related timeline information of the multiple target character objects in the intermediate work.
  • here, the method for determining the target character object is the same as or basically the same as the method for determining the point of interest information in the aforementioned step S14, and the method for generating the character-related timeline information is the same as or basically the same as the method for generating the content timeline information in the aforementioned step S16, so they are not repeated here and are incorporated herein by reference.
  • the method further includes step S18 (not shown), the device 1 generates the character relationship graph information of the multiple target character objects in the intermediate work according to the character association timeline information.
  • the device 1 determines the character relationship information of the multiple target character objects according to the work content information corresponding to the time period information included in the character-related timeline information, and generates the character relationship graph information of the multiple target character objects in the intermediate work according to the one or more character relationship information.
  • for example, the device 1 determines the character-related timeline information [chap1, chap17, chap256, chap660] according to the target character timeline information corresponding to the target character objects "Zhang San", "Li Si", and "Wang Wu", and obtains the work content information corresponding to the four time periods in the character-related timeline information, in which "Zhang San" appears in chap1, chap17 and chap256, "Li Si" appears in chap17 and chap256, and "Wang Wu" appears in chap1 and chap660.
  • the device 1 can obtain the text keywords of the work content information corresponding to each time period information through algorithms such as term frequency-inverse document frequency (TF-IDF) or TextRank, and then determine the character relationship information according to the text keywords.
  • for example, if the device 1 determines that the text keyword of chapter 1 is "Shanghai", it can determine that "Zhang San" and "Wang Wu" are related through "Shanghai", and the text keyword "Shanghai" can be added to the character relationship graph information as the character relationship information between the target character objects "Zhang San" and "Wang Wu".
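One way to realise this linking step is sketched below: per-period text keywords are extracted with jieba's built-in TF-IDF extractor and attached to every pair of target characters that co-occur in that period. The data layout and the top-k setting are illustrative assumptions.

```python
import jieba.analyse  # jieba's TF-IDF keyword extraction, assumed here for illustration
from itertools import combinations
from collections import defaultdict

def build_relationship_graph(period_texts, characters_per_period, top_k=3):
    """period_texts: {period_id: text}; characters_per_period: {period_id: [character names]}."""
    graph = defaultdict(set)  # (character_a, character_b) -> set of relation keywords
    for period_id, text in period_texts.items():
        keywords = jieba.analyse.extract_tags(text, topK=top_k)  # TF-IDF text keywords
        # Every pair of target characters appearing in this period is linked by its keywords.
        for a, b in combinations(sorted(characters_per_period.get(period_id, [])), 2):
            graph[(a, b)].update(keywords)
    return graph
```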
  • Table 2 Example table of target person objects and text keywords
  • the step S18 further includes: the device 1 predicts the future character relationship graph information of the multiple target character objects in the intermediate work according to the character relationship graph information and the character-related timeline information.
  • the device 1 predicts the future character relationship graph information of the multiple target character objects in the intermediate work according to a preset character object development template, in combination with the character relationship graph information and the character-related timeline information.
  • or, the device 1 generates, according to the character relationship information of a target character object in the character relationship graph information, future character relationship information for target character objects that do not yet include that character relationship information, and generates the future character relationship graph information of the multiple target character objects in the intermediate work according to the future character relationship information.
  • for example, the character relationship information of the target character objects "Zhang San" and "Li Si" includes "train station", while the character relationship information of the target character objects "Zhang San" and "Wang Wu" does not contain "train station".
  • the device 1 can use "train station" as the future character relationship information of the target character objects "Zhang San" and "Wang Wu" and then generate the future character relationship graph information; the future character relationship graph information may also include the text information corresponding to the time period information to which the character relationship information "train station" belongs.
  • predicting the future character relation graph information through the character relation graph information and the character association timeline information can provide the author with a reference for the future creation direction of the work and improve the author's writing efficiency.
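The example above suggests one plausible reading of this prediction step, sketched below: a relation keyword that already links one pair of target characters is proposed as a candidate future relation for another pair that shares a character but lacks that keyword. This is an illustration of that reading, not the application's exact algorithm.

```python
from itertools import combinations

def predict_future_relations(graph):
    """graph: {(character_a, character_b): set of relation keywords}, as built above."""
    characters = {c for pair in graph for c in pair}
    predictions = {}
    for a, b in combinations(sorted(characters), 2):
        existing = graph.get((a, b), set())
        # Keywords relating a or b to some third character, but not yet to each other.
        candidates = set()
        for pair, keywords in graph.items():
            if (a in pair) ^ (b in pair):
                candidates |= keywords
        if candidates - existing:
            predictions[(a, b)] = candidates - existing
    return predictions

# e.g. graph = {("Li Si", "Zhang San"): {"train station"}}
# predict_future_relations(graph) would propose {"train station"} for ("Wang Wu", "Zhang San")
# only if "Wang Wu" already appears in the graph through some other pair.
```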
  • the method further includes step S19 (not shown), in which the device 1 detects, during the creation process of the intermediate work, whether the intermediate work satisfies a creation assistance trigger condition; if so, the device 1 determines one or more auxiliary key words corresponding to the creation assistance trigger condition, and provides creation reference information corresponding to the intermediate work according to the word timeline information corresponding to the auxiliary key words.
  • the creation assistance trigger condition includes at least any one of the following: the user's text or voice input rate is less than or equal to a predetermined input rate threshold; the time during which the user inputs no text or voice is greater than or equal to a predetermined time threshold; the drop in the user's text or voice input rate within a predetermined time period is greater than or equal to a predetermined rate drop threshold; the user performs a search in the intermediate work; the number of words deleted by the user at one time is greater than or equal to a predetermined deleted word count threshold; the number of times the user deletes text within a predetermined time period is greater than or equal to a predetermined deletion count threshold; the duration of voice deleted by the user at one time is greater than or equal to a predetermined deletion duration threshold.
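Checking such a trigger condition can be sketched as below; the metric names and all threshold values are illustrative assumptions, and only a subset of the listed conditions is shown.

```python
def should_trigger_assistance(metrics,
                              min_input_rate=5.0,     # words per minute, assumed threshold
                              max_idle_seconds=120,   # assumed no-input time threshold
                              max_deleted_words=50):  # assumed one-time deletion threshold
    """metrics: dict of recent editing statistics collected while the user writes."""
    return (metrics.get("input_rate", float("inf")) <= min_input_rate
            or metrics.get("idle_seconds", 0) >= max_idle_seconds
            or metrics.get("deleted_words_once", 0) >= max_deleted_words
            or metrics.get("searched_in_work", False))
```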
  • the device 1 can provide the user with corresponding authoring reference information, so as to help the user to create and improve the user's writing efficiency.
  • the user can also actively trigger this function by searching in the intermediate works, so that the device 1 can provide help to the user in time.
  • the auxiliary key words include, but are not limited to, words currently being written by the user, key words corresponding to one or more recent sentences of the user's current creation content, key words used by the user when searching, or key words corresponding to the text or voice deleted by the user.
  • the device 1 determines the corresponding word timeline information according to the auxiliary key word query, and provides the creation reference information corresponding to the intermediate work accordingly.
  • providing the creation reference information corresponding to the intermediate work according to the word timeline information corresponding to the auxiliary key words includes: the device 1 provides the word timeline information corresponding to at least one of the one or more auxiliary key words, as the creation reference information corresponding to the intermediate work, to the users participating in the creation process; or the device 1 merges, according to the time dimension, the word timeline information respectively corresponding to at least two of the one or more auxiliary key words, and provides the merged timeline information as the creation reference information corresponding to the intermediate work to the users participating in the creation process.
  • the word timeline information corresponding to the one or more auxiliary key words may be provided individually or combined as authoring reference information to the user for reference.
  • for example, the device 1 can take the word timeline information, among the word timeline information corresponding to the auxiliary key words, whose assignment information is greater than or equal to a first assignment information threshold as the creation reference information corresponding to the intermediate work; or merge the word timeline information according to the time dimension, take the merged timeline information as the creation reference information corresponding to the intermediate work, and provide the two types of creation reference information to the user for reference. For another example, the device 1 can generate creation reference timeline information according to the time point information, in the word timeline information corresponding to all the auxiliary key words, whose corresponding assignment information is greater than or equal to a second assignment information threshold, and provide it as the creation reference information corresponding to the intermediate work to the users participating in the creation process.
  • here, the method of merging the word timeline information is the same as or basically the same as the method for generating the content timeline information in the aforementioned step S16, and the method for generating the creation reference timeline information according to the time point information is the same as or basically the same as the method for generating the word timeline information in the aforementioned step S13, so they are not repeated here and are incorporated herein by reference.
  • FIG. 2 shows a structure diagram of a device for generating context information of created content in an authoring process according to an embodiment of the present application.
  • the device 1 includes a first module 11, a second module 12, and a third module 13.
  • the first module 11 processes the intermediate work in the creation process to obtain one or more first words of the intermediate work;
  • the second module 12 filters the one or more first words to obtain one or more key words corresponding to the intermediate work;
  • the third module 13 generates the word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information includes one or more time point information of the key words in the intermediate work.
  • here, the specific implementations of the first module 11, the second module 12, and the third module 13 are the same as or similar to the embodiments of the aforementioned steps S11, S12, and S13, respectively, so they are not repeated here and are incorporated herein by reference.
  • the apparatus 1 further includes a fourth module 14 (not shown), a fifth module 15 (not shown), and a sixth module 16 (not shown).
  • the fourth module 14 obtains the point of interest information of the users participating in the creation process regarding the intermediate work; the fifth module 15 determines, from the one or more key words, one or more target key words matching the point of interest information; the sixth module 16 generates the content timeline information of the point of interest information in the intermediate work according to the word timeline information corresponding to each target key word in the one or more target key words.
  • here, the specific implementations of the fourth module 14, the fifth module 15, and the sixth module 16 are the same as or similar to the embodiments of the aforementioned steps S14, S15, and S16, respectively, so they are not repeated here and are incorporated herein by reference.
  • the sixth module 16 includes a unit 161 (not shown) and a unit 162 (not shown).
  • the unit 161 acquires one or more target word timeline information corresponding to the one or more target key words, wherein each target word timeline information corresponds to one target key word in the one or more target key words;
  • the unit 162 merges the one or more target word timeline information according to the time dimension to obtain the content timeline information of the point of interest information in the intermediate work, wherein the content timeline information includes one or more time period information, and each time period information includes at least one time point information in the at least one target word timeline information.
  • here, the specific implementations of the unit 161 and the unit 162 are the same as or similar to the embodiments of the foregoing steps S161 and S162, respectively, so they are not repeated here and are incorporated herein by reference.
  • the apparatus 1 further includes a seventh module 17 (not shown).
  • the seventh module 17 acquires multiple target character timeline information corresponding to multiple target character objects in the intermediate work, wherein each target character timeline information corresponds to one target character object among the multiple target character objects; and generates the character-related timeline information of the multiple target character objects in the intermediate work according to the multiple target character timeline information, wherein the character-related timeline information includes one or more time period information, and each time period information includes at least one time point information in at least one target character timeline information.
  • here, the specific implementation of the seventh module 17 is the same as or similar to the embodiment of the aforementioned step S17, so it is not repeated here and is incorporated herein by reference.
  • the apparatus 1 further includes an eighth module 18 (not shown).
  • the eighth module 18 generates the character relationship graph information of the multiple target character objects in the intermediate work according to the character-related timeline information.
  • here, the specific implementation of the eighth module 18 is the same as or similar to the embodiment of the aforementioned step S18, so it is not repeated here and is incorporated herein by reference.
  • the apparatus 1 further includes a ninth module 19 (not shown).
  • the ninth module 19 detects, during the creation process of the intermediate work, whether the intermediate work satisfies the creation assistance trigger condition; if so, it determines one or more auxiliary key words corresponding to the creation assistance trigger condition and provides the creation reference information corresponding to the intermediate work according to the word timeline information corresponding to the auxiliary key words.
  • here, the specific implementation of the ninth module 19 is the same as or similar to the embodiment of the aforementioned step S19, so it is not repeated here and is incorporated herein by reference.
  • FIG. 3 illustrates an exemplary system that may be used to implement various embodiments described in this application
  • system 300 can function as any of the devices in each of the described embodiments.
  • system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage device 320) having instructions, and one or more processors (e.g., processor(s) 305) coupled to the one or more computer-readable media and configured to execute the instructions to implement modules that perform the actions described in this application.
  • system control module 310 may include any suitable interface controller to provide any appropriate interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with the system control module 310.
  • the system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315 .
  • the memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
  • System memory 315 may be used, for example, to load and store data and/or instructions for system 300 .
  • system memory 315 may include any suitable volatile memory, eg, suitable DRAM.
  • system memory 315 may include double data rate type quad synchronous dynamic random access memory (DDR4 SDRAM).
  • system control module 310 may include one or more input/output (I/O) controllers to provide interfaces to NVM/storage device 320 and communication interface(s) 325 .
  • NVM/storage device 320 may be used to store data and/or instructions.
  • NVM/storage device 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more hard disk drives (HDDs), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
  • NVM/storage device 320 may include storage resources that are physically part of the device on which system 300 is installed, or it may be accessed by the device without necessarily being part of the device.
  • the NVM/storage device 320 is accessible via the communication interface(s) 325 over a network.
  • Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device.
  • System 300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
  • At least one of the processor(s) 305 may be packaged with the logic of one or more controllers of the system control module 310 (eg, the memory controller module 330 ). For one embodiment, at least one of the processor(s) 305 may be packaged with logic of one or more controllers of the system control module 310 to form a system-in-package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with the logic of one or more controllers of the system control module 310 . For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic of one or more controllers of the system control module 310 to form a system on a chip (SoC).
  • system 300 may be, but is not limited to, a server, workstation, desktop computing device, or mobile computing device (e.g., laptop computing device, handheld computing device, tablet computer, netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, keyboards, liquid crystal display (LCD) screens (including touchscreen displays), non-volatile memory ports, multiple antennas, graphics chips, application-specific integrated circuits (ASICs), and speakers.
  • the present application also provides a computer-readable storage medium storing computer code which, when executed, causes the method as described in any of the preceding items to be performed.
  • the present application also provides a computer program product which, when executed by a computer device, causes the method according to any one of the preceding items to be performed.
  • the present application also provides a computer device, the computer device comprising:
  • one or more processors; and
  • a memory for storing one or more computer programs;
  • wherein the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding item.
  • the present application may be implemented in software and/or a combination of software and hardware, eg, an application specific integrated circuit (ASIC), a general purpose computer, or any other similar hardware device.
  • the software program of the present application may be executed by a processor to implement the steps or functions described above.
  • the software programs of the present application (including associated data structures) may be stored on a computer-readable recording medium, such as RAM memory, magnetic or optical drives or floppy disks, and the like.
  • some steps or functions of the present application may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform various steps or functions.
  • a part of the present application can be applied as a computer program product, such as computer program instructions, which when executed by a computer, through the operation of the computer, can invoke or provide methods and/or technical solutions according to the present application.
  • Those skilled in the art should understand that the existing forms of computer program instructions in computer-readable media include but are not limited to source files, executable files, installation package files, etc.
  • the ways in which computer program instructions are executed by a computer include, but are not limited to: the computer directly executes the instructions; or the computer compiles the instructions and then executes the corresponding compiled program; or the computer reads and executes the instructions; or the computer reads and installs the instructions and then executes the corresponding installed program.
  • the computer-readable medium can be any available computer-readable storage medium or communication medium that can be accessed by a computer.
  • Communication media includes media by which communication signals containing, for example, computer readable instructions, data structures, program modules or other data are transmitted from one system to another.
  • Communication media may include conducted transmission media such as cables and wires (e.g., fiber optic, coaxial, etc.) and wireless (unconducted transmission) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared.
  • Computer readable instructions, data structures, program modules or other data may be embodied, for example, as a modulated data signal in a wireless medium such as a carrier wave or similar mechanism such as embodied as part of spread spectrum technology.
  • a modulated data signal refers to a signal whose one or more characteristics are altered or set in a manner that encodes information in the signal. The modulation may be an analog, digital, or hybrid modulation technique.
  • computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disks, tapes, CDs, DVDs); or other media now known or later developed that can store computer-readable information/data for use by a computer system.
  • an embodiment according to the present application includes an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to operate based on the aforementioned methods and/or technical solutions according to the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method and device for generating context information of created content during the authoring process. The method comprises: processing an intermediate work in the creation process to obtain one or more first words of the intermediate work; filtering the one or more first words to obtain one or more key words corresponding to the intermediate work; and generating word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work. The method helps users participating in the creation of a work to quickly and comprehensively sort out the context of the intermediate work, facilitates information lookup by those users, and reduces the cumbersome operations involved when they review and sort out the intermediate work.

Description

Method and device for generating context information of created content during the authoring process
This application is based on and claims priority to the CN application with application number 202011545832.3 filed on 2020.12.23, the disclosure of which is hereby incorporated into this application in its entirety.
Technical Field
The present application relates to the field of communications, and in particular to a technology for generating context information of created content during the authoring process.
Background
At present, when writing a novel, an author usually needs to review the preceding text in order to find writing ideas or to keep the content consistent. Typically, the author assists this sorting of the text by marking passages in it or by searching it for relevant content using keywords.
Summary of the Invention
An object of the present application is to provide a method and device for generating context information of created content during the authoring process.
According to one aspect of the present application, a method for generating context information of created content during the authoring process is provided, the method comprising:
processing an intermediate work in the creation process to obtain one or more first words of the intermediate work;
filtering the one or more first words to obtain one or more key words corresponding to the intermediate work;
generating word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information includes one or more time point information of the key words in the intermediate work.
According to one aspect of the present application, a device for generating context information of created content during the authoring process is provided, the device comprising:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to:
process the intermediate work in the creation process to obtain one or more first words of the intermediate work;
filter the one or more first words to obtain one or more key words corresponding to the intermediate work;
generate word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information includes one or more time point information of the key words in the intermediate work.
According to one aspect of the present application, a computer-readable medium storing instructions is provided, the instructions, when executed, causing a system to:
process the intermediate work in the creation process to obtain one or more first words of the intermediate work;
filter the one or more first words to obtain one or more key words corresponding to the intermediate work;
generate word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information includes one or more time point information of the key words in the intermediate work.
According to one aspect of the present application, a device for generating context information of created content during the authoring process is provided, the device comprising:
a first module configured to process an intermediate work in the creation process to obtain one or more first words of the intermediate work;
a second module configured to filter the one or more first words to obtain one or more key words corresponding to the intermediate work;
a third module configured to generate word timeline information of the key words in the intermediate work according to the position information of the key words in the intermediate work, wherein the word timeline information includes one or more time point information of the key words in the intermediate work.
Compared with the prior art, the present application obtains one or more first words of an intermediate work by performing word segmentation on the intermediate work during the authoring process, filters out from them one or more key words corresponding to the intermediate work, and generates word timeline information of each key word in the intermediate work according to the chapter position information of the key word in the intermediate work. This helps users participating in the creation of the work to quickly and comprehensively sort out the context of the intermediate work, facilitates information lookup for those users, and reduces the tedious operations involved when they review and organize the intermediate work.
Brief Description of the Drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
FIG. 1 shows a flowchart of a method for generating context information of authored content during the authoring process according to an embodiment of the present application;
FIG. 2 shows a structural diagram of a device for generating context information of authored content during the authoring process according to an embodiment of the present application;
FIG. 3 shows an exemplary system that can be used to implement the various embodiments described in the present application.
The same or similar reference numerals in the drawings denote the same or similar components.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings.
In a typical configuration of the present application, the terminal, the device of the service network and the trusted party each include one or more processors (for example, a central processing unit (CPU)), an input/output interface, a network interface and a memory.
The memory may include non-permanent storage in a computer-readable medium, in the form of random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in the present application includes, but is not limited to, a user equipment, a network device, or a device formed by integrating a user equipment and a network device through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (for example, via a touchpad), such as a smartphone or a tablet computer; the mobile electronic product may run any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical computation and information processing according to preset or stored instructions; its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASIC), programmable logic devices (PLD), field-programmable gate arrays (FPGA), digital signal processors (DSP), embedded devices, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud composed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a kind of distributed computing, a virtual supercomputer made up of a group of loosely coupled computers. The network includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks, wireless ad hoc networks, and the like. Preferably, the device may also be a program running on the user equipment, the network device, or a device formed by integrating a user equipment with a network device, a touch terminal, or a network device with a touch terminal through a network.
Of course, those skilled in the art should understand that the above devices are merely examples; other existing devices or devices that may appear in the future, if applicable to the present application, should also be included within the protection scope of the present application and are incorporated herein by reference.
In the description of the present application, "a plurality of" means two or more, unless explicitly and specifically defined otherwise.
FIG. 1 shows a flowchart of a method for generating context information of authored content during the authoring process according to an embodiment of the present application; the method includes step S11, step S12 and step S13. In step S11, a device 1 processes an intermediate work in the authoring process to obtain one or more first words of the intermediate work; in step S12, the device 1 filters the one or more first words to obtain one or more key words corresponding to the intermediate work; in step S13, the device 1 generates, according to position information of a key word in the intermediate work, word timeline information of the key word in the intermediate work, where the word timeline information includes one or more time point information of the key word in the intermediate work.
In step S11, the device 1 processes the intermediate work in the authoring process to obtain one or more first words of the intermediate work.
In some embodiments, works include, but are not limited to, textual works such as novels and scripts, as well as audio-visual works such as radio dramas and films. An intermediate work is a work that is still being created and has not yet been completed; taking a novel as an example, the intermediate work includes, but is not limited to, the completed chapters of an unfinished novel, or the completed text of the chapter the user is currently writing. In the following embodiments, unless otherwise specified, the present application uses the creation of textual works such as novels as an example to describe the embodiments; those skilled in the art should understand that the following embodiments can also be applied to other types of works.
In some embodiments, the device 1 may obtain an intermediate work uploaded by the user, or query a work database according to the work identification information of the work the user is currently creating to obtain the corresponding intermediate work. In some embodiments, if the intermediate work is a text-type intermediate work, the processing includes performing word segmentation on the intermediate work to obtain one or more first words. Further, the processing may also include part-of-speech analysis of the segmentation result, so that the obtained first words are all nouns, which improves the efficiency of generating the context information of the intermediate work; alternatively, the processing may also include person-name recognition on the segmentation result to obtain first words containing character objects, which facilitates sorting out the context information of the character objects in the intermediate work. In some embodiments, if the intermediate work includes an audio-visual intermediate work, the processing includes: first converting the audio information included in the intermediate work into text information, and then processing the text information to obtain one or more first words; the way the text information is processed is the same as or similar to the processing of text-type intermediate works in the foregoing embodiments, so it is not repeated here and is incorporated herein by reference.
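For illustration only, the following Python sketch shows one way the word segmentation, part-of-speech filtering and person-name recognition described above could be realised, assuming a Chinese text and the open-source jieba segmenter; the patent does not prescribe any particular tool, and the function name and the single-character filter are illustrative assumptions.
```python
import jieba.posseg as pseg  # third-party segmenter; one possible choice, not mandated by the patent

def extract_first_words(text, nouns_only=True, persons_only=False):
    """Segment an intermediate work and keep candidate 'first words'."""
    words = []
    for pair in pseg.cut(text):               # yields (word, part-of-speech flag) pairs
        if persons_only:
            keep = pair.flag == "nr"          # 'nr' marks person names in jieba's tag set
        elif nouns_only:
            keep = pair.flag.startswith("n")  # keep nouns only, per the embodiment above
        else:
            keep = True
        if keep and len(pair.word) > 1:       # drop single-character noise (illustrative filter)
            words.append(pair.word)
    return words
```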
In step S12, the device 1 filters the one or more first words to obtain one or more key words corresponding to the intermediate work.
In some embodiments, the device 1 filters out the one or more key words corresponding to the intermediate work according to the cumulative word frequency of each first word in the intermediate work, or according to the scene tag information to which each first word belongs. For example, the device 1 takes first words whose cumulative word frequency is greater than or equal to a predetermined threshold as the key words corresponding to the intermediate work. Alternatively, the device 1 filters, according to the scene tag information to which each first word belongs, one or more key words that match the work tag information corresponding to the intermediate work. The scene tag information of a first word can be determined by matching the word features of the first word against a tag information library. The tag information library contains mappings between word features and scene tag information; for example, words containing "刀" (knife) map to the scene tag "weapon", and words containing "车" (vehicle) map to the scene tag "transportation". The work tag information corresponding to the intermediate work can be determined according to the work type. Different work tag information corresponds to different scene tag information; for example, if the tag information of novel A is "wuxia", the matching scene tags are "weapon", "martial arts", and so on.
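A minimal sketch of the two filtering rules above, the cumulative-frequency threshold and the scene-tag match; the tag library contents, the work-tag-to-scene mapping and the threshold value are invented placeholders rather than values taken from the patent.
```python
from collections import Counter

# Illustrative mappings; the patent's tag information library is not specified.
TAG_LIBRARY = {"刀": "weapon", "剑": "weapon", "车": "transportation"}
WORK_TAG_TO_SCENES = {"wuxia": {"weapon", "martial arts"}}

def select_keywords(first_words, freq_threshold=5, work_tag=None):
    counts = Counter(first_words)
    # Rule 1: cumulative frequency at or above a predetermined threshold.
    by_freq = {w for w, c in counts.items() if c >= freq_threshold}
    if work_tag is None:
        return by_freq
    # Rule 2: the word's scene tag matches a scene tag of the work's tag information.
    wanted = WORK_TAG_TO_SCENES.get(work_tag, set())
    def scene_of(word):
        return next((tag for feat, tag in TAG_LIBRARY.items() if feat in word), None)
    by_scene = {w for w in counts if scene_of(w) in wanted}
    return by_freq | by_scene
```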
In step S13, the device 1 generates, according to the position information of the key word in the intermediate work, word timeline information of the key word in the intermediate work, where the word timeline information includes one or more time point information of the key word in the intermediate work.
In some embodiments, the position information includes, but is not limited to, chapter position information or time progress information. The chapter position information includes the in-chapter position of the key word within the chapter in which it appears, for example, the key word appears at the 50% mark of chapter 8; or the position of that chapter within the intermediate work, for example, the key word appears in chapter 8 of a novel that currently has 800 chapters, so the key word is located at the 1% mark of the novel. The time progress information includes the playback time of the audio corresponding to the key word in the intermediate work, for example, if the audio corresponding to a key word is played at 3′15″ of a radio drama, the position of the key word can be recorded as 3′15″; or the playback progress of that audio in the intermediate work, for example, if the audio corresponding to a key word is played at the 5% mark of a radio drama, the position of the key word can be recorded as 5%.
In some embodiments, the device 1 generates the word timeline information of a key word in the intermediate work according to the chapter position information and the chapter order of the intermediate work. For example, if the key word w in novel A appears at 23% of chapter 1, 85% of chapter 75, 10% of chapter 366 and 77% of chapter 366, the word timeline corresponding to w is generated in chapter order: [chap1:0.23, chap75:0.85, chap366:0.10, chap366:0.77].
In some embodiments, the device 1 generates the word timeline information of a key word in the intermediate work according to the chapter position information and the order in which the chapters of the intermediate work were completed. For example, the key word w in novel A appears at 23% of chapter 1, 85% of chapter 75, 10% of chapter 366 and 77% of chapter 366, and chapter 75 of the novel was completed later than chapter 366; the word timeline corresponding to w is then generated in order of chapter completion time: [chap1:0.23, chap366:0.10, chap366:0.77, chap75:0.85].
In some embodiments, the word timeline information further includes assignment information for each time point of the key word in the intermediate work. The assignment information includes the frequency with which the key word appears at that time point. For example, the device 1 determines that the key word w in novel A appears in chapter 1, chapter 5, ..., chapter n, and that the key word appears 3, 1, ..., m times in those chapters respectively; the word timeline corresponding to the key word w can then be: [chap1:3, chap5:1, ..., chapn:m].
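The sketch below combines the variants described above: chapters are walked either in chapter order or in completion order, each key word's first occurrence in a chapter is recorded as a fractional in-chapter position (e.g. chap1:0.23), and the per-chapter frequency is attached as the assignment information. The chapter data layout is an assumption made for illustration.
```python
from collections import defaultdict

def build_word_timelines(chapters, keywords, order_by_completion=False):
    """chapters: list of dicts like {"id": "chap1", "text": "...", "completed_at": 0}."""
    if order_by_completion:
        chapters = sorted(chapters, key=lambda c: c["completed_at"])
    timelines = defaultdict(list)                  # keyword -> [(time point, count)]
    for chap in chapters:
        text = chap["text"]
        for w in keywords:
            count = text.count(w)                  # per-chapter frequency (assignment information)
            if count:
                offset = text.index(w) / max(len(text), 1)   # first occurrence, e.g. chap1:0.23
                timelines[w].append((f"{chap['id']}:{offset:.2f}", count))
    return dict(timelines)
```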
In some embodiments, the method further includes: step S14 (not shown), in which the device 1 obtains focus-point information of a user participating in the authoring process with respect to the intermediate work; step S15 (not shown), in which the device 1 determines, from the one or more key words, one or more target key words matching the focus-point information; and step S16 (not shown), in which the device 1 generates content timeline information of the focus-point information in the intermediate work according to the word timeline information corresponding to each of the one or more target key words.
In step S14, the device 1 obtains the focus-point information of a user participating in the authoring process with respect to the intermediate work. In some embodiments, the focus-point information includes the content that the creator pays attention to during the authoring process and wishes to sort out in order to obtain its context in the work, for example, a certain character or a certain object in the work. By obtaining the focus-point information, the device 1 can help the user sort out the work content they care about, making it easier for the user to review earlier text and organize their creative ideas. In some embodiments, the focus-point information can be obtained from a trigger operation of the user. For example, the device 1 determines the focus-point information based on text selected by the user through operations such as tapping, long-pressing or gestures; as another example, the device 1 collects the user's voice through a microphone and determines the corresponding focus-point information through speech recognition. In some embodiments, the focus-point information may also be determined by the device 1 according to the content the user is currently creating. For example, the device 1 determines the focus-point information according to the content the user is currently entering; or according to information about the chapter the user is currently writing (for example, the chapter title); or according to the one or more most recently completed chapters of the novel, for instance by obtaining multiple second words of those chapters through word segmentation and determining the focus-point information from the one or more second words whose frequency in those chapters exceeds a predetermined frequency threshold, or from the one or more second words with the highest frequency in those chapters.
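As one hedged illustration of the last variant above (deriving the focus point from the most recently completed chapters), the sketch below counts candidate words over those chapters and keeps either the words above a frequency threshold or the most frequent ones; it reuses the extract_first_words helper sketched earlier, and both parameters are placeholders.
```python
from collections import Counter

def focus_from_recent_chapters(recent_texts, top_k=3, min_count=None):
    """Infer focus-point candidates from the most recently completed chapters."""
    counts = Counter()
    for text in recent_texts:
        counts.update(extract_first_words(text))       # segmentation sketch from step S11
    if min_count is not None:                           # frequency-threshold variant
        return [w for w, c in counts.items() if c >= min_count]
    return [w for w, _ in counts.most_common(top_k)]    # highest-frequency variant
```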
Here, those skilled in the art should understand that the above methods of obtaining focus-point information are only examples; other existing or future methods that can be used to obtain a user's focus-point information on an intermediate work, if applicable to this embodiment, should also be included within the protection scope of this embodiment and are incorporated herein by reference.
In step S15, the device 1 determines, from the one or more key words, one or more target key words matching the focus-point information. In some embodiments, the device 1 may obtain one or more focus words corresponding to the focus-point information through word segmentation, and determine, from the one or more key words, one or more target key words matching the one or more focus words. In some embodiments, the device 1 may determine the target scene information corresponding to the focus-point information according to the focus words, or through semantic analysis of the focus-point information, and then determine the target key words matching the target scene information.
In some embodiments, step S15 includes: determining the target scene information corresponding to the focus-point information; and determining, from the one or more key words and according to the scene tag information corresponding to each key word, one or more target key words matching the focus-point information, where the scene tag information corresponding to each target key word matches the target scene information. In some embodiments, the device 1 determines the scene tag information corresponding to a key word by matching against the tag information library, based on the key word itself or on a semantic analysis of the paragraph in which the key word appears. The device 1 determines, according to the target scene information corresponding to the focus-point information, the scene tag information matching that target scene information, and then determines the corresponding target key words according to the matched scene tag information.
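A minimal sketch of this scene-tag matching, assuming a scene_of callable (for example the tag-library lookup sketched earlier) that maps a word to its scene tag; the majority-vote choice of target scene is an illustrative simplification rather than a requirement of the patent.
```python
from collections import Counter

def target_scene_for_focus(focus_words, scene_of):
    """Pick the most common scene tag among the focus words (simple majority vote)."""
    tags = Counter(t for t in (scene_of(w) for w in focus_words) if t)
    return tags.most_common(1)[0][0] if tags else None

def target_keywords_for_focus(focus_words, keywords, scene_of):
    """Keep the key words whose scene tag matches the focus point's target scene."""
    scene = target_scene_for_focus(focus_words, scene_of)
    return [w for w in keywords if scene is not None and scene_of(w) == scene]
```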
In step S16, the device 1 generates content timeline information of the focus-point information in the intermediate work according to the word timeline information corresponding to each of the one or more target key words. In some embodiments, the device 1 queries, according to the one or more target key words, the target word timeline information corresponding to each target key word in the intermediate work, and merges the target word timelines to generate the content timeline information corresponding to the focus-point information. For example, the target word timeline information corresponding to the focus-point information obtained by the device 1 is shown in Table 1. The device 1 merges the identical time points in these target word timelines to generate the corresponding content timeline information: [chap1:3, chap10:4, chap17:1, chap188:6, chap256:2, chap344:4, chap598:4, chap660:1].
Table 1. Example of target word timelines
[Table 1 is provided as an image in the original publication (Figure PCTCN2021119605-appb-000001); its contents are not reproduced in the text.]
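Because Table 1 is only available as an image, the sketch below shows this coarse-grained merge in general form: the per-chapter counts of all target key words are summed at identical time points. Applied to the w1/w2 example worked through in the next paragraph, it yields [chap1:4, chap17:1, chap256:3, chap598:1, chap660:1]; the "chap" key format is an illustrative assumption.
```python
from collections import defaultdict

def merge_by_chapter(target_timelines):
    """target_timelines: {keyword: [("chap1", 3), ("chap17", 1), ...]} with per-chapter counts.

    Sums the counts of all target key words chapter by chapter, giving a content
    timeline such as [("chap1", 4), ("chap17", 1), ...]."""
    merged = defaultdict(int)
    for points in target_timelines.values():
        for chapter, count in points:
            merged[chapter] += count
    # order chapters numerically, e.g. "chap1" before "chap10" before "chap256"
    return sorted(merged.items(), key=lambda kv: int(kv[0].replace("chap", "")))
```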
In some embodiments, step S16 includes: step S161 (not shown), in which the device 1 obtains one or more target word timelines corresponding to the one or more target key words, where each target word timeline corresponds to one of the one or more target key words; and step S162 (not shown), in which the device 1 merges the one or more target word timelines along the time dimension to obtain the content timeline information of the focus-point information in the intermediate work, where the content timeline information includes one or more time segments, and each time segment includes at least one time point from at least one of the target word timelines. In some embodiments, the device 1 may determine, in the word timeline library corresponding to the intermediate work, the one or more target word timelines corresponding to the focus-point information, where each target word timeline corresponds to one of the one or more target key words. In some embodiments, the device 1 merges time points that are identical or close (for example, different positions within the same chapter) across the target word timelines to obtain the content timeline information corresponding to the focus-point information. For example, the device 1 determines that the timeline of the target key word w1 corresponding to the focus-point information is [chap1:0.1:3, chap17:0.7:1, chap256:0.4:2, chap598:0.2:1] and the timeline of the target key word w2 is [chap1:0.8:1, chap256:0.4:1, chap660:0.4:1]; the device 1 can merge "chap256:0.4:2" in w1's timeline with "chap256:0.4:1" at the same time point in w2's timeline, obtaining the content timeline information corresponding to the focus-point information: [chap1:0.1:3, chap1:0.8:1, chap17:0.7:1, chap256:0.4:3, chap598:0.2:1, chap660:0.4:1]. Further, the device 1 can also merge "chap1:0.1:3" in w1's timeline with "chap1:0.8:1" in w2's timeline, which lies at a close time point (for example, in the same chapter), obtaining the content timeline information: [chap1:4, chap17:1, chap256:3, chap598:1, chap660:1].
In some embodiments, the content timeline information further includes assignment information of the focus-point information for each time segment. In some embodiments, the assignment information of the focus-point information for a time segment includes the sum of the assignment information of all time points within that time segment. For example, from the timeline of the target key word w1, [chap1:0.1:3, chap17:0.7:1, chap256:0.4:2, chap598:0.2:1], and the timeline of the target key word w2, [chap1:0.8:1, chap256:0.4:1, chap660:0.4:1], the content timeline information corresponding to the focus-point information is determined to be [chap1:4, chap17:1, chap256:3, chap598:1, chap660:1]; the frequency "4" corresponding to the time segment "chap1:4" is the sum of the number of times the target key words w1 and w2 appear in chapter 1. In some embodiments, the device 1 may determine key time segments in the content timeline information according to the assignment information, for example, by taking the time segments whose assignment information is higher than a preset assignment threshold as key time segments, and then preferentially display the work content corresponding to these segments to the user, or display only the work content corresponding to the key time segments, helping the user quickly obtain the key work content related to what they are interested in.
In some embodiments, step S162 includes: the device 1 clusters the multiple time points in the one or more target word timelines to obtain one or more clusters, where each cluster contains at least one time point from the one or more target word timelines; and generates the content timeline information of the focus-point information in the intermediate work according to the one or more clusters, where each cluster corresponds to one time segment in the content timeline information. In some embodiments, the device 1 determines the time segment corresponding to a cluster according to the time points at the cluster's boundaries. For example, if the device 1 determines that the boundaries of a cluster are the time point "chap1:0.5" of the target key word w1 and the time point "chap2:0.5" of the target key word w3, it can determine that all the time points of the target key words in this cluster lie between the 50% mark of chapter 1 and the 50% mark of chapter 2, and that the time segment corresponding to this cluster is "chap1:0.5:chap2:0.5". The content timeline information of the focus-point information in the intermediate work is generated from the time segments corresponding to all clusters.
In some embodiments, the assignment information of the focus-point information for each time segment in the content timeline information is determined based on the time points in the cluster corresponding to that time segment and their corresponding target key words. In some embodiments, the assignment information of a time segment includes the frequency with which the target key words corresponding to the focus-point information appear within that time segment. The assignment information of a time segment can be determined from the time points of all target key words contained in each cluster. For example, if the device 1 determines that a cluster contains two time points of the target key word w1, "chap1:0.5:1" and "chap2:0.1:2", and one time point of the target key word w3, "chap2:0.5:1", it can determine, from the assignment information "1", "2" and "1" of these three time points, that the assignment information of the corresponding time segment is "4".
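One possible realisation of this clustering step is sketched below, assuming every time point has first been normalised to a global position in [0, 1] over the whole work; the simple gap-based rule and the gap value are illustrative choices, not requirements of the patent.
```python
def cluster_time_points(points, gap=0.01):
    """points: [(global_position, count)] with positions normalised to [0, 1] over the work.

    A simple gap-based one-dimensional clustering: a new cluster (time segment) starts
    whenever two consecutive time points are more than `gap` apart; the segment's
    assignment information is the sum of the counts of the points it contains."""
    segments = []
    for pos, count in sorted(points):
        if segments and pos - segments[-1]["end"] <= gap:
            segments[-1]["end"] = pos
            segments[-1]["count"] += count
        else:
            segments.append({"start": pos, "end": pos, "count": count})
    return [((s["start"], s["end"]), s["count"]) for s in segments]
```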
In some embodiments, the key words include character objects in the intermediate work, and the word timeline information corresponding to a character object includes character timeline information of the character object in the intermediate work, where the character timeline information includes one or more time points of the character object in the intermediate work. In some embodiments, the device 1 performs person-name recognition on the intermediate work to determine one or more character objects in the intermediate work. For example, the probability of a given character being part of a name is trained on a name corpus and used to compute the probability of a candidate segment in the intermediate work being a name; segments whose probability is higher than a predetermined probability threshold are taken as recognized person names. Here, those skilled in the art should understand that the above person-name recognition method is only an example; other existing or future methods that can be used for person-name recognition, if applicable to this embodiment, should also be included within the protection scope of this embodiment and are incorporated herein by reference. In some embodiments, the device 1 generates the character timeline information of a character object in the intermediate work according to the determined character object and the position information of the character object in the intermediate work.
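A hedged sketch of the corpus-probability approach to person-name recognition mentioned above; the per-character probabilities would come from training on a name corpus, and the default probability and the threshold shown here are placeholders, not values from the patent.
```python
import math

def name_score(candidate, char_name_prob):
    """Score a candidate segment as a person name, using per-character probabilities
    estimated from a name corpus. An independence assumption with a log-average keeps
    candidates of different lengths comparable."""
    probs = [char_name_prob.get(ch, 1e-4) for ch in candidate]
    return math.exp(sum(math.log(p) for p in probs) / max(len(probs), 1))

def recognise_names(candidates, char_name_prob, threshold=0.3):
    """Keep candidates whose score clears a predetermined probability threshold."""
    return [c for c in candidates if name_score(c, char_name_prob) >= threshold]
```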
In some embodiments, the method further includes step S17 (not shown): the device 1 obtains multiple target character timelines corresponding to multiple target character objects in the intermediate work, where each target character timeline corresponds to one of the multiple target character objects; and generates, according to the multiple target character timelines, character-association timeline information of the multiple target character objects in the intermediate work, where the character-association timeline information includes one or more time segments, and each time segment includes at least one time point from at least one of the target character timelines. In some embodiments, the device 1 determines the target character objects the user cares about according to text entered by the user or selected through operations such as tapping, long-pressing or gestures, or the device 1 collects the user's voice through a microphone and determines the corresponding target character objects through speech recognition. In some embodiments, the target character objects may also be determined by the device 1 according to the content the user is currently creating. In some embodiments, the device 1 determines the target character timelines corresponding to the target character objects in the word timeline library corresponding to the intermediate work, and merges the multiple target character timelines along the time dimension to generate the character-association timeline information of the multiple target character objects in the intermediate work; alternatively, it clusters the multiple time points in the multiple target character timelines to determine the character-association timeline information. Here, the method of determining the target character objects is the same as or substantially the same as the method of determining the focus-point information in step S14 above, and the way the character-association timeline information is generated is the same as or substantially the same as the method of generating the content timeline information in step S16 above, so they are not repeated here and are incorporated herein by reference.
In some embodiments, the method further includes step S18 (not shown): the device 1 generates character-relationship graph information of the multiple target character objects in the intermediate work according to the character-association timeline information. In some embodiments, the device 1 determines the character-relationship information of the multiple target character objects according to the work content corresponding to the time segments contained in the character-association timeline information, and generates the character-relationship graph information of the multiple target character objects in the intermediate work according to the one or more pieces of character-relationship information. For example, as shown in Table 2, the device 1 determines the character-association timeline information [chap1, chap17, chap256, chap660] according to the target character timelines corresponding to the target character objects "张三" (Zhang San), "李四" (Li Si) and "王五" (Wang Wu); the device 1 obtains the work content corresponding to the four time segments in this character-association timeline, where "Zhang San" appears in chap1, chap17 and chap256, "Li Si" appears in chap17 and chap256, and "Wang Wu" appears in chap1 and chap660. The device 1 can obtain the text keywords of the work content corresponding to each time segment through algorithms such as term frequency–inverse document frequency (TF-IDF) or TextRank, and then determine the character-relationship information from the text keywords. For example, if the device 1 determines that the text keyword of chapter 1 is "上海" (Shanghai), it can determine that "Zhang San" and "Wang Wu" are associated through "Shanghai", and the text keyword "Shanghai" can be added to the character-relationship graph information as the character-relationship information of the target character objects "Zhang San" and "Wang Wu".
Table 2. Example of target character objects and text keywords
[Table 2 is provided as an image in the original publication (Figure PCTCN2021119605-appb-000002); its contents are not reproduced in the text.]
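Since Table 2 is only available as an image, the sketch below rebuilds the relation graph in general form: characters that co-occur in a time segment are linked, and the edge is labelled with that segment's text keyword (for example the top TF-IDF or TextRank keyword). The dictionary keys mirror the Zhang San / Li Si / Wang Wu example above and are illustrative only.
```python
from collections import defaultdict
from itertools import combinations

def build_relation_graph(segment_characters, segment_keyword):
    """segment_characters: {"chap1": {"Zhang San", "Wang Wu"}, ...}
    segment_keyword:     {"chap1": "Shanghai", ...}  # e.g. top TF-IDF / TextRank keyword

    Links every pair of characters that co-occur in a time segment and labels the edge
    with that segment's text keyword."""
    graph = defaultdict(set)                       # (charA, charB) -> set of relation labels
    for seg, chars in segment_characters.items():
        for a, b in combinations(sorted(chars), 2):
            graph[(a, b)].add(segment_keyword.get(seg, seg))
    return dict(graph)
```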
In some embodiments, step S18 further includes: the device 1 predicts, according to the character-relationship graph information and the character-association timeline information, future character-relationship graph information of the multiple target character objects in the intermediate work. In some embodiments, the device 1 predicts the future character-relationship graph information of the multiple target characters in the intermediate work according to a preset character development template, combined with the character-relationship graph information and the character-association timeline information. Alternatively, the device 1 generates, from a piece of character-relationship information between target character objects in the character-relationship graph information, future character-relationship information for target character objects that do not yet share that relationship information, and generates the future character-relationship graph information of the multiple target character objects in the intermediate work according to the future character-relationship information. For example, referring to Table 2, the character-relationship information of the target character objects "Zhang San" and "Li Si" includes "火车站" (railway station), while the target character objects "Zhang San" and "Wang Wu" do not share the relationship information "railway station". The device 1 can take "railway station" as future character-relationship information of "Zhang San" and "Wang Wu" and thereby generate future character-relationship graph information; the future character-relationship graph information may also include the text corresponding to the time segment to which the relationship information "railway station" belongs. In this embodiment, predicting future character-relationship graph information from the character-relationship graph information and the character-association timeline information can provide the author with a reference for the future direction of the work and improve the author's writing efficiency.
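A heuristic sketch of the second prediction variant above: a relation label already shared by one pair of characters is proposed as a future relation for another pair that shares a character but lacks the label, as in the "railway station" example. This is one possible reading of the embodiment, not a definitive implementation.
```python
from collections import defaultdict

def predict_future_relations(graph):
    """graph: {(charA, charB): set of relation labels}, as built above.

    Proposes, for each pair, labels held by another pair that shares a character."""
    proposals = defaultdict(set)
    for (a, b), labels in graph.items():
        for (c, d), other_labels in graph.items():
            if (a, b) != (c, d) and {a, b} & {c, d}:
                proposals[(a, b)] |= other_labels - labels
    return {pair: labels for pair, labels in proposals.items() if labels}
```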
In some embodiments, the method further includes step S19 (not shown): during the creation of the intermediate work, the device 1 detects whether the intermediate work satisfies a creation-assistance trigger condition; if so, it determines one or more auxiliary key words corresponding to the creation-assistance trigger condition and provides creation reference information corresponding to the intermediate work according to the word timeline information corresponding to the auxiliary key words.
In some embodiments, the creation-assistance trigger condition includes at least any one of the following: the user's text or voice input rate is less than or equal to a predetermined text or voice input rate threshold; the time during which the user has entered no text or voice is greater than or equal to a predetermined time threshold; the drop in the user's text or voice input rate within a predetermined period is greater than or equal to a predetermined rate-drop threshold; the user performs a search in the intermediate work; the number of characters of text deleted by the user at one time is greater than or equal to a predetermined deleted-character threshold; the number of times the user deletes text within a predetermined period is greater than or equal to a predetermined deletion-count threshold; the duration of voice deleted by the user at one time is greater than or equal to a predetermined deletion-duration threshold. For example, when it is detected that the user has not entered text or voice for a long time, or that the text or voice input rate is less than or equal to the predetermined threshold, or that the input rate is dropping too quickly, it can be considered that the user lacks inspiration during creation, causing their creative efficiency to drop. As another example, when it is detected that the user deletes large portions of their current content or frequently deletes and revises it, it can be considered that the user is dissatisfied with the current content or does not know how to express accurately what they want to write. When these situations are detected, the device 1 can provide the user with corresponding creation reference information to help the user create and to improve writing efficiency. In addition, the user can also actively trigger this function by searching within the intermediate work, so that the device 1 can provide help in a timely manner.
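The trigger conditions listed above translate naturally into a simple predicate over editor telemetry. The field names and threshold values below are assumptions made for illustration; the patent only requires that the thresholds be predetermined.
```python
from dataclasses import dataclass

@dataclass
class EditorStats:                 # illustrative telemetry; field names are assumptions
    input_rate: float              # characters per minute over the last window
    idle_seconds: float            # time with no text or voice input
    rate_drop: float               # drop of the input rate within the window
    searched: bool                 # the user searched within the intermediate work
    last_delete_chars: int         # characters removed in the last single deletion
    deletes_in_window: int         # number of deletions within the window

def needs_assistance(s: EditorStats,
                     rate_min=20, idle_max=120, drop_max=30,
                     delete_chars_max=200, deletes_max=5) -> bool:
    """True if any of the trigger conditions listed above holds (thresholds are examples)."""
    return (s.input_rate <= rate_min
            or s.idle_seconds >= idle_max
            or s.rate_drop >= drop_max
            or s.searched
            or s.last_delete_chars >= delete_chars_max
            or s.deletes_in_window >= deletes_max)
```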
In some embodiments, the auxiliary key words include, but are not limited to, the word the user is currently writing, the key words corresponding to the most recent one or more sentences of the user's current content, the key words used when the user performs a search, or the key words corresponding to the text or voice deleted by the user. The device 1 queries the word timeline information corresponding to the auxiliary key words and, on that basis, provides the creation reference information corresponding to the intermediate work.
In some embodiments, providing the creation reference information corresponding to the intermediate work according to the word timeline information corresponding to the auxiliary key words includes: the device 1 takes the word timeline information corresponding to at least one of the one or more auxiliary key words as the creation reference information corresponding to the intermediate work and provides it to the user participating in the authoring process; or it merges, along the time dimension, the word timelines respectively corresponding to at least two of the one or more auxiliary key words and provides the merged timeline information to the user participating in the authoring process as the creation reference information corresponding to the intermediate work. For example, the word timelines corresponding to the one or more auxiliary key words can be provided to the user as creation reference information either individually or merged. As another example, the device 1 may take the word timelines of auxiliary key words whose assignment information is greater than or equal to a first assignment threshold as creation reference information for the intermediate work, merge the other word timelines along the time dimension and take the merged timeline as creation reference information as well, and provide both kinds of creation reference information to the user for reference. As yet another example, the device 1 may generate a creation reference timeline from those time points, among the word timelines of all auxiliary key words, whose assignment information is greater than or equal to a second assignment threshold, and provide it to the user participating in the authoring process as the creation reference information corresponding to the intermediate work.
Here, the way the word timelines are merged is the same as or substantially the same as the method of generating the content timeline information in step S16 above, and the method of generating the creation reference timeline from time points is the same as or substantially the same as the method of generating the word timeline information in step S13 above, so they are not repeated here and are incorporated herein by reference.
FIG. 2 shows a structural diagram of a device for generating context information of authored content during the authoring process according to an embodiment of the present application; the device 1 includes a one-one module 11, a one-two module 12 and a one-three module 13. The one-one module 11 processes an intermediate work in the authoring process to obtain one or more first words of the intermediate work; the one-two module 12 filters the one or more first words to obtain one or more key words corresponding to the intermediate work; the one-three module 13 generates, according to the position information of a key word in the intermediate work, word timeline information of the key word in the intermediate work, where the word timeline information includes one or more time points of the key word in the intermediate work. Here, the specific implementations of the one-one module 11, the one-two module 12 and the one-three module 13 shown in FIG. 2 are the same as or similar to the embodiments of step S11, step S12 and step S13 described above, respectively, so they are not repeated here and are incorporated herein by reference.
In some embodiments, the device 1 further includes a one-four module 14 (not shown), a one-five module 15 (not shown) and a one-six module 16 (not shown). The one-four module 14 obtains focus-point information of a user participating in the authoring process with respect to the intermediate work; the one-five module 15 determines, from the one or more key words, one or more target key words matching the focus-point information; the one-six module 16 generates content timeline information of the focus-point information in the intermediate work according to the word timeline information corresponding to each of the one or more target key words. Here, the specific implementations of the one-four module 14, the one-five module 15 and the one-six module 16 are the same as or similar to the embodiments of step S14, step S15 and step S16 described above, respectively, so they are not repeated here and are incorporated herein by reference.
In some embodiments, the one-six module 16 includes a one-six-one unit 161 (not shown) and a one-six-two unit 162 (not shown). The one-six-one unit 161 obtains one or more target word timelines corresponding to the one or more target key words, where each target word timeline corresponds to one of the one or more target key words; the one-six-two unit 162 merges the one or more target word timelines along the time dimension to obtain the content timeline information of the focus-point information in the intermediate work, where the content timeline information includes one or more time segments and each time segment includes at least one time point from at least one of the target word timelines. Here, the specific implementations of the one-six-one unit 161 and the one-six-two unit 162 are the same as or similar to the embodiments of step S161 and step S162 described above, respectively, so they are not repeated here and are incorporated herein by reference.
In some embodiments, the device 1 further includes a one-seven module 17 (not shown). The one-seven module 17 obtains multiple target character timelines corresponding to multiple target character objects in the intermediate work, where each target character timeline corresponds to one of the multiple target character objects; and generates, according to the multiple target character timelines, character-association timeline information of the multiple target character objects in the intermediate work, where the character-association timeline information includes one or more time segments, and each time segment includes at least one time point from at least one of the target character timelines. Here, the specific implementation of the one-seven module 17 is the same as or similar to the embodiment of step S17 described above, so it is not repeated here and is incorporated herein by reference.
In some embodiments, the device 1 further includes a one-eight module 18 (not shown). The one-eight module 18 generates character-relationship graph information of the multiple target character objects in the intermediate work according to the character-association timeline information. Here, the specific implementation of the one-eight module 18 is the same as or similar to the embodiment of step S18 described above, so it is not repeated here and is incorporated herein by reference.
In some embodiments, the device 1 further includes a one-nine module 19 (not shown). The one-nine module 19 detects, during the creation of the intermediate work, whether the intermediate work satisfies a creation-assistance trigger condition; if so, it determines one or more auxiliary key words corresponding to the creation-assistance trigger condition and provides creation reference information corresponding to the intermediate work according to the word timeline information corresponding to the auxiliary key words. Here, the specific implementation of the one-nine module 19 is the same as or similar to the embodiment of step S19 described above, so it is not repeated here and is incorporated herein by reference.
FIG. 3 shows an exemplary system that can be used to implement the various embodiments described in the present application.
As shown in FIG. 3, in some embodiments, the system 300 can serve as any of the devices in the described embodiments. In some embodiments, the system 300 may include one or more computer-readable media having instructions (for example, system memory or NVM/storage device 320) and one or more processors (for example, processor(s) 305) coupled to the one or more computer-readable media and configured to execute the instructions to implement modules and thereby perform the actions described in the present application.
For one embodiment, the system control module 310 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with the system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. The memory controller module 330 may be a hardware module, a software module and/or a firmware module.
The system memory 315 may be used, for example, to load and store data and/or instructions for the system 300. For one embodiment, the system memory 315 may include any suitable volatile memory, for example, suitable DRAM. In some embodiments, the system memory 315 may include double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the system control module 310 may include one or more input/output (I/O) controllers to provide an interface to the NVM/storage device 320 and the communication interface(s) 325.
For example, the NVM/storage device 320 may be used to store data and/or instructions. The NVM/storage device 320 may include any suitable non-volatile memory (for example, flash memory) and/or may include any suitable non-volatile storage device(s) (for example, one or more hard disk drives (HDD), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
The NVM/storage device 320 may include storage resources that are physically part of the device on which the system 300 is installed, or it may be accessible by the device without necessarily being part of the device. For example, the NVM/storage device 320 may be accessed over a network via the communication interface(s) 325.
The communication interface(s) 325 may provide an interface for the system 300 to communicate over one or more networks and/or with any other suitable device. The system 300 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with the logic of one or more controllers of the system control module 310 (for example, the memory controller module 330). For one embodiment, at least one of the processor(s) 305 may be packaged together with the logic of one or more controllers of the system control module 310 to form a system in package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with the logic of one or more controllers of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with the logic of one or more controllers of the system control module 310 to form a system on chip (SoC).
In various embodiments, the system 300 may be, but is not limited to, a server, a workstation, a desktop computing device or a mobile computing device (for example, a laptop computing device, a handheld computing device, a tablet computer, a netbook, etc.). In various embodiments, the system 300 may have more or fewer components and/or a different architecture. For example, in some embodiments, the system 300 includes one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touch-screen display), a non-volatile memory port, multiple antennas, a graphics chip, an application-specific integrated circuit (ASIC) and a speaker.
In addition to the methods and devices described in the above embodiments, the present application also provides a computer-readable storage medium storing computer code which, when executed, performs the method described in any of the preceding items.
The present application also provides a computer program product which, when executed by a computer device, performs the method described in any of the preceding items.
The present application also provides a computer device, the computer device including:
one or more processors;
a memory for storing one or more computer programs;
where, when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method described in any of the preceding items.
It should be noted that the present application can be implemented in software and/or a combination of software and hardware, for example, by using an application-specific integrated circuit (ASIC), a general-purpose computer or any other similar hardware device. In one embodiment, the software program of the present application can be executed by a processor to implement the steps or functions described above. Likewise, the software program of the present application (including related data structures) can be stored in a computer-readable recording medium, for example, RAM, a magnetic or optical drive, a floppy disk and similar devices. In addition, some steps or functions of the present application can be implemented in hardware, for example, as a circuit that cooperates with a processor to perform the individual steps or functions.
In addition, a part of the present application can be applied as a computer program product, for example computer program instructions which, when executed by a computer, can, through the operation of the computer, invoke or provide the method and/or technical solution according to the present application. Those skilled in the art should understand that computer program instructions exist in computer-readable media in forms including, but not limited to, source files, executable files, installation package files and the like; accordingly, the ways in which computer program instructions are executed by a computer include, but are not limited to: the computer executes the instructions directly, or the computer compiles the instructions and then executes the corresponding compiled program, or the computer reads and executes the instructions, or the computer reads and installs the instructions and then executes the corresponding installed program. Here, the computer-readable medium can be any available computer-readable storage medium or communication medium accessible to a computer.
Communication media include media whereby communication signals containing, for example, computer-readable instructions, data structures, program modules or other data are transmitted from one system to another. Communication media may include guided transmission media (such as cables and wires (for example, optical fiber, coaxial, etc.)) and wireless (unguided transmission) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave and infrared. Computer-readable instructions, data structures, program modules or other data may be embodied, for example, as a modulated data signal in a wireless medium (such as a carrier wave or a similar mechanism embodied as part of spread-spectrum technology). The term "modulated data signal" refers to a signal one or more of whose characteristics are changed or set in such a way as to encode information in the signal. The modulation may be an analog, digital or mixed modulation technique.
By way of example and not limitation, computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disks, tapes, CDs, DVDs); and other media, now known or later developed, that can store computer-readable information/data for use by a computer system.
Here, an embodiment according to the present application includes an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to operate based on the methods and/or technical solutions according to the foregoing embodiments of the present application.
For those skilled in the art, it is obvious that the present application is not limited to the details of the above exemplary embodiments, and that the present application can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application. Therefore, the embodiments should be regarded in every respect as exemplary and non-limiting, and the scope of the present application is defined by the appended claims rather than by the above description; it is therefore intended that all changes falling within the meaning and scope of the equivalents of the claims be embraced in the present application. No reference sign in a claim should be construed as limiting the claim concerned. In addition, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. Multiple units or devices recited in a device claim can also be implemented by one unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.

Claims (15)

  1. A method for generating context information of authored content during the authoring process, wherein the method comprises:
    processing an intermediate work in the authoring process to obtain one or more first words of the intermediate work;
    filtering the one or more first words to obtain one or more key words corresponding to the intermediate work;
    generating, according to position information of the key words in the intermediate work, word timeline information of the key words in the intermediate work, wherein the word timeline information comprises one or more time point information of the key words in the intermediate work.
  2. The method according to claim 1, wherein the method further comprises:
    obtaining focus-point information of a user participating in the authoring process with respect to the intermediate work;
    determining, from the one or more key words, one or more target key words matching the focus-point information;
    generating content timeline information of the focus-point information in the intermediate work according to the word timeline information corresponding to each of the one or more target key words.
  3. The method according to claim 2, wherein the generating content timeline information of the focus-point information in the intermediate work according to the word timeline information corresponding to each of the one or more target key words comprises:
    obtaining one or more target word timeline information corresponding to the one or more target key words, wherein each target word timeline information corresponds to one of the one or more target key words;
    merging the one or more target word timeline information along the time dimension to obtain the content timeline information of the focus-point information in the intermediate work, wherein the content timeline information comprises one or more time segment information, and each time segment information comprises at least one time point information from at least one of the target word timeline information.
  4. The method according to claim 3, wherein the content timeline information further comprises assignment information of the focus-point information for each time segment information.
  5. The method according to claim 3 or 4, wherein the merging the one or more target word timeline information along the time dimension to obtain the content timeline information of the focus-point information in the intermediate work, wherein the content timeline information comprises one or more time segment information and each time segment information comprises at least one time point information from at least one of the target word timeline information, comprises:
    clustering multiple time point information in the one or more target word timeline information to obtain one or more clusters, wherein each cluster contains at least one time point information from the one or more target word timeline information;
    generating the content timeline information of the focus-point information in the intermediate work according to the one or more clusters, wherein each cluster corresponds to one time segment information in the content timeline information.
  6. The method according to claim 5, wherein the assignment information of the focus-point information for each time segment information in the content timeline information is determined based on the time point information in the cluster corresponding to that time segment information and the corresponding target key words.
  7. The method according to any one of claims 2 to 6, wherein the determining, from the one or more key words, one or more target key words matching the focus-point information comprises:
    determining target scene information corresponding to the focus-point information;
    determining, from the one or more key words and according to the scene tag information corresponding to the key words, one or more target key words matching the focus-point information, wherein the scene tag information corresponding to each target key word matches the target scene information.
  8. The method according to any one of claims 1 to 7, wherein the key words comprise character objects in the intermediate work, and the word timeline information corresponding to a character object comprises character timeline information of the character object in the intermediate work, wherein the character timeline information comprises one or more time point information of the character object in the intermediate work.
  9. The method according to claim 8, wherein the method further comprises:
    obtaining multiple target character timeline information corresponding to multiple target character objects in the intermediate work, wherein each target character timeline information corresponds to one of the multiple target character objects;
    generating, according to the multiple target character timeline information, character-association timeline information of the multiple target character objects in the intermediate work, wherein the character-association timeline information comprises one or more time segment information, and each time segment information comprises at least one time point information from at least one of the target character timeline information.
  10. The method according to claim 9, wherein the method further comprises:
    generating character-relationship graph information of the multiple target character objects in the intermediate work according to the character-association timeline information.
  11. The method according to claim 10, wherein the generating character-relationship graph information of the multiple target character objects in the intermediate work according to the character-association timeline information further comprises:
    predicting, according to the character-relationship graph information and the character-association timeline information, future character-relationship graph information of the multiple target character objects in the intermediate work.
  12. The method according to any one of claims 1 to 11, wherein the method further comprises:
    detecting, during the creation of the intermediate work, whether the intermediate work satisfies a creation-assistance trigger condition;
    if so, determining one or more auxiliary key words corresponding to the creation-assistance trigger condition, and providing creation reference information corresponding to the intermediate work according to the word timeline information corresponding to the auxiliary key words.
  13. The method according to claim 12, wherein the providing creation reference information corresponding to the intermediate work according to the word timeline information corresponding to the auxiliary key words comprises:
    taking the word timeline information corresponding to at least one of the one or more auxiliary key words as the creation reference information corresponding to the intermediate work, and providing it to a user participating in the authoring process; or,
    merging, along the time dimension, the word timeline information respectively corresponding to at least two of the one or more auxiliary key words, and providing the merged timeline information, as the creation reference information corresponding to the intermediate work, to a user participating in the authoring process.
  14. A device for generating context information of authored content during the authoring process, characterised in that the device comprises:
    a processor, and
    a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the method according to any one of claims 1 to 13.
  15. A computer-readable medium storing instructions which, when executed by a computer, cause the computer to perform the operations of the method according to any one of claims 1 to 13.
PCT/CN2021/119605 2020-12-23 2021-09-22 在创作过程中生成已创作内容的脉络信息的方法与设备 WO2022134683A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011545832.3 2020-12-23
CN202011545832.3A CN112685534B (zh) 2020-12-23 2020-12-23 在创作过程中生成已创作内容的脉络信息的方法与设备

Publications (1)

Publication Number Publication Date
WO2022134683A1 true WO2022134683A1 (zh) 2022-06-30

Family

ID=75451507

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/119605 WO2022134683A1 (zh) 2020-12-23 2021-09-22 在创作过程中生成已创作内容的脉络信息的方法与设备

Country Status (2)

Country Link
CN (1) CN112685534B (zh)
WO (1) WO2022134683A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112685534B (zh) * 2020-12-23 2022-12-30 上海掌门科技有限公司 在创作过程中生成已创作内容的脉络信息的方法与设备
CN113420553A (zh) * 2021-07-21 2021-09-21 北京小米移动软件有限公司 文本生成方法、装置、存储介质及电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070294350A1 (en) * 2005-06-29 2007-12-20 Manish Kumar Methods and apparatuses for locating an application during a collaboration session
CN102572356A (zh) * 2012-01-16 2012-07-11 华为技术有限公司 记录会议的方法和会议系统
CN110851538A (zh) * 2020-01-15 2020-02-28 支付宝(杭州)信息技术有限公司 基于区块链的内容生成方法、装置、设备及存储介质
CN112685534A (zh) * 2020-12-23 2021-04-20 上海掌门科技有限公司 在创作过程中生成已创作内容的脉络信息的方法与设备

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9323827B2 (en) * 2007-07-20 2016-04-26 Google Inc. Identifying key terms related to similar passages
CN101334784B (zh) * 2008-07-30 2011-06-15 施章祖 计算机辅助报告与知识库产生的方法
CN103324718B (zh) * 2013-06-25 2016-08-10 百度在线网络技术(北京)有限公司 基于海量搜索日志挖掘话题脉络的方法和系统
US20150302084A1 (en) * 2014-04-17 2015-10-22 Robert Stewart Data mining apparatus and method
CN107704572B (zh) * 2017-09-30 2021-07-13 北京奇虎科技有限公司 人物实体的创作角度挖掘方法及装置
CN108595403A (zh) * 2018-04-28 2018-09-28 掌阅科技股份有限公司 用于辅助撰写的处理方法、计算设备及存储介质
CN109508448A (zh) * 2018-07-17 2019-03-22 网易传媒科技(北京)有限公司 基于长篇文章生成短资讯方法、介质、装置和计算设备
CN109522402A (zh) * 2018-10-22 2019-03-26 国家电网有限公司 一种基于电力行业特征关键词的摘要提取方法及存储介质
CN109522390B (zh) * 2018-11-14 2020-11-13 山东大学 一种搜索结果展示方法和装置
CN110175220B (zh) * 2019-05-16 2023-02-17 镇江市高等专科学校 一种基于关键词位置结构分布的文档相似性度量方法及系统
CN110457439B (zh) * 2019-08-06 2022-03-01 超级知识产权顾问(北京)有限公司 一站式智能写作辅助方法、装置和系统
CN111240673B (zh) * 2020-01-08 2021-06-18 腾讯科技(深圳)有限公司 互动图形作品生成方法、装置、终端及存储介质
CN110851797A (zh) * 2020-01-13 2020-02-28 支付宝(杭州)信息技术有限公司 基于区块链的作品创作方法及装置、电子设备
CN111368063B (zh) * 2020-03-06 2023-03-17 腾讯科技(深圳)有限公司 一种基于机器学习的信息推送方法以及相关装置
CN111680152B (zh) * 2020-06-10 2023-04-18 创新奇智(成都)科技有限公司 目标文本的摘要提取方法及装置、电子设备、存储介质
CN111753508A (zh) * 2020-06-29 2020-10-09 网易(杭州)网络有限公司 文字作品的内容生成方法、装置和电子设备

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070294350A1 (en) * 2005-06-29 2007-12-20 Manish Kumar Methods and apparatuses for locating an application during a collaboration session
CN102572356A (zh) * 2012-01-16 2012-07-11 华为技术有限公司 记录会议的方法和会议系统
CN110851538A (zh) * 2020-01-15 2020-02-28 支付宝(杭州)信息技术有限公司 基于区块链的内容生成方法、装置、设备及存储介质
CN112685534A (zh) * 2020-12-23 2021-04-20 上海掌门科技有限公司 在创作过程中生成已创作内容的脉络信息的方法与设备

Also Published As

Publication number Publication date
CN112685534A (zh) 2021-04-20
CN112685534B (zh) 2022-12-30

Similar Documents

Publication Publication Date Title
CN107256267B (zh) 查询方法和装置
US9954964B2 (en) Content suggestion for posting on communication network
US9299342B2 (en) User query history expansion for improving language model adaptation
US9607048B2 (en) Generation of synthetic context frameworks for dimensionally constrained hierarchical synthetic context-based objects
US7996431B2 (en) Systems, methods and computer program products for generating metadata and visualizing media content
US8468146B2 (en) System and method for creating search index on cloud database
US8027999B2 (en) Systems, methods and computer program products for indexing, searching and visualizing media content
WO2022134683A1 (zh) 在创作过程中生成已创作内容的脉络信息的方法与设备
CN108604233B (zh) 用于个性化即时查询建议的媒体消费场境
US8666749B1 (en) System and method for audio snippet generation from a subset of music tracks
US20140372467A1 (en) Contextual smart tags for content retrieval
US20170300533A1 (en) Method and system for classification of user query intent for medical information retrieval system
AU2017216520A1 (en) Common data repository for improving transactional efficiencies of user interactions with a computing device
US20110153638A1 (en) Continuity and quality of artistic media collections
WO2023016349A1 (zh) 一种文本输入方法、装置、电子设备和存储介质
US20200218760A1 (en) Music search method and device, server and computer-readable storage medium
US11151154B2 (en) Generation of synthetic context objects using bounded context objects
RU2654789C2 (ru) Способ (варианты) и электронное устройство (варианты) обработки речевого запроса пользователя
CN111078849B (zh) 用于输出信息的方法和装置
US11437038B2 (en) Recognition and restructuring of previously presented materials
US9785724B2 (en) Secondary queue for index process
US20140006461A1 (en) Difference analysis in file sub-regions
US11836197B2 (en) Search processing method and apparatus based on clipboard data
US20150161092A1 (en) Prioritizing smart tag creation
WO2022142617A1 (zh) 一种会议群组拆分的方法与设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21908684

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.11.2023)