CN112230838A - Article processing method, article processing device, article processing equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112230838A
CN112230838A
Authority
CN
China
Prior art keywords
content
article
target text
media
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011229208.2A
Other languages
Chinese (zh)
Inventor
罗绮琪
王靖文
邱蕾冰
席俊惠
沈艳慧
张倩
李麟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011229208.2A priority Critical patent/CN112230838A/en
Publication of CN112230838A publication Critical patent/CN112230838A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/216Parsing using statistical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis

Abstract

The application provides an article processing method, an article processing apparatus, an article processing device, and a computer-readable storage medium; the method includes the following steps: in response to a content viewing operation for an article, presenting content of the article in a view interface; when target text content in the content is presented in the view interface, presenting media indication information of the target text content, wherein the media indication information is used for indicating that at least one of the following media contents corresponding to the target text content exists: audio content and video content; and when a trigger operation for the media indication information is received, playing the media content. By the method and the device, media indication information corresponding to text content can be presented in the view interface, enriching the presentation modes of content during article presentation.

Description

Article processing method, article processing device, article processing equipment and computer readable storage medium
Technical Field
The present application relates to the field of mobile internet technologies, and in particular, to an article processing method, an article processing apparatus, an article processing device, and a computer-readable storage medium.
Background
Articles in the form of novels, news, forum posts, and the like are widely used in information flow products because they provide content consumption in multiple dimensions, thereby effectively increasing the time users spend in these products.
In the related art, the content of an article is usually presented as plain text: the user either reads the article character by character, or the content is output as plain audio, that is, the user listens to someone else reading the article aloud to learn its content. The manner of displaying article content is therefore limited.
Disclosure of Invention
The embodiment of the application provides an article processing method, an article processing device, an article processing apparatus and a computer-readable storage medium, which can present media indication information corresponding to text content in a view interface, and enrich the presentation mode of the content in the article presentation process.
The technical scheme of the embodiment of the application is realized as follows:
an embodiment of the present application provides an article processing method, including:
in response to a content viewing operation for an article, presenting content of the article in a view interface;
when target text content in the content is at the view interface, presenting media indication information of the target text content;
wherein the media indication information is used for indicating that at least one of the following media contents corresponding to the target text content exists: audio content and video content;
and when a trigger operation aiming at the media indication information is received, playing the media content.
In the above solution, the presenting the content of the article in the view interface in response to the content viewing operation for the article includes:
responding to the content viewing operation aiming at the article, and acquiring a current article reading mode;
and if the current article reading mode is the automatic reading mode, presenting the content corresponding to the current reading position in the view interface, and automatically updating the content of the presented article.
In the above solution, the presenting the content of the article in the view interface in response to the content viewing operation for the article includes:
when the content viewing operation is a sliding operation for the article, presenting the content in the article in a sliding state in a view interface in response to the sliding operation; or,
and when the content viewing operation is a clicking operation for the article, responding to the clicking operation, turning pages of the article in a view interface, and presenting the content of the article after the pages are turned.
An embodiment of the present application provides an article processing apparatus, including:
the first presentation module is used for responding to content viewing operation aiming at the article and presenting the content of the article in a view interface;
the second presentation module is used for presenting the media indication information of the target text content when the target text content in the content is in the view interface;
wherein the media indication information is used for indicating that at least one of the following media contents corresponding to the target text content exists: audio content and video content;
and the playing module is used for playing the media content when receiving the triggering operation aiming at the media indication information.
In the above solution, before the content of the article is presented in the view interface, the apparatus further includes:
the adjusting module is used for presenting the presentation mode functional items corresponding to the content of the article;
in response to a triggering operation for the presentation mode function item, adjusting a presentation mode of content of the article to a media presentation mode;
the second presentation module is further configured to present media indication information of a target text content when the target text content in the content is in the view interface and the presentation mode is a media presentation mode.
In the above scheme, the first presentation module is further configured to, when the content viewing operation is a sliding operation for the article, present, in response to the sliding operation, content in a sliding state in the article in a view interface; or,
and when the content viewing operation is a clicking operation for the article, responding to the clicking operation, turning pages of the article in a view interface, and presenting the content of the article after the pages are turned.
In the above scheme, the first presentation module is further configured to obtain a current article reading mode in response to a content viewing operation for an article;
and if the current article reading mode is the automatic reading mode, presenting the content corresponding to the current reading position in the view interface, and automatically updating the content of the presented article.
In the foregoing solution, the second presenting module is further configured to, when a target text content in the content is in a target area of the view interface, present media indication information of the target text content at a presentation position of the target text content in the target area to replace the target text content.
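The in-place replacement described above (presenting media indication information at the presentation position of the target text content once it enters a target area of the view interface) can be sketched minimally as follows; the line-based layout, function name, and indication string are illustrative assumptions, not part of the disclosure.

```python
def present_line(line_text: str, line_y: int, viewport: tuple,
                 target_text: str, indication: str) -> str:
    """If the target text content falls inside the target area of the
    view interface (here modeled as a vertical range), replace it at its
    presentation position with its media indication information;
    otherwise present the line unchanged."""
    top, bottom = viewport
    if target_text in line_text and top <= line_y <= bottom:
        return line_text.replace(target_text, indication)
    return line_text
```

Only a line whose vertical position lies within the target area is rewritten; lines outside the area keep their original text, matching the condition "when a target text content in the content is in a target area of the view interface".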
In the foregoing solution, the second presentation module is further configured to display the target text content in the content in a target area of the view interface in a target display style, and to present media indication information of the target text content in a media display area in the view interface;
the target display style is used for distinguishing from other contents except the target text contents in the view interface.
In the above scheme, when the media content is video content, the playing module is further configured to present a playing window floating above the view interface, and play the video content through the playing window.
In the above scheme, the playing module is further configured to jump to a detail page indicated by the media indication information;
and playing the media content in the detail page.
In the above scheme, the playing module is further configured to obtain a content scene corresponding to the target text content;
and presenting a background image matched with the content scene in a playing interface of the media content, and playing the media content based on the playing interface presenting the background image.
In the above scheme, the playing module is further configured to play the media content during the process of viewing the content of the article, and to stop playing the media content when the target text content is no longer presented in the view interface and the media content has not finished playing.
In the above solution, before the content of the article is presented in the view interface, the method further includes:
an obtaining module, configured to send an obtaining request for content of the article, where the obtaining request carries an article identifier of the article, and the article identifier is used for finding the content of the article and media content corresponding to target text content in the content;
receiving the content of the article and the media content corresponding to the target text content in the content;
and performing material replacement to generate the media indication information of the target text content based on the media content.
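As a hedged sketch of the acquisition flow above, the request could carry the article identifier, and media indication entries could then be generated from the returned media content by material replacement; every field and function name here is an assumption for illustration only, not an API defined by the disclosure.

```python
def build_acquisition_request(article_id: str) -> dict:
    """Hypothetical acquisition request: carries the article identifier
    used to find the article content and its corresponding media."""
    return {"type": "get_article_content", "article_id": article_id}

def generate_media_indications(media_by_target: dict) -> dict:
    """For each target text, produce indication information describing
    which media types (audio and/or video) exist for it, as the basis
    for material replacement in the view interface."""
    indications = {}
    for target_text, media in media_by_target.items():
        indications[target_text] = {
            "has_audio": "audio_url" in media,
            "has_video": "video_url" in media,
        }
    return indications
```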
In the above scheme, the apparatus further comprises:
the screening module is used for carrying out target symbol detection on the content of the article, and the target symbol is used for indicating the text content of a target scene;
when the target symbol is detected to exist in the content of the article, extracting the text content of the target scene indicated by the target symbol as the target text content in the content.
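The target symbol detection performed by the screening module can be illustrated with a small sketch that treats quotation marks, a plausible choice of target symbol for a dialogue scene, as delimiters; the symbol set and names are assumptions, not taken from the disclosure.

```python
import re

# Assumed target symbols: CJK corner brackets, curly double quotes,
# and ASCII double quotes, each delimiting one dialogue segment.
DIALOGUE_PATTERN = re.compile(r'「([^」]+)」|“([^”]+)”|"([^"]+)"')

def extract_target_text(article_content: str) -> list[str]:
    """Return every text segment delimited by a target symbol, in the
    order it appears in the article content."""
    segments = []
    for match in DIALOGUE_PATTERN.finditer(article_content):
        # Exactly one alternative group matches per hit.
        segments.append(next(g for g in match.groups() if g is not None))
    return segments
```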
In the above scheme, the screening module is further configured to perform semantic analysis on the content of the article to obtain an analysis result;
and extracting text content conforming to the target scene from the content as the target text content based on the analysis result.
In the above scheme, the screening module is further configured to extract text content of a target scene from the content of the article to obtain at least two target text segments;
respectively acquiring scene categories corresponding to the target text segments, and acquiring the interest degrees of target users for the scene categories;
based on each interestingness, sorting the target text segments in descending order to obtain a corresponding target text segment sequence;
and selecting target text segments with a target quantity from the first target text segment in the target text segment sequence as the target text content.
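The interest-based selection above, ranking segments by the target user's interest in their scene category and keeping the first target number of segments, can be sketched as follows; the data shapes and category labels are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class TextSegment:
    text: str
    scene_category: str  # illustrative labels, e.g. "dialogue", "scenery"

def select_target_text(segments, interest_by_category, target_count):
    """Sort the target text segments in descending order of the user's
    interest in their scene category, then keep the first
    `target_count` segments as the target text content."""
    ranked = sorted(
        segments,
        key=lambda s: interest_by_category.get(s.scene_category, 0.0),
        reverse=True,
    )
    return ranked[:target_count]
```

Because `sorted` is stable, segments whose categories share the same interestingness keep their original article order.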
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the article processing method provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer-readable storage medium, which stores executable instructions and is used for causing a processor to execute the executable instructions so as to realize the article processing method provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
In the process of a user viewing article content presented in a view interface, if target text content appears in the view interface, media indication information is presented to indicate that media content, such as audio content or video content, corresponding to the target text content exists; when the user triggers the media indication information, the corresponding media content is played. In this way, the corresponding audio content and/or video content can be viewed in the view interface based on the media indication information, which enriches the content presentation modes during article presentation, enriches the dimensions of emotion transfer, and improves the reading experience.
Drawings
FIGS. 1A-1B are schematic views of an article reading mode provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of an alternative architecture of an article processing system provided by an embodiment of the present application;
fig. 3 is an alternative structural schematic diagram of an electronic device provided in an embodiment of the present application;
FIG. 4 is a schematic flow chart diagram illustrating an alternative method for processing articles according to an embodiment of the present application;
FIG. 5 is a schematic view of page sliding provided in the embodiment of the present application;
FIG. 6 is a schematic diagram of a page display provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of a page display provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a page display provided in an embodiment of the present application;
fig. 9 is a schematic view of processing a dialog scenario provided in an embodiment of the present application;
fig. 10 is a schematic view of processing a dialog scenario provided in an embodiment of the present application;
fig. 11 is a schematic view of a playing interface provided in the embodiment of the present application;
fig. 12 is a schematic view of a playing interface provided in an embodiment of the present application;
FIG. 13A is a schematic flow chart diagram illustrating an alternative method for processing articles according to an embodiment of the present application;
FIG. 13B is a schematic flow chart diagram illustrating an alternative method for processing articles according to an embodiment of the present application;
FIG. 14 is a schematic flow chart diagram illustrating an alternative method for processing articles according to an embodiment of the present application;
fig. 15 is an alternative configuration diagram of an article processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application will be described in further detail below with reference to the attached drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) The client: an application program running in a terminal to provide various services, such as a video playing client.
2) In response to: indicates the condition or state on which a performed operation depends. When the dependent condition or state is satisfied, the one or more performed operations may be performed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
Referring to fig. 1A-1B, fig. 1A-1B are schematic diagrams of article reading manners provided in an embodiment of the present application. In fig. 1A, an article (such as a novel) in an information flow product shows all of its content in text form, such as characters and pictures, for the user to read. In fig. 1B, the user obtains the voice content corresponding to the article content by "listening"; that is, the reading mode has an independent listening function in which the content of the whole article (for example, a whole novel) is converted into speech word by word, either by a human reader or by machine voice, and the user enters the listening function to listen to the complete reading. However, this manner is separated from the scene of text reading and cannot be tightly combined with the text content; it forms a new, independent way of obtaining content and does not belong to "reading" in the strict sense. In the above manners, therefore, the article content is obtained by reading alone or by listening alone, and the dimension for conveying the emotion of dialogue content is relatively limited.
Based on this, the embodiment of the application provides an article processing method, an article processing device and a computer-readable storage medium, which can replace original text content with media indication information in a view interface, and further view media content such as corresponding audio content and/or video content based on the media indication information, thereby enriching the presentation mode of the content in the article presentation process.
Referring to fig. 2, fig. 2 is an alternative architecture diagram of the article processing system 100 according to the embodiment of the present application, in order to support an exemplary application, the terminals (exemplary terminal 400-1 and terminal 400-2 are shown) are connected to the server 200 through the network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two, and uses a wireless link to implement data transmission.
In practical application, the terminal may be various types of user terminals such as a smart phone, a tablet computer, a notebook computer, and the like, and may also be a desktop computer, a game console, a television, or a combination of any two or more of these data processing devices; the server 200 may be a single server configured to support various services, may also be configured as a server cluster, may also be a cloud server, and the like.
In actual implementation, the terminal is provided with a client, such as a short video client, a browser client, a reading client, and the like. When a user opens a client on a terminal to browse an article, the terminal responds to the content viewing operation aiming at the article and presents the content of the article in a view interface; when the target text content in the content is in the view interface, presenting media indication information of the target text content; when receiving a trigger operation for the media indication information, sending an acquisition request of the media content to the server 200; the server 200 determines the media content corresponding to the target text content based on the acquisition request, and returns the media content to the terminal, and the terminal plays the media content after receiving the media content.
Referring to fig. 3, fig. 3 is an optional schematic structural diagram of an electronic device 500 provided in the embodiment of the present application, in practical applications, the electronic device 500 may be the terminal 400-1, the terminal 400-2, or the server 200 in fig. 2, and the electronic device is the terminal 400-1 or the terminal 400-2 shown in fig. 2 as an example, so as to describe the electronic device that implements the article processing method in the embodiment of the present application. The electronic device 500 shown in fig. 3 includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 3.
The processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general purpose processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, where the general purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating to other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the article processing apparatus provided in the embodiments of the present application may be implemented in software, and fig. 3 shows an article processing apparatus 555 stored in a memory 550, which may be software in the form of programs and plug-ins, and includes the following software modules: a first rendering module 5551, a second rendering module 5552 and a playing module 5553, which are logical and thus can be arbitrarily combined or further split according to the implemented functions. The functions of the respective modules will be explained below.
In other embodiments, the article processing apparatus provided in the embodiments of the present Application may be implemented in hardware, and for example, the article processing apparatus provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the article processing method provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The following describes the article processing method provided in the embodiments of the present application. In actual implementation, the article processing method provided in the embodiments of the present application may be implemented by a server or a terminal alone, or by a server and a terminal in cooperation.
Referring to fig. 4, fig. 4 is an alternative flow chart diagram of an article processing method provided in the embodiment of the present application, which will be described with reference to the steps shown in fig. 4.
Step 101: and the terminal responds to the content viewing operation aiming at the article and presents the content of the article in the view interface.
In practical application, a terminal is provided with a client, such as a short video client, a browser client, a reading client, and the like. When a user opens a client on the terminal, the terminal responds to the opening operation of the user on the client, operates the client and presents the content of a corresponding article in a view interface, wherein the article can be literary works such as novels and prose, news or forum posts and the like, and the content of the article can be content in a pure text form or a form of combining texts and pictures.
In some embodiments, the terminal may present the content of the article in the view interface in response to a content viewing operation for the article by:
when the content viewing operation is a sliding operation for the article, presenting the content in the article in a sliding state in a view interface in response to the sliding operation; or when the content viewing operation is a click operation for the article, turning pages of the article in the view interface in response to the click operation, and presenting the content of the turned article.
Here, when the user slides the article in the view interface, the terminal slides the content of the article presented in the view interface in response to the sliding operation; accordingly, as the user slides, the relative position of the article content in the view interface changes, that is, sliding of the presented article content is realized through the sliding operation. When the user clicks the article in the view interface, the terminal turns the page of the article in the current view interface in response to the click operation, that is, page turning of the article is realized through the user's click operation.
Referring to fig. 5, fig. 5 is a schematic page sliding diagram provided in the embodiment of the present application, as shown in fig. 5, when a user slides contents of an article upward in a view interface 501, a relative position of the contents of the article in the view interface 501 changes, for example, when the user slides upward, a relative position of a target text content 502 in the view interface 501 changes, and compared with before the user slides upward, a relative position of the target text content 502 after the user slides upward on the view interface 501 also moves upward.
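The two content viewing operations described above can be sketched as a simple dispatcher over a minimal view state; the field names are illustrative assumptions only. A slide changes the scroll offset (and hence the relative position of the content in the view interface), while a click turns the page.

```python
def handle_content_viewing(operation: str, state: dict) -> dict:
    """Hypothetical dispatcher for content viewing operations: a slide
    moves the presented content, while a click turns to the next page
    and resets the scroll position for the new page."""
    new_state = dict(state)
    if operation == "slide":
        new_state["scroll_offset"] = state["scroll_offset"] + state.get("slide_delta", 1)
    elif operation == "click":
        new_state["page"] = state["page"] + 1
        new_state["scroll_offset"] = 0
    else:
        raise ValueError(f"unknown content viewing operation: {operation}")
    return new_state
```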
In some embodiments, the terminal may also present the content of the article in the view interface in response to a content viewing operation for the article by:
responding to the content viewing operation aiming at the article, and acquiring a current article reading mode; and if the current article reading mode is the automatic reading mode, presenting the content corresponding to the current reading position in the view interface, and automatically updating the content of the presented article.
Here, if the current article reading mode is the automatic mode, the content of the presented article can be updated in an automatic scrolling manner; or update the content of the presented article in the form of automatic page turning.
Step 102: when the target text content in the content is at the view interface, media indication information of the target text content is presented.
The media indication information is used for indicating that at least one of the following media contents corresponding to the target text content exists: audio content and video content. The target text content is a segment of the article, such as a dialog or a famous sentence. In the process of the user viewing the article content in the view interface, if the target text content appears in the view interface, media indication information indicating that corresponding media content exists is presented.
In some embodiments, prior to presenting the content of the article in the view interface, the media presentation mode may be set by: presenting a presentation mode function item of the content of the corresponding article; in response to a trigger operation for the presentation mode function item, adjusting the presentation mode of the content of the article to a media presentation mode; accordingly, when the target text content in the content is presented in the view interface, the media indication information of the target text content can be presented by the following modes: and when the target text content in the content is in the view interface and the presentation mode is the media presentation mode, presenting the media indication information of the target text content.
On the premise that the presentation mode of the content of the article is set to the media presentation mode, if the target text content appears while the user browses the article, the media indication information corresponding to the target text content is presented, encouraging the user to trigger it and experience the target text content in the article through the corresponding media content, so that the emotion, voice, and mood of the target text content are conveyed to the user more genuinely and vividly. When the user closes the media presentation mode, all contents of the article, including the target text content, are presented in plain text form or in a combined text-and-picture form, so the user's original reading mode and experience are not disturbed.
In practical application, before a server corresponding to a client (for example, a server corresponding to a reading client) pushes the content of an article to the client for presentation, the server performs target text recognition on the content of the article to determine the target text content in the article, marks the target text content, and returns the mark information to the terminal. Based on the mark information, the terminal performs dotting (embeds a point) at the position indicated by the mark information, so that when an embedded point corresponding to the target text content is identified in the view interface, media indication information indicating that media content corresponding to the target text content exists is presented.
In some embodiments, the terminal may determine the target text content by: carrying out target symbol detection on the content of the article, wherein the target symbol is used for indicating the text content of a target scene; when the target symbol exists in the content of the article, extracting the text content of the target scene indicated by the target symbol as the target text content in the content of the article.
The text content of the target scene is the text content of a dialog scene or a famous quote, and such text content generally refers to the text in the article indicated by target symbols such as quotation marks “ ” or ‘ ’ (i.e., the text content between the quotation marks). In actual implementation, target symbol detection is performed on the content of the article, that is, whether a target symbol appears in the article is judged; when the target symbol is detected, the text content indicated by the target symbol is extracted, and the extracted text content is screened to exclude text content of types such as quoted words and metaphors.
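The symbol-based extraction described above can be sketched as follows. This is a minimal illustration, assuming curly double quotation marks as the target symbols and using a simple length heuristic as a stand-in for the real screening of quoted words and metaphors:

```python
import re

# Dialog spans are assumed to be delimited by curly double quotation
# marks; the character class can be extended with other target symbols.
TARGET_SYMBOLS = re.compile(r'“([^”]+)”')

def extract_dialog_candidates(article_text, min_len=4):
    """Return the text spans enclosed by target symbols, screening out
    very short quotations (likely cited words rather than dialog)."""
    spans = []
    for match in TARGET_SYMBOLS.finditer(article_text):
        span = match.group(1).strip()
        if len(span) >= min_len:  # crude stand-in for the real screening
            spans.append(span)
    return spans
```

In practice the screening step would involve the semantic checks described below rather than a bare length cutoff.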
In some embodiments, the terminal may also determine the target text content by: performing semantic analysis on the content of the article to obtain an analysis result; based on the analysis result, text content conforming to the target scene is extracted from the content as target text content.
In practical applications, the start flag of a dialog scene is usually a sentence ending with a speech pattern such as "... said:", "... asked:", "XX asked XXX:", "... shouted:", for example: The little white rabbit shouted: "Why do you fly so low?". The end flag of a dialog scene is usually that the sentence following the end of the dialog introduces another subject and contains an "XX said" pattern, for example: "Because the rain is coming," said a swallow. Based on this, semantic analysis can be performed on the content of the article, and the text content conforming to the target scene can be obtained through screening as the target text content.
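The start-flag and end-flag heuristic above can be sketched as follows. The speech-verb lists are illustrative English stand-ins for the real, language-specific flag lists:

```python
import re

# Illustrative start flags: a speech verb, a colon, then an opening
# quotation mark ("... said:", "... asked:", "... shouted:").
START_FLAG = re.compile(r'\b(?:said|asked|shouted|yelled)\s*:\s*“')
# Illustrative end flag: the sentence after the dialog introduces
# another subject with an "XX said" pattern, e.g. '...," said a swallow.'
END_FLAG = re.compile(r'”\s*(?:said|replied)\s+\w+')

def find_dialog_scene(text):
    """Return (start, end) character offsets of the first dialog scene
    bounded by a start flag and an end flag, or None if absent."""
    start = START_FLAG.search(text)
    if not start:
        return None
    end = END_FLAG.search(text, start.end())
    return (start.start(), end.end()) if end else (start.start(), len(text))
```

When no end flag follows the start flag, the sketch falls back to the end of the text, mirroring a dialog that runs to the end of a passage.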
In some embodiments, the terminal may also determine the target text content by: extracting the text content of the target scene from the content of the article to obtain at least two target text segments; respectively acquiring the scene category corresponding to each target text segment, and acquiring the target user's interest degree for each scene category; sorting the target text segments in descending order based on the interest degrees to obtain a corresponding target text segment sequence; and selecting a target number of target text segments, starting from the first target text segment in the sequence, as the target text content.
Here, the target scene may be a dialog scene. When multiple target text segments (i.e., dialog segments) conforming to the target scene exist in the article, the scene category corresponding to each target text segment is acquired; the scene categories include, for example, a daily dialog scene, a confession dialog scene, and the like, and each scene category is scored according to the user's interest degree in it, so that the target text segments the user is more interested in are selected as the target text content. After the target text content is determined, the media content corresponding to the target text content is also obtained; for example, the target text content is manually recorded as audio content, or the target text content is subjected to voice conversion to obtain the corresponding audio content, and the audio content is stored in a corresponding voice material library.
Referring to fig. 9, fig. 9 is a schematic view of processing a dialog scene provided in the embodiment of the present application. As shown in fig. 9, the score corresponding to a daily dialog scene is 1, the score corresponding to a confession dialog scene is 5, and the score corresponding to a banter dialog scene is 5. The target text segments are then sorted in descending order of score (or interest degree) to obtain a descending target text segment sequence, and one or more target text segments the user is relatively interested in are selected from the sequence as the target text content. Meanwhile, the media content corresponding to the target text content is acquired; for example, the target text content is manually recorded as audio content, or subjected to voice conversion to obtain the corresponding audio content, and the audio content is stored in a corresponding voice material library.
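The ranking step can be sketched as follows. The score table mirrors the Fig. 9 example (daily dialog = 1, the two other categories = 5), but the category names and the scene classifier are illustrative stand-ins, not the patent's:

```python
# Hypothetical interest scores per scene category, following the
# Fig. 9 example; real scores would come from the user's interest model.
SCENE_SCORES = {"daily": 1, "confession": 5, "banter": 5}

def select_target_text(segments, classify_scene, top_n=1):
    """Sort dialog segments by the interest score of their scene
    category, in descending order, and keep the top_n as target text."""
    ranked = sorted(segments,
                    key=lambda s: SCENE_SCORES.get(classify_scene(s), 0),
                    reverse=True)
    return ranked[:top_n]
```

`classify_scene` is any callable mapping a segment to its scene category; unknown categories default to a score of 0 so they sort last.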
In practical applications, the media content may be further extended to media elements other than audio. In some embodiments, based on the content scene (i.e., the conveyed meaning) of the target text content, a background image adapted to the content scene is acquired and stored in a picture library; the background image is used to provide background material when the media content is played, i.e., the media content is played through a playing interface containing the corresponding background image. For example, for the target text content "be my girlfriend", a picture in a "rose" style is acquired and stored in the picture library, so that the picture is presented when the media content corresponding to the target text content is played. In other embodiments, based on the meaning of the target text content, a related target scene control is acquired and stored in a scene control library; the target scene control is used to provide the corresponding scene material when the media content is played, and the scene controls include a call control, a short message control, a mail control, a video control, and the like, so that the media content is played through a playing scene containing the target scene. For example, when the target text content is delivered by a character through making a phone call, a call scene containing the call control is presented in the playing interface when the corresponding media content is played, so that the user can experience the conversation scene through an immersive playing interface.
Referring to fig. 10, fig. 10 is a schematic view of processing a dialog scene provided in the embodiment of the present application. As shown in fig. 10, the score corresponding to a daily dialog scene is 1, the score corresponding to a confession dialog scene is 5, and the score corresponding to a banter dialog scene is 5. The target text segments are sorted in descending order of score (or interest degree) to obtain a descending target text segment sequence, and one or more target text segments the user is relatively interested in are selected from the sequence as the target text content. Meanwhile, the media content corresponding to the target text content is acquired; the media content comprises media elements such as voice, pictures, and scene controls, and operators can configure these media elements according to the target text content and its context to form media content comprising at least one of voice, picture, and scene-control elements for presentation.
In some embodiments, when the user views a certain article in the current view interface, the terminal may obtain the content of the article and the media indication information of the target text content by: sending an acquisition request for the content of the article, the request carrying an article identifier of the article, where the article identifier is used for searching for the content of the article and the media content corresponding to the target text content in the content; receiving the content of the article and the media content corresponding to the target text content in the content; and generating the media indication information of the target text content based on the media content through material replacement.
Here, all contents of an article, including the media content corresponding to the target text content, are stored in the server corresponding to the information stream article (e.g., a server corresponding to a reading client). When a user views the content of an article in the view interface, the terminal sends an acquisition request for the content of the article to the server; the server screens out the content of the article, the mark information for the target text content, and the media content corresponding to the target text content based on the article identifier carried in the acquisition request, and returns them to the terminal. The terminal presents the content of the article, and when the mark information for the target text content is identified in the current view interface (i.e., the target text content slides to the corresponding embedded point), a media indicator corresponding to the media content is determined based on the type of the received media content: if the media content is audio content, the media indication information may be a voice bubble; if the media content is video content, the media indication information may be a player icon.
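The type-based choice of indicator can be sketched as a small dispatch. The indicator names here are illustrative identifiers, not actual UI assets:

```python
# Choose the media indicator from the type of the received media
# content, as described above: audio -> voice bubble, video -> player icon.
def media_indicator(media_type):
    indicators = {"audio": "voice_bubble", "video": "player_icon"}
    return indicators.get(media_type)  # None for unsupported types
```

Returning None for an unknown type lets the caller fall back to presenting the target text content as plain text.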
In some embodiments, the terminal may present the media indication information of the target text content when the target text content in the content is in the view interface by: when the target text content in the content is in the target area of the view interface, presenting the media indication information of the target text content at the presentation position of the target text content in the target area, in place of the target text content.
Correspondingly, the terminal may determine the target area as follows: acquiring the text height corresponding to the target text content, the position height of the target text content in the view interface, and the visual height and scrolling height of the view interface; taking the sum of the scrolling height and half of the visual height as the reference height for the target text content; and when the difference between the position height and half of the text height is not lower than the reference height, determining that the target text content in the content is in the target area of the view interface.
In practical application, when the target text content of the article is presented in the target area of the view interface, for example, at the middle-upper position of the view interface, the original target text content is replaced with the media indication information of the target text content; that is, only the corresponding media indication information is presented at the presentation position of the original target text content. The media indication information is inserted between the text contents as a part of the content of the article, and the user can experience the target text content through the media content indicated by the media indication information, so that the emotion, voice, and mood of the target text content are conveyed to the user more genuinely.
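The target-area test can be sketched geometrically. The source leaves the sign conventions ambiguous, so this sketch assumes all heights are measured downward from the document top, `position_height` is the top edge of the target text, and the target area is the upper half of the visible viewport:

```python
# Target-area test under the stated assumptions: the segment is in the
# middle-upper target area once its center lies between the top of the
# viewport and the viewport midpoint.
def in_target_area(position_height, text_height,
                   scroll_height, visual_height):
    reference = scroll_height + visual_height / 2   # viewport midpoint
    center = position_height + text_height / 2      # segment center
    return scroll_height <= center <= reference
```

A client would evaluate this on each scroll event and swap in the media indication information the first time it returns true.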
Referring to fig. 6 to 7, fig. 6 to 7 are schematic page display diagrams provided in an embodiment of the present application, in fig. 6, when target text content 601 (i.e., dialog content) of an article is presented at an upper middle position (i.e., a target area) of a view interface, a media indicator 602 corresponding to the target text content 601 is presented at the position where the target text content 601 is presented, that is, the media indicator 602 replaces the target text content 601, where the media indicator 602 is a voice bubble indicating that media content corresponding to the target text content is audio content. In fig. 7, when the target text content 701 (i.e. dialog content) of the article is presented in the middle-upper position (i.e. the target area) of the view interface, the media indicator 702 corresponding to the target text content 701 is presented at the position where the target text content 701 is presented, i.e. the media indicator 702 replaces the target text content 701, where the media indicator 702 is a player icon indicating that the media content corresponding to the target text content is video content.
In some embodiments, the terminal may also present the media indication information of the target text content when the target text content in the content is in the view interface by: when the target text content in the content is in a target area of the view interface, displaying the target text content in a target display style, and presenting the media indication information of the target text content in a media display area of the view interface, where the target display style is used to distinguish the target text content from the other content in the view interface.
When the target text content of the article is presented in the target area of the view interface, for example, at the middle-upper position of the view interface, the target text content is displayed in a target display style. The target display style is a display style different from that of the other content, for example, a different font, character color, display background, or character transparency, so that the target text content stands out from the other content; this prompts the user with a highlight and improves the reading experience. The corresponding media indication information is presented alongside the target text content, which provides more choices for the user and makes it convenient to select a suitable way of learning the specific content of the target text content.
Referring to fig. 8, fig. 8 is a schematic page display diagram provided in an embodiment of the present application, in fig. 8, when target text content 801 (i.e., dialog content) of an article is presented in an upper middle position (i.e., a target area) of a view interface, the target text content 801 is displayed in an "italic" style to be distinguished from other content, and simultaneously, the target text content 801 is presented and a media indicator 802 corresponding to the target text content 801 is also presented.
Step 103: and when the triggering operation aiming at the media indication information is received, playing the media content.
In some embodiments, when a user views an article in the view interface and sees the media indication information corresponding to the target text content in the article, the user does not necessarily trigger the media indication information. Therefore, when the terminal requests the related content of the article from the server, the server may return only the content of the article and the mark information for the target text content to the terminal, without returning the media content corresponding to the target text content. In this case, if the user triggers the media indication information, the terminal requests the media content indicated by the media indication information from the server as follows:
sending an acquisition request for the media content, the request carrying an article identifier of the article, where the article identifier is used for searching for the content of the article and acquiring the media content corresponding to the target text content; and receiving the media content corresponding to the target text content.
In practical application, all contents of an article, including the media content corresponding to the target text content, are stored in the server. When the user triggers the media indication information corresponding to the target text content in the article, the terminal sends an acquisition request for the media content corresponding to the target text content to the server; the server screens out the media content corresponding to the target text content in the article based on the article identifier carried in the acquisition request, and returns the screened media content to the terminal for presentation.
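The on-demand (lazy) lookup described above can be sketched as follows. The store layout is an illustrative assumption, not the patent's storage scheme: a dict keyed by article identifier, holding a nested "media" map from target text to its media content:

```python
# Screen out the media content for one target text content, given the
# article identifier carried in the acquisition request.
def fetch_media(store, article_id, target_text):
    entry = store.get(article_id)
    if entry is None:
        return None  # unknown article identifier
    return entry.get("media", {}).get(target_text)
```

Fetching media only on trigger avoids shipping audio or video to readers who never tap the indicator.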
When the user triggers the media indication information, the terminal responds to the triggering operation and plays the media content indicated by the media indication information. For example, when the user triggers the media indication information 602 shown in fig. 6, the audio content corresponding to the target text content is played.
In some embodiments, when the media content is video content, the terminal may play the media content by: and displaying a playing window suspended in the view interface, and playing the video content through the playing window.
For example, when the user triggers the media indication information 702 shown in fig. 7, a play window 1101 shown in fig. 11 will be presented, where fig. 11 is a schematic view of a play interface provided in the embodiment of the present application, and video content indicated by the media indication information 702 in fig. 7 is played in the play window 1101.
In some embodiments, the terminal may also play the media content by: skipping the page to a detail page indicated by the media indication information; in the details page, the media content is played.
Here, when the user triggers the media indication information, the terminal responds to the trigger operation by jumping to a detail page, such as an H5 link, entering a more immersive dialog scene that restores the target text content, and plays the audio content or video content corresponding to the target text content in the detail page (e.g., the H5 page).
In some embodiments, the terminal may also play the media content by: acquiring a content scene corresponding to target text content; and presenting a background image matched with the content scene in a playing interface of the media content, and playing the media content based on the playing interface presented with the background image.
Referring to fig. 12, fig. 12 is a schematic view of a playing interface provided in the embodiment of the present application, and when a user triggers the media indication information 702 shown in fig. 7, the playing interface shown in fig. 12 is presented, the playing interface contains a background image adapted to a content scene of the target text content, and audio content or video content is played in the playing interface.
In some embodiments, the terminal may also play the media content by: in the process of viewing the content of the article, playing the media content, and stopping the playing when the target text content is no longer presented on the view interface and the media content has not finished playing.
Here, when the target text content is no longer on the current view page, for example, when the displayed article content is slid so that the target text content slides out of the current view interface, or when a page turn is performed through a click operation so that the target text content is no longer displayed on the current view interface, this indicates that the user has finished watching the media content related to the target text content or is not interested in it; at this time, the playing of the media content corresponding to the target text content is stopped.
Next, continuing to describe the article processing method provided in the embodiment of the present application, referring to fig. 13A, fig. 13A is an optional flowchart of the article processing method provided in the embodiment of the present application, where a reading client is installed on a terminal, and a server is a server corresponding to the reading client, which will be described with reference to the steps shown in fig. 13A.
Step 201: the server extracts the text content of the target scene from the content of the article to obtain at least two target text segments.
Here, the target scene is a dialog scene. In practical implementation, target symbol detection may be performed on the content of the article, and when a target symbol is detected in the content of the article, the text content of the target scene indicated by the target symbol is extracted as a target text segment, where the target symbol is used to indicate the text content of the target scene, such as quotation marks “ ” or ‘ ’; the target text content indicated by the target symbol is the conversation content.
Step 202: the server respectively obtains the scene categories corresponding to the target text segments, and obtains the interest degrees of the target users for the scene categories.
Step 203: and determining the target text segment corresponding to the interestingness exceeding the interestingness threshold value as the target text content.
Here, when there are at least two target text segments (i.e., dialog contents), the target text content in which the user is interested is obtained by filtering based on the level of interest of the target user.
Step 204: and the server acquires the media content corresponding to the target text content.
For example, the target text content may be subjected to voice conversion to obtain corresponding audio content, or video content adapted to the target text content may be acquired.
Step 205: the server stores the content of the article, the target text content in the content and the media content corresponding to the target text content.
Through the above steps, before an article (e.g., each novel) is pushed to the terminal, the server first identifies and screens the conversation content in the article content to obtain the target text content the user is more interested in, generates the media content adapted to the target text content, and then stores and associates the article identifier of the article, the content of the article, the target text content in the content, and the corresponding media content.
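Steps 201–205 above can be sketched end to end as a small server-side pipeline. Every stage is passed in as a function so the pipeline itself stays neutral; `extract`, `score`, and `tts` are hypothetical stand-ins for the dialog extractor, interest scorer, and voice-conversion service:

```python
# Server-side preparation pipeline (steps 201-205), under the stated
# stand-in assumptions; `store` is a plain dict keyed by article id.
def prepare_article(article_id, text, store, extract, score, threshold, tts):
    segments = extract(text)                                  # step 201
    targets = [s for s in segments if score(s) > threshold]   # steps 202-203
    media = {s: tts(s) for s in targets}                      # step 204
    store[article_id] = {"content": text,                     # step 205
                         "targets": targets,
                         "media": media}
    return store[article_id]
```

Keeping extraction, scoring, and media generation as injected callables mirrors the separation between the recognition, interest-model, and material-library components described in the text.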
Step 206: the terminal responds to the content viewing operation aiming at the article and sends an acquisition request aiming at the content of the article to the server.
Here, when a user reads an article through the client of the terminal, the presentation mode of the article may be set to the media presentation mode. In the media presentation mode, when the user views an article in the view interface, the terminal requests the content of the article from the server; if the target text content exists in the content of the article, the media indication information corresponding to the target text content is presented to encourage the user to trigger it and learn the target text content in the article through the corresponding media content.
Step 207: the server determines the content of the article and the media content corresponding to the target text content in the content based on the acquisition request.
Here, the server finds the content of the article and the media content corresponding to the target text content in the content based on the article identifier carried by the acquisition request.
Step 208: the server returns the content of the article and the media content corresponding to the target text content in the content to the terminal.
Step 209: and the terminal responds to the content viewing operation aiming at the article and presents the content of the article in the view interface.
Step 210: and when the target text content is in the view interface and the presentation mode is the media presentation mode, the terminal presents the media indication information of the target text content.
Here, on the premise that the presentation mode of the article is set to the media presentation mode, if, while the user browses the article, the target text content exists in the content of the article and is presented in the upper half of the current view interface, a display instruction for the media indication information is triggered, and the terminal presents the media indication information corresponding to the target text content in response to the display instruction, where the media indication information is used to indicate that at least one of the following media contents corresponding to the target text content exists: audio content and video content.
Step 211: and responding to the triggering operation aiming at the media indication information, and playing the media content.
Referring to fig. 13B, fig. 13B is a schematic view of an optional flow of an article processing method according to an embodiment of the present application, where a reading client is installed on a terminal, and a server is a server corresponding to the reading client, and the steps shown in fig. 13B will be described.
Step 301: the server extracts the text content of the target scene from the content of the article to obtain at least two target text segments.
Here, the target scene is a dialog scene. In practical implementation, target symbol detection may be performed on the content of the article, and when a target symbol is detected in the content of the article, the text content of the target scene indicated by the target symbol is extracted as a target text segment, where the target symbol is used to indicate the text content of the target scene, such as quotation marks “ ” or ‘ ’; the target text content indicated by the target symbol is the conversation content.
Step 302: the server respectively obtains the scene categories corresponding to the target text segments, and obtains the interest degrees of the target users for the scene categories.
Step 303: and the server determines the target text segment corresponding to the interestingness exceeding the interestingness threshold value as the target text content.
Through the above steps, for each article (e.g., each novel), before the article is pushed to the terminal, the server first extracts the conversation segments from the article content and screens out the conversation segments the user is interested in as the target text content; after the target text content in the article is identified, the server acquires and stores the media information corresponding to the target text content.
Step 304: the server returns the content of the article and the target text content in the content to the terminal.
Step 305: and the terminal buries points at the position corresponding to the target text content in the article based on the target text content.
Through the above steps, for each article, the background server stores the content of the article, the target text content in the content, and the media content corresponding to the target text content; the terminal at the front end buries points in the article based on the content of the article and the target text content pushed by the server.
Step 306: and the terminal responds to the content viewing operation aiming at the article and presents the content of the article in the view interface.
Step 307: when the buried point is identified, the terminal presents media indication information of the target text content.
Here, when a buried point is identified in the view interface (e.g., the sliding article content reaches the buried point), the terminal identifies the corresponding target text content (e.g., a dialog segment) and triggers a display instruction for the media indication information; in response to the display instruction, the terminal presents the media indication information of the target text content, where the media indication information is used for indicating that media content corresponding to the target text content exists.
Step 308: and the terminal responds to the triggering operation aiming at the media indication information and sends an acquisition request of the media content to the server.
Step 309: the server determines the media content indicated by the media indication information based on the acquisition request.
Step 310: the server returns the media content to the terminal.
Step 311: the terminal plays the media content.
Through the above steps, since all the contents of the article, including the media content corresponding to the target text content, are stored in the server, when the user triggers the media indication information, the terminal sends an acquisition request for the media content corresponding to the target text content to the server; the server screens out the media content corresponding to the target text content in the article based on the article identifier carried in the acquisition request, and returns the screened media content to the terminal for presentation.
In the following, an exemplary application of the embodiment of the present application in an application scenario in which an article is a novel and a user reads the novel will be described.
Referring to fig. 14, fig. 14 is an alternative flow chart of an article processing method provided in the embodiment of the present application, which will be described with reference to the steps shown in fig. 14.
Step 401: and the terminal responds to the viewing operation aiming at the novel content and presents the novel content in the view interface.
Here, the view interface is a display page of novel content, and the novel content may be content in a text form or content in a form of text and picture combination.
Step 402: and carrying out target symbol detection on the novel content, and extracting the conversation fragment of the conversation scene indicated by the target symbol when the target symbol is detected to exist in the novel content.
Here, the target symbol is used to indicate the dialog content (i.e., the text content described above) of a dialog scene; such dialog content is generally indicated in the novel by target symbols such as quotation marks “ ” or ‘ ’ (i.e., the conversation content between the quotation marks).
In actual implementation, target symbol detection is performed on the novel content; that is, it is judged whether a target symbol appears in the novel. When a target symbol is detected, the dialogue content indicated by the target symbol is extracted, and the extracted dialogue content is screened to exclude dialogue content of types such as quoted words and metaphors, so as to obtain dialogue segments of dialogue scenes (that is, the target text segments). In this way, the dialogue segments can be identified from the novel content in the current view interface.
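The detection-and-screening described above can be sketched with a regular expression over paired quotation marks. This is a minimal sketch under stated assumptions: the quotation marks used as target symbols, the function names, and the keyword-based filter (standing in for the semantic screening of quotations and metaphors) are all illustrative.

```python
import re

# Treat straight or curly double quotation marks as the target symbol and
# capture the enclosed dialogue content.
DIALOG_PATTERN = re.compile(r'[\u201c"]([^\u201d"]+)[\u201d"]')

def extract_dialog_segments(novel_text):
    """Detect target symbols and extract candidate dialogue segments."""
    candidates = [m.group(1) for m in DIALOG_PATTERN.finditer(novel_text)]
    # Stub for screening out quoted words/metaphors rather than spoken
    # dialogue; a real system would use semantic analysis here.
    return [c for c in candidates if not c.startswith("so-called")]
```

A real implementation would also handle nested quotes and locale-specific punctuation such as CJK corner brackets.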
Step 403: and scoring the extracted conversation fragments to obtain corresponding scores.
Here, when the extracted dialogue segment is to be scored, the scene category to which the dialogue segment belongs is first determined, and each scene category is scored according to the user's degree of interest in it or the like. As shown in fig. 9, for example, the score corresponding to a daily dialogue scene is 1, while the scores corresponding to a confession dialogue scene and a teasing dialogue scene are each 5. After semantic analysis is performed on the dialogue segment to determine the dialogue scene to which it belongs, the score of the dialogue segment can be obtained.
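Step 403 can be sketched as a category lookup. Assumptions to note: the per-category scores follow the example above (daily 1, others 5), and `classify_scene` is a keyword placeholder for the semantic analysis the patent describes; its rule and the category names are illustrative.

```python
# Scores per scene category, following the example values described for fig. 9.
SCENE_SCORES = {"daily": 1, "confession": 5, "teasing": 5}

def classify_scene(segment):
    """Placeholder for semantic analysis of the dialogue segment."""
    if "love" in segment.lower():
        return "confession"
    return "daily"

def score_segment(segment):
    """Score a dialogue segment via the scene category it belongs to."""
    return SCENE_SCORES.get(classify_scene(segment), 1)
```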
Step 404: and judging whether the score exceeds a score threshold value.
Here, when the score corresponding to the dialog segment exceeds the score threshold, step 405 is performed; when the score corresponding to the dialog segment does not exceed the score threshold, step 407 is performed.
Step 405: and presenting the media indication information corresponding to the conversation fragment.
Here, when the score corresponding to the dialogue segment exceeds the score threshold, which indicates that the user is interested in the dialogue segment, the media indication information corresponding to the dialogue segment is presented, where the media indication information is used to indicate that media content corresponding to the dialogue segment exists. For example, the dialogue segment may be subjected to voice conversion to obtain corresponding audio content; in this case, the presented media indication information may be a voice bubble indicating that audio content corresponding to the dialogue segment exists. If the media content is video content, the media indication information may be a player icon.
Step 406: and responding to the triggering operation aiming at the media indication information, and playing the media content.
In practical application, when the user triggers the media indication information, the terminal responds to the triggering operation to play the media content indicated by the media indication information. For example, when the user triggers the media indication information 602 shown in fig. 6, the audio content corresponding to the dialog segment is played; when the user triggers the media indication information 702 shown in fig. 7, the video content corresponding to the dialog segment is played.
In some embodiments, when the user triggers the media indication information, a page jump is made to an H5 link, and the corresponding media content is played in the web page of the H5 link. Referring to fig. 10, a background image adapted to the content scene (i.e., the conveyed meaning) of the target text content is acquired based on that content scene, where the background image provides the presented background material when the media content is played; that is, the media content is played through a playing interface containing the corresponding background image. For example, for a dialogue segment of a confession, an image in a "rose" style is acquired, so that the "rose"-style image is presented when the media content corresponding to the dialogue segment is played.
In other embodiments, a related target scene control is acquired based on the meaning conveyed by the dialogue segment, where the target scene control provides the corresponding scene material when the media content is played, and the scene controls include an incoming call control, a short message control, an email control, a video control, and the like, so that the media content is played through a playing scene containing the target scene control. For example, when the dialogue segment describes the target user making a phone call, a phone call scene containing the incoming call control is presented in the playing interface when the media content corresponding to the dialogue segment is played, so that the user can experience the dialogue scene through an immersive playing interface.
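The choice of scene control from the conveyed meaning can be sketched as a small rule table. This is a hypothetical sketch: the control identifiers and keyword rules (standing in for semantic analysis of the dialogue segment) are assumptions for illustration.

```python
# Illustrative identifiers for the scene controls named in the description.
SCENE_CONTROLS = {
    "phone": "incoming_call_control",
    "sms": "short_message_control",
    "email": "email_control",
    "video": "video_control",
}

def pick_scene_control(segment):
    """Map the meaning conveyed by a dialogue segment to a scene control."""
    text = segment.lower()
    if "call" in text or "phone" in text:
        return SCENE_CONTROLS["phone"]
    if "message" in text or "texted" in text:
        return SCENE_CONTROLS["sms"]
    if "email" in text:
        return SCENE_CONTROLS["email"]
    # Fall back to the plain playing interface when no scene fits.
    return None
```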
Step 407: and presenting the dialog segments in the novel content.
Through the above method, in the process of reading a novel, if a dialogue sentence or paragraph exists in the novel content presented in the current view interface, the corresponding media indication information is presented to the user in real time, prompting the user to tap to listen to the spoken dialogue or play the video dialogue. This restores the scene of the novel's dialogue and, by combining audio and video, conveys the emotion, voice, and feeling of the dialogue to the user more truly and closely, expanding the dimensions through which the novel conveys emotion, optimizing the novel reading experience, and making the novel more attractive. It also improves the user's stickiness and retention with respect to the novel, enables the user to form an emotional connection with the novel, and provides more opportunities for business cooperation and commercial monetization, such as collaborations between novels, voice actors, and celebrities.
Continuing with the exemplary structure of the article processing apparatus 555 provided in this embodiment of the present application implemented as software modules, in some embodiments, referring to fig. 15, which is an optional structural schematic diagram of the article processing apparatus provided in this embodiment of the present application, the article processing apparatus includes:
a first presentation module 5551, configured to present content of an article in a view interface in response to a content slide operation for the article;
a second presentation module 5552, configured to present media indication information of a target text content among the contents when the target text content is at the view interface;
wherein the media indication information is used for indicating that at least one of the following media contents corresponding to the target text content exists: audio content and video content;
a playing module 5553, configured to play the media content when a trigger operation for the media indication information is received.
In some embodiments, prior to presenting the content of the article in the view interface, the apparatus further comprises:
the adjusting module is used for presenting the presentation mode functional items corresponding to the content of the article;
in response to a triggering operation for the presentation mode function item, adjusting a presentation mode of content of the article to a media presentation mode;
the second presentation module is further configured to present media indication information of a target text content when the target text content in the content is in the view interface and the presentation mode is a media presentation mode.
In some embodiments, the first presentation module is further configured to, when the content viewing operation is a sliding operation for the article, present the content of the article in a sliding state in the view interface in response to the sliding operation; or,
when the content viewing operation is a clicking operation for the article, turn pages of the article in the view interface in response to the clicking operation, and present the content of the article after the pages are turned.
In some embodiments, the first presentation module is further configured to obtain a current article reading mode in response to a content viewing operation for an article;
and if the current article reading mode is the automatic reading mode, presenting the content corresponding to the current reading position in the view interface, and automatically updating the content of the presented article.
In some embodiments, the second presentation module is further configured to present, when a target text content in the content is in a target area of the view interface, media indication information of the target text content at a presentation position of the target text content in the target area to replace the target text content.
In some embodiments, the second presentation module is further configured to display a target text content in the content in a target display style when the target text content is in a target area of the view interface, and
presenting media indication information of the target text content in a media display area in the view interface;
the target display style is used for distinguishing from other contents except the target text contents in the view interface.
In some embodiments, when the media content is a video content, the playing module is further configured to present a playing window suspended in the view interface, and play the video content through the playing window.
In some embodiments, the playing module is further configured to perform a page jump to the detail page indicated by the media indication information;
and playing the media content in the detail page.
In some embodiments, the playing module is further configured to obtain a content scene corresponding to the target text content;
and presenting a background image matched with the content scene in a playing interface of the media content, and playing the media content based on the playing interface presenting the background image.
In some embodiments, the playing module is further configured to play the media content during the process of viewing the content of the article, and
and when the target text content is not presented on the view interface and the media content is not played completely, stopping playing the media content.
In some embodiments, before presenting the content of the article in the view interface, the apparatus further comprises:
an obtaining module, configured to send an obtaining request for content of the article, where the obtaining request carries an article identifier of the article, and the article identifier is used for finding the content of the article and media content corresponding to target text content in the content;
receiving the content of the article and the media content corresponding to the target text content in the content;
and performing material replacement to generate the media indication information of the target text content based on the media content.
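The obtaining module's flow, namely sending an acquisition request carrying the article identifier, receiving the article content plus the media content of the target text, and performing material replacement to generate media indication information, can be sketched as follows. The transport is faked with a callable and the `[audio]`/`[video]` markers are illustrative stand-ins for the real indication-information material.

```python
def fetch_and_mark(article_id, transport):
    """Send an acquisition request for the article's content, then replace
    each target text's material with media indication information."""
    # The acquisition request carries the article identifier; `transport`
    # stands in for a real network call returning content and media.
    response = transport({"article_id": article_id})
    marked = response["content"]
    for target_text, descriptor in response["media"].items():
        tag = "[audio]" if descriptor["type"] == "audio" else "[video]"
        # Material replacement: attach indication information to the text.
        marked = marked.replace(target_text, f"{target_text} {tag}")
    return marked
```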
In some embodiments, the apparatus further comprises:
the screening module is used for carrying out target symbol detection on the content of the article, and the target symbol is used for indicating the text content of a target scene;
when the target symbol is detected to exist in the content of the article, extracting the text content of the target scene indicated by the target symbol as the target text content in the content.
In some embodiments, the screening module is further configured to perform semantic analysis on the content of the article to obtain an analysis result;
and extracting text content conforming to the target scene from the content as the target text content based on the analysis result.
In some embodiments, the screening module is further configured to extract text content of a target scene from the content of the article to obtain at least two target text segments;
respectively acquiring scene categories corresponding to the target text segments, and acquiring the interest degrees of target users for the scene categories;
based on each interestingness, performing descending sequencing on the target text segments to obtain corresponding target text segment sequences;
and selecting target text segments with a target quantity from the first target text segment in the target text segment sequence as the target text content.
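The screening module's selection rule, scoring each target text segment by the target user's interest in its scene category, sorting in descending order, and keeping a target quantity from the front of the sequence, can be sketched as below. The function signature and the example interest values are assumptions for illustration.

```python
def select_target_texts(segments, interest_by_category, categorize, target_count):
    """Rank target text segments by the user's interest in their scene
    categories (descending) and keep the first target_count of them."""
    ranked = sorted(
        segments,
        key=lambda s: interest_by_category.get(categorize(s), 0),
        reverse=True,  # descending order of interestingness
    )
    return ranked[:target_count]
```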
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the article processing method provided by the embodiment of the application when executing the executable instructions stored in the memory.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the article processing method described above in the embodiments of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, cause the processor to execute the article processing method provided by the embodiments of the present application.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; or may be various devices including one of or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A method for processing articles, the method comprising:
in response to a content viewing operation for an article, presenting content of the article in a view interface;
when target text content in the content is at the view interface, presenting media indication information of the target text content;
wherein the media indication information is used for indicating that at least one of the following media contents corresponding to the target text content exists: audio content and video content;
and when a trigger operation aiming at the media indication information is received, playing the media content.
2. The method of claim 1, wherein prior to presenting the content of the article in the view interface, the method further comprises:
presenting a presentation mode function item corresponding to the content of the article;
in response to a triggering operation for the presentation mode function item, adjusting a presentation mode of content of the article to a media presentation mode;
when the target text content in the content is in the view interface, presenting media indication information of the target text content, including:
and when the target text content in the content is in the view interface and the presentation mode is a media presentation mode, presenting the media indication information of the target text content.
3. The method of claim 1, wherein presenting media indication information of a target text content of the content when the target text content is at the view interface comprises:
when the target text content in the content is in the target area of the view interface, presenting the media indication information of the target text content at the presentation position of the target text content in the target area to replace the target text content.
4. The method of claim 1, wherein presenting media indication information of a target text content of the content when the target text content is at the view interface comprises:
when the target text content in the content is in the target area of the view interface, displaying the target text content in a target display style, and
presenting media indication information of the target text content in a media display area in the view interface;
the target display style is used for distinguishing from other contents except the target text contents in the view interface.
5. The method of claim 1, wherein when the media content is video content, the playing the media content comprises:
and displaying a playing window suspended in the view interface, and playing the video content through the playing window.
6. The method of claim 1, wherein the playing the media content comprises:
performing page jumping to a detail page indicated by the media indication information;
and playing the media content in the detail page.
7. The method of claim 1, wherein the playing the media content comprises:
acquiring a content scene corresponding to the target text content;
and presenting a background image matched with the content scene in a playing interface of the media content, and playing the media content based on the playing interface presenting the background image.
8. The method of claim 1, wherein the playing the media content comprises:
in the process of viewing the content of the article, playing the media content and
and when the target text content is not presented on the view interface and the media content is not played completely, stopping playing the media content.
9. The method of claim 1, wherein prior to presenting the content of the article in the view interface, the method further comprises:
sending an acquisition request aiming at the content of the article, wherein the acquisition request carries an article identifier of the article, and the article identifier is used for searching the content of the article and media content corresponding to target text content in the content;
receiving the content of the article and the media content corresponding to the target text content in the content;
and performing material replacement to generate the media indication information of the target text content based on the media content.
10. The method of claim 1, wherein the method further comprises:
carrying out target symbol detection on the content of the article, wherein the target symbol is used for indicating the text content of a target scene;
and when the target symbol is detected to exist in the content of the article, extracting the text content of the target scene indicated by the target symbol as the target text content.
11. The method of claim 1, wherein the method further comprises:
performing semantic analysis on the content of the article to obtain an analysis result;
and extracting text content conforming to the target scene from the content as the target text content based on the analysis result.
12. The method of claim 1, wherein the method further comprises:
extracting text content of a target scene from the content of the article to obtain at least two target text segments;
respectively acquiring scene categories corresponding to the target text segments, and acquiring the interest degrees of target users for the scene categories;
based on each interestingness, performing descending sequencing on the target text segments to obtain corresponding target text segment sequences;
and selecting target text segments with a target quantity from the first target text segment in the target text segment sequence as the target text content.
13. An article processing apparatus, comprising:
the first presentation module is used for responding to content viewing operation aiming at the article and presenting the content of the article in a view interface;
the second presentation module is used for presenting the media indication information of the target text content when the target text content in the content is in the view interface;
wherein the media indication information is used for indicating that at least one of the following media contents corresponding to the target text content exists: audio content and video content;
and the playing module is used for playing the media content when receiving the triggering operation aiming at the media indication information.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the article processing method of any one of claims 1 to 12 when executing the executable instructions stored in the memory.
15. A computer-readable storage medium having stored thereon executable instructions for, when executed by a processor, implementing the article processing method of any one of claims 1 to 12.
CN202011229208.2A 2020-11-06 2020-11-06 Article processing method, article processing device, article processing equipment and computer readable storage medium Pending CN112230838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011229208.2A CN112230838A (en) 2020-11-06 2020-11-06 Article processing method, article processing device, article processing equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011229208.2A CN112230838A (en) 2020-11-06 2020-11-06 Article processing method, article processing device, article processing equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112230838A true CN112230838A (en) 2021-01-15

Family

ID=74123334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011229208.2A Pending CN112230838A (en) 2020-11-06 2020-11-06 Article processing method, article processing device, article processing equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112230838A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115079873A (en) * 2021-03-01 2022-09-20 北京字跳网络技术有限公司 Information display method and device, electronic equipment and storage medium
CN115079873B (en) * 2021-03-01 2024-03-26 北京字跳网络技术有限公司 Information display method, information display device, electronic equipment and storage medium
CN115134648A (en) * 2021-03-26 2022-09-30 腾讯科技(深圳)有限公司 Video playing method, device, equipment and computer readable storage medium
CN113282784A (en) * 2021-06-03 2021-08-20 北京得间科技有限公司 Audio recommendation method, computing device and computer storage medium for dialog novel
CN113778307A (en) * 2021-09-27 2021-12-10 口碑(上海)信息技术有限公司 Information interaction method and device
CN113778307B (en) * 2021-09-27 2023-09-19 口碑(上海)信息技术有限公司 Information interaction method and device
CN115426533A (en) * 2022-07-22 2022-12-02 北京多氪信息科技有限公司 Audio data playing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN112230838A (en) Article processing method, article processing device, article processing equipment and computer readable storage medium
CN111935554B (en) Live information processing method, device, equipment and computer readable storage medium
CN102027473B (en) Method and system for media access by tag cloud
WO2022166579A1 (en) Information presentation method and apparatus, and computer storage medium
US10088983B1 (en) Management of content versions
CN110602516A (en) Information interaction method and device based on live video and electronic equipment
WO2008001350A2 (en) Method and system of providing a personalized performance
CN101826096B (en) Information display method, device and system based on mouse pointing
CN111970577A (en) Subtitle editing method and device and electronic equipment
CN113010698B (en) Multimedia interaction method, information interaction method, device, equipment and medium
CN114095749B (en) Recommendation and live interface display method, computer storage medium and program product
CN115082602B (en) Method for generating digital person, training method, training device, training equipment and training medium for model
CN107657024B (en) Search result display method, device, equipment and storage medium
CN114339285B (en) Knowledge point processing method, video processing method, device and electronic equipment
CN115237301B (en) Method and device for processing barrage in interactive novel
CN108449255B (en) Comment interaction method and equipment, client device and electronic equipment
CN114564666A (en) Encyclopedic information display method, encyclopedic information display device, encyclopedic information display equipment and encyclopedic information display medium
US20240103697A1 (en) Video display method and apparatus, and computer device and storage medium
CN108491178B (en) Information browsing method, browser and server
CN113973223A (en) Data processing method, data processing device, computer equipment and storage medium
CN112835860B (en) Shared document processing method, device, equipment and computer readable storage medium
CN106406882A (en) Method and device for displaying post background in forum
CN113886610A (en) Information display method, information processing method and device
CN113342221A (en) Comment information guiding method and device, storage medium and electronic equipment
CN111914193A (en) Method, device and equipment for processing media information and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination