CN112000256A - Content interaction method and device - Google Patents

Content interaction method and device

Info

Publication number
CN112000256A
CN112000256A (application number CN202010859655.XA; granted as CN112000256B)
Authority
CN
China
Prior art keywords
information
target
voice interaction
content
interaction information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010859655.XA
Other languages
Chinese (zh)
Other versions
CN112000256B (en)
Inventor
郭瑄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010859655.XA priority Critical patent/CN112000256B/en
Publication of CN112000256A publication Critical patent/CN112000256A/en
Application granted granted Critical
Publication of CN112000256B publication Critical patent/CN112000256B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483Interaction with page-structured environments, e.g. book metaphor
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue

Abstract

The invention discloses a content interaction method and device. The method includes: displaying a content detail page of a client, where the content detail page includes detail information of target reading content; in response to an interactive operation on target information in the target reading content, displaying at least one voice interaction information acquisition control; and in response to a voice interaction information acquisition operation on a target voice interaction information acquisition control, obtaining target voice interaction information for the target information and setting it as the shared voice interaction information corresponding to the target information in the target reading content. With this scheme, voice interaction information is attached at the corresponding position of the reading content while the user browses it, realizing real-time interaction with the reading content and making content reading more engaging.

Description

Content interaction method and device
Technical Field
The present application relates to the field of communications technologies, and in particular, to a content interaction method and apparatus.
Background
In recent years, in terminal information browsing scenarios, a user reading interactive content can mark preferences and opinions on the content through likes and comments.
During research and practice of the related art, the inventor found that when interacting with reading content, a user can only like or comment on it after finishing reading. Interaction with the reading content itself is therefore limited and not very engaging, and it is difficult to meet the user's need to interact while reading.
Disclosure of Invention
The embodiments of the application provide a content interaction method and device, which can realize real-time interaction between a user and the content being read, making content reading more engaging.
The embodiment of the application provides a content interaction method, which comprises the following steps:
displaying a content detail page of a client, where the content detail page includes detail information of target reading content;
in response to an interactive operation on target information in the target reading content, displaying at least one voice interaction information acquisition control;
and in response to a voice interaction information acquisition operation on a target voice interaction information acquisition control, obtaining target voice interaction information for the target information, and setting the target voice interaction information as the shared voice interaction information corresponding to the target information in the target reading content, where the shared voice interaction information of the target reading content is used to play the shared voice interaction information corresponding to the target information when a user of a second terminal browses the target information of the target reading content.
Correspondingly, the embodiment of the application also provides another content interaction method, which comprises the following steps:
displaying a content detail page of a client, where the content detail page includes detail information of the target reading content, and target information in the target reading content has corresponding shared voice interaction information;
determining the display position of the target information in the target reading content on the terminal screen;
and acquiring the reading position of the user on the terminal screen, and playing the shared voice interaction information corresponding to the target information when the distance between the reading position and the display position meets a preset requirement.
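The playback condition in the steps above can be sketched as a simple distance check. This is a minimal illustration, not the patent's implementation: the function name `should_play` and the use of a Euclidean-distance threshold as the "preset requirement" are assumptions.

```python
import math

def should_play(reading_pos, display_pos, threshold=50.0):
    """Return True when the user's reading position is close enough to
    the on-screen display position of target information that carries
    shared voice interaction information.

    reading_pos / display_pos are (x, y) screen coordinates in pixels;
    comparing their Euclidean distance to a pixel threshold is one way
    the "preset requirement" could be realized.
    """
    dx = reading_pos[0] - display_pos[0]
    dy = reading_pos[1] - display_pos[1]
    return math.hypot(dx, dy) <= threshold
```

A client would evaluate this each time the reading position updates and start playback on the first transition to True, so the clip is not retriggered on every frame.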
Correspondingly, the embodiment of the present application provides a content interaction device, including:
a first detail page display unit, configured to display a content detail page of the client, where the content detail page includes detail information of target reading content;
an information acquisition control display unit, configured to display at least one voice interaction information acquisition control in response to an interactive operation on target information in the target reading content;
a first obtaining unit, configured to, in response to a voice interaction information acquisition operation on a target voice interaction information acquisition control, obtain target voice interaction information for the target information, and set the target voice interaction information as the shared voice interaction information corresponding to the target information in the target reading content, where the shared voice interaction information of the target reading content is used to play the shared voice interaction information corresponding to the target information when a user of a second terminal browses the target information of the target reading content.
In an embodiment, the information obtaining control displaying unit includes:
the control list display subunit is used for responding to the interaction operation aiming at the target information in the target reading content, and displaying an operation control list aiming at the target information on the content detail page, wherein the operation control list comprises a voice interaction information adding control;
and the control display subunit is used for responding to the triggering operation of the voice interaction information adding control and displaying at least one voice interaction information obtaining control.
In an embodiment, the first obtaining unit includes:
the first obtaining subunit is configured to, in response to a voice interaction information obtaining operation for a target voice interaction information selection control, obtain voice interaction information corresponding to preset voice content of the target voice interaction information selection control, and use the obtained voice interaction information as target voice interaction information of the target information.
In an embodiment, the first obtaining subunit is further configured to respond to a selection operation for a target voice interaction information selection control, and display a sound effect selection page of preset voice content corresponding to the target voice interaction information selection control, where the sound effect selection page includes at least two sound effect selection controls of the preset voice content, and one sound effect selection control corresponds to voice interaction information of the preset voice content under one sound effect; and responding to the selected operation aiming at the target sound effect selection control, and acquiring the voice interaction information corresponding to the target sound effect selection control as the target voice interaction information of the target information.
In an embodiment, the first obtaining unit includes:
the first acquisition subunit is used for responding to the voice interaction information recording operation aiming at the voice interaction information recording control, acquiring the voice information of a user and displaying a user-defined voice acquisition page;
the voice input subunit is used for taking the collected voice information as user-defined voice interaction information when the voice input operation of the user is finished;
an adding control display subunit, configured to display a custom voice adding control corresponding to the custom voice interaction information on the custom voice acquisition page;
and the first selected operation subunit is used for responding to the selected operation aiming at the user-defined voice adding control, and taking the user-defined voice interaction information corresponding to the user-defined voice adding control selected by the user as the target voice interaction information of the target information.
In an embodiment, the first obtaining unit includes:
and the auditing subunit is used for auditing the content of the target voice interaction information, and setting the voice interaction information as the shared voice interaction information corresponding to the target information in the target reading content after the content auditing is passed.
In an embodiment, the auditing subunit is further configured to send the content identification information of the target reading content, the content positioning information of the target information, and the target voice interaction information to a server corresponding to the client, so as to trigger the server to audit the target voice interaction information, and when the target voice interaction information is audited, based on the content identification information of the target reading content, the target voice interaction information is used as the shared voice interaction information of the target information, and is added to the shared voice interaction information set of the target reading content in correspondence with the content positioning information of the target information.
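The client-to-server hand-off described above (content identification information, content positioning information, and the audio, filed into the shared set once the audit passes) can be illustrated with a minimal sketch. All field names and the "pending" status value are assumptions introduced for illustration; they are not defined by the patent.

```python
def build_upload_payload(content_id, positioning_info, audio_ref):
    """Bundle what the client sends to the server: which article the
    annotation belongs to, where in it the target information sits,
    and a reference to the recorded audio awaiting content audit."""
    return {
        "content_id": content_id,          # identifies the target reading content
        "positioning": positioning_info,   # locates the target information in it
        "audio": audio_ref,                # the target voice interaction information
        "status": "pending",               # not yet shared: audit still outstanding
    }

def approve(shared_set, payload):
    """Once the audit passes, add the audio to the article's shared voice
    interaction information set, keyed by content id and position."""
    key = (payload["content_id"], tuple(payload["positioning"]))
    shared_set.setdefault(key, []).append(payload["audio"])
    return shared_set
```

Keying the shared set by (content id, position) mirrors the patent's description of adding the audio "in correspondence with the content positioning information of the target information".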
In an embodiment, the auditing subunit is further configured to obtain content positioning information of the target information, where the content positioning information is used to determine the target information from the target reading content; determining the content contact ratio of the target information and the existing target information in the shared voice interaction information set of the target reading content; merging target information with content contact ratio exceeding a preset contact ratio threshold value and existing target information into new target information, and determining content positioning information of the new target information based on content positioning information of the target information and content positioning information of the existing target information; taking the shared voice interaction information of the target information and the existing target information as the shared voice interaction information of the new target information; and updating the corresponding relation between the content positioning information of the target information and the shared voice interaction information in the shared voice interaction information set based on the content positioning information of the new target information and the shared voice interaction information.
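The overlap-and-merge step above can be sketched with character-offset spans. The definition of the "content contact ratio" as intersection length over the shorter span's length is an assumption; the patent only requires some overlap measure exceeding a preset threshold.

```python
def overlap_ratio(a, b):
    """Content contact ratio of two spans, here taken as the overlap
    length divided by the shorter span's length.
    a and b are (start, end) character offsets, end exclusive."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    shorter = min(a[1] - a[0], b[1] - b[0])
    return inter / shorter if shorter else 0.0

def merge_if_overlapping(new_span, new_clips, existing_span, existing_clips,
                         threshold=0.5):
    """Merge the new target information into an existing entry when their
    overlap exceeds the threshold; the merged span keeps the shared voice
    interaction information of both entries. Returns None below threshold."""
    if overlap_ratio(new_span, existing_span) > threshold:
        merged = (min(new_span[0], existing_span[0]),
                  max(new_span[1], existing_span[1]))
        return merged, existing_clips + new_clips
    return None  # keep the new annotation as a separate entry
```

Merging keeps the shared set compact: two users who marked nearly the same passage end up contributing clips to a single entry rather than two overlapping ones.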
In one embodiment, the content interaction apparatus further includes:
the second acquisition subunit is used for responding to the recording operation aiming at the voice recording control and acquiring the voice information of the user;
and the voice interaction adding subunit is used for taking the collected voice information as the added voice interaction information when the voice input operation of the user is finished, and the sound effect selection page displays the sound effect selection control corresponding to the added voice interaction information.
In one embodiment, the content interaction apparatus further includes:
the second acquisition unit is used for responding to the voice interaction information playing operation aiming at the target information and acquiring the shared voice interaction information corresponding to the target information from the shared voice interaction information of the target reading content;
and the first playing unit is used for playing the shared voice interaction information corresponding to the target information.
Correspondingly, the embodiment of the present application further provides another content interaction apparatus, including:
the second detail page display unit is used for displaying a content detail page of the client, the content detail page comprises detail information of target reading content, and the target information in the target reading content is provided with corresponding shared voice interaction information;
the position determining unit is used for determining the display position of the target information in the target reading content on the terminal screen;
and the fourth acquisition unit is used for acquiring the reading position of the user on the terminal screen, and playing the shared voice interaction information corresponding to the target information when the distance between the reading position and the display position meets the preset requirement.
In an embodiment, the fourth obtaining unit includes:
the first playing subunit is used for playing the shared voice interaction information of the target information if the shared voice interaction information of the target information is one when the distance between the reading position and the display position meets the preset requirement;
the information list display subunit is configured to display a voice interaction information list at a position of the target information if the target information has a plurality of pieces of shared voice interaction information, where the voice interaction information list includes a plurality of voice interaction information playing controls, and one voice interaction information playing control corresponds to one piece of shared voice interaction information;
and the second playing subunit is used for responding to the playing operation aiming at the target voice interaction information playing control and playing the shared voice interaction information corresponding to the target voice interaction information playing control.
In one embodiment, the content interaction apparatus further includes:
a first setting unit, configured to set a play mode of the voice interaction information on the content detail page to an automatic play mode in response to a voice automatic play start operation of a voice interaction information automatic play setting control, and execute the step of determining a display position of the target information in the target reading content on the terminal screen;
and the second setting unit is used for setting the playing mode of the voice interaction information of the content detail page to be a non-automatic playing mode in response to the voice automatic playing closing operation of the voice interaction information automatic playing setting control, and the step of determining the display position of the target information in the target reading content on the terminal screen is not executed.
Accordingly, embodiments of the present application further provide a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes steps in the content interaction method provided in any of the embodiments of the present application.
Correspondingly, an embodiment of the present application further provides a storage medium, where the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to perform steps in any of the content interaction methods provided in the embodiments of the present application.
A content detail page of the client can be displayed, where the content detail page includes detail information of target reading content; in response to an interactive operation on target information in the target reading content, at least one voice interaction information acquisition control is displayed; and in response to a voice interaction information acquisition operation on a target voice interaction information acquisition control, target voice interaction information for the target information is obtained and set as the shared voice interaction information corresponding to the target information in the target reading content, where the shared voice interaction information of the target reading content is used to play the shared voice interaction information corresponding to the target information when a user of a second terminal browses the target information of the target reading content. With this scheme, voice interaction information is attached at the corresponding position of the reading content while the user browses it, realizing real-time interaction with the reading content and making content reading more engaging.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a scene schematic diagram of a content interaction method provided in an embodiment of the present application;
fig. 2a is a flowchart of a content interaction method provided in an embodiment of the present application;
FIG. 2b is a flowchart of an interactive page of a content interaction method according to an embodiment of the present disclosure;
fig. 2c is an interaction flowchart of a content interaction method provided in an embodiment of the present application;
fig. 2d is a table of a voice interaction information directory of a content interaction method according to an embodiment of the present application;
fig. 2e is a flowchart of examining and verifying voice interaction information of a content interaction method according to an embodiment of the present application;
fig. 2f is a schematic diagram of a voice interaction information recording page of the content interaction method according to the embodiment of the present application;
fig. 3a is another flowchart of a content interaction method provided in an embodiment of the present application;
FIG. 3b is another interaction flowchart of a content interaction method provided in an embodiment of the present application;
FIG. 4 is another flow chart of a content interaction method provided by an embodiment of the present application;
FIG. 5a is a diagram of an apparatus for a content interaction method according to an embodiment of the present application;
FIG. 5b is a diagram of another apparatus for a content interaction method according to an embodiment of the present application;
FIG. 5c is a diagram of another apparatus for a content interaction method according to an embodiment of the present application;
FIG. 6a is a diagram of another apparatus for a content interaction method according to an embodiment of the present application;
FIG. 6b is a diagram of another apparatus for a content interaction method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the application provide a content interaction method and apparatus, a computer device, and a storage medium. Specifically, the embodiments of the application provide a content interaction apparatus suitable for a first computer device (which, for distinction, may be called a first content interaction apparatus) and a content interaction apparatus suitable for a second computer device (which may be called a second content interaction apparatus). The first computer device and the second computer device may each be a terminal or a server; the terminal may be a mobile phone, a tablet computer, a notebook computer, or the like, and the server may be a single server or a server cluster composed of a plurality of servers.
Referring to fig. 1, taking the computer device as an example of a terminal, the terminal 10 may display a content detail page of a client, where the content detail page includes detail information of target reading content; responding to the interactive operation aiming at the target information in the target reading content, and displaying at least one voice interactive information acquisition control; and responding to the voice interaction information acquisition operation aiming at the target voice interaction information acquisition control, acquiring target voice interaction information aiming at the target information, and setting the target voice interaction information as shared voice interaction information corresponding to the target information in the target reading content, wherein the shared voice interaction information of the target reading content is used for playing the shared voice interaction information corresponding to the target information when a user of the second terminal browses the target information of the target reading content.
The terminal 20 can display a content detail page of the client, where the content detail page includes detail information of target reading content, and the target information in the target reading content is provided with corresponding shared voice interaction information; determining the display position of target information in the target reading content on a terminal screen; and acquiring the reading position of the user on the terminal screen, and playing the shared voice interaction information corresponding to the target information when the distance between the reading position and the display position meets the preset requirement.
The step of collecting the voice information of the user in response to a recording operation on the voice recording control, and the step of collecting the voice information of the user when a voice interaction information recording operation on the voice interaction information recording control is detected, can be implemented based on speech technology in the field of artificial intelligence; the step of determining the display position of the target information in the target reading content on the terminal screen can be implemented based on computer vision technology in the field of artificial intelligence.
Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine model controlled by a digital computer to extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best effect. The artificial intelligence technology is a comprehensive subject, relates to the field of extensive technology, and integrates the technology of hardware level and the technology of software level. The artificial intelligence software technology mainly comprises natural language processing, machine learning/deep learning and other directions.
The key technologies of Speech Technology are automatic speech recognition (ASR), text-to-speech synthesis (TTS), and voiceprint recognition. Enabling computers to listen, see, speak, and feel is the development direction of future human-computer interaction, and voice is regarded as one of the most promising human-computer interaction modes.
Computer Vision technology (CV) is a science that studies how to make machines "see": using cameras and computers instead of human eyes to identify, track, and measure targets, and further performing image processing so that the processed image is more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
For example, in the step of determining the display position of the target information in the target reading content on the terminal screen, human-eye recognition can be performed through computer vision technology to obtain the user's gaze position, and the reading position on the terminal screen corresponding to the gaze position can then be determined.
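The gaze-to-target step above can be sketched as follows: given a gaze point estimated by an eye-tracking model (outside the scope of this snippet) and the on-screen rectangles of annotated targets, pick the target nearest the gaze point. The function and parameter names are illustrative assumptions.

```python
def nearest_target(gaze, targets):
    """gaze: (x, y) screen point estimated from eye tracking.
    targets: mapping of target id -> (left, top, right, bottom) rectangle,
    the display positions of target information on the terminal screen.
    Returns the id of the target whose rectangle is nearest the gaze point
    (distance 0 when the point is inside the rectangle)."""
    def dist_to_rect(p, rect):
        left, top, right, bottom = rect
        dx = max(left - p[0], 0, p[0] - right)   # horizontal gap, 0 if inside
        dy = max(top - p[1], 0, p[1] - bottom)   # vertical gap, 0 if inside
        return (dx * dx + dy * dy) ** 0.5

    return min(targets, key=lambda tid: dist_to_rect(gaze, targets[tid]))
```

The nearest target would then be fed into the distance check on the reading position, so only annotations close to where the user is actually looking are played.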
Therefore, in the embodiments of the application, voice interaction information is attached at the corresponding position of the reading content while the user browses it, realizing real-time interaction with the reading content; when the content is browsed again later and the user's current reading position is detected to be a position marked with voice interaction information, the corresponding voice interaction information is played, making content reading more engaging.
The present embodiment can be described in detail below, and it should be noted that the following description of the embodiment is not intended to limit the preferred order of the embodiment.
The embodiment of the application provides a content interaction method, which can be executed by a terminal or a server, or can be executed by the terminal and the server together; the content interaction method is described as an example executed by a terminal, and specifically executed by a first content interaction device integrated in the terminal. As shown in fig. 2a, the specific flow of the content interaction method may be as follows:
201. Display a content detail page of the client, where the content detail page includes detail information of the target reading content.
The content detail page is a page in the client used for displaying the detail information of the target reading content. It may be a page the user actively opens in the client, or a page opened from a link shared by another associated user. For example, user A receives, in an instant messaging client, a message link shared by user B, with whom user A has an association relationship; when user A clicks the message link, the content detail page of the client is displayed.
In an embodiment, referring to fig. 2b, the content detail page may show the detail information of the target reading content, and if the target reading content cannot be completely displayed on the current content detail page, the content may be shown through a sliding operation or other triggering operations, for example, when a sliding operation for the content detail page is detected, the detail information of the target reading content shown on the content detail page is triggered to be updated.
In an example, the detail information of the target reading content may be detailed image-text information of the target reading content for a user to read.
202. In response to an interactive operation on the target information in the target reading content, display at least one voice interaction information acquisition control.
Through a voice interaction information acquisition control, the user can acquire voice information for interacting with the target information. Each voice interaction information acquisition control may correspond to one preset voice content, and each preset voice content may include voice interaction information with a plurality of sound effects.
For example, referring to fig. 2b, in an embodiment, when an interactive operation on target information in the target reading content is detected, a plurality of voice interaction information acquisition controls are displayed, each corresponding to a preset voice content, such as "java" (representing surprise), "haha" (representing a smile), "kayah" (representing meditation), or "yi?". When a selection operation is detected on the voice interaction information acquisition control corresponding to the preset voice content "java", a page listing a plurality of "java" voice contents is displayed, and one "java" sound effect can be selected as the voice interaction information of the target information.
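The relationship between acquisition controls, preset voice contents, and per-content sound effects can be modelled as a small lookup table. The content labels "java" and "haha" are taken from the (machine-translated) example above; the sound-effect names are assumptions added for illustration.

```python
# One acquisition control per preset voice content; each preset content
# offers several sound-effect variants of the same clip.
PRESET_VOICE_CONTENTS = {
    "java": ["male", "female", "cartoon"],   # surprise
    "haha": ["male", "female", "cartoon"],   # smile / laughter
}

def list_effects(content):
    """Sound effects shown on the sound effect selection page for the
    acquisition control of one preset voice content."""
    return PRESET_VOICE_CONTENTS.get(content, [])
```

Selecting one entry from `list_effects(...)` corresponds to the "one sound effect selection control corresponds to voice interaction information of the preset voice content under one sound effect" relationship described for the apparatus.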
The voice interaction information acquisition controls may further include a voice interaction information recording control, through which a custom voice of the user can be recorded as the voice interaction information of the target information.
In one embodiment, "displaying at least one voice interaction information acquisition control in response to an interaction operation for target information in the target reading content" may include:
responding to the interactive operation aiming at the target information in the target reading content, and displaying an operation control list aiming at the target information on a content detail page, wherein the operation control list comprises a voice interactive information adding control;
and responding to the triggering operation of adding the control aiming at the voice interaction information, and displaying at least one voice interaction information acquisition control.
The interactive operation may be a long press on the target information in the target reading content, a sliding selection of the target information in the target reading content, or the like.
203. And responding to the voice interaction information acquisition operation aiming at the target voice interaction information acquisition control, acquiring target voice interaction information aiming at the target information, and setting the target voice interaction information as shared voice interaction information corresponding to the target information in the target reading content, wherein the shared voice interaction information of the target reading content is used for playing the shared voice interaction information corresponding to the target information when a user of the second terminal browses the target information of the target reading content.
The target voice interaction information is used for interacting with the target information, target voice interaction information with different sound effects of preset voice content can be preset, and target voice interaction information corresponding to the preset voice content can be recorded by a user.
In an embodiment, the voice interaction information acquisition controls include voice interaction information selection controls, each corresponding to a piece of preset voice content, and "acquiring the target voice interaction information for the target information in response to the voice interaction information acquisition operation for the target voice interaction information acquisition control" may include:
and responding to the voice interaction information acquisition operation aiming at the target voice interaction information selection control, acquiring voice interaction information corresponding to the preset voice content of the target voice interaction information selection control, and taking the acquired voice interaction information as the target voice interaction information of the target information.
For example, in an embodiment, referring to fig. 2c, take a scene in which a user reads an article (the target reading content) at the client. The content of the article may be displayed on the content detail page of the client. While reading, the user may press a certain text position in the article, for example, with a finger; the selected text is identified, and an option containing "magic sound" pops up at the text position for the user to interact with the text. Clicking "magic sound" allows selecting an existing magic sound type: the action menu displays the existing sound type options and a custom option. For example, the sound type options may include "wa" representing surprise, "haha" representing laughter, "ka" representing meditation, and "yi?" representing curiosity. When a selection operation for one sound type option is detected, the audio resource category corresponding to that sound type option is selected from the server, and the interactive audio corresponding to the selected text is determined from that audio resource category.
As shown in fig. 2d, the audio resource categories stored in the server may each correspond to an audio resource directory for a sound type option, such as directory 1, directory 2, directory 3, and so on, where audio 1-1, audio 1-2, audio 1-3, and so on are stored in directory 1; audio 2-1, audio 2-2, audio 2-3, and so on in directory 2; and audio 3-1, audio 3-2, audio 3-3, and so on in directory 3.
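The directory layout described above can be modeled as a simple mapping from sound type options to audio resource directories. A minimal sketch, assuming hypothetical names (the option-to-directory assignment and resource ids are illustrative, not specified by the patent):

```python
# Hypothetical sketch of the server-side audio resource catalog: each sound
# type option maps to a directory holding several audio resources.
AUDIO_CATALOG = {
    "wa":   ["audio 1-1", "audio 1-2", "audio 1-3"],  # directory 1
    "haha": ["audio 2-1", "audio 2-2", "audio 2-3"],  # directory 2
    "yi?":  ["audio 3-1", "audio 3-2", "audio 3-3"],  # directory 3
}

def list_audio_resources(sound_type: str) -> list:
    """Return the audio resources stored under the directory for a sound type."""
    return AUDIO_CATALOG.get(sound_type, [])
```

Selecting a sound type option would then narrow the choice of interactive audio to the resources of one directory.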
After an existing magic sound type is selected, instead of using the audio stored in the server as the interactive audio, the user's own voice corresponding to that magic sound type can be recorded as the interactive audio of the selected text; for example, the record button can be pressed on the page where the existing magic sound type is selected, and the user's voice corresponding to the magic sound type is recorded as the alternative interactive audio.
And the user-defined voice of the user can be recorded and used as the interactive audio of the selected image and text.
In an embodiment, the detailed "responding to the voice interaction information obtaining operation for the target voice interaction information selection control, obtaining the voice interaction information corresponding to the preset voice content of the target voice interaction information selection control, and using the obtained voice interaction information as the target voice interaction information of the target information" may include:
responding to the selection operation aiming at the target voice interaction information selection control, and displaying a sound effect selection page of preset voice content corresponding to the target voice interaction information selection control, wherein the sound effect selection page comprises at least two sound effect selection controls of the preset voice content, and one sound effect selection control corresponds to the voice interaction information of the preset voice content under one sound effect;
and responding to the selected operation aiming at the target sound effect selection control, and acquiring the voice interaction information corresponding to the target sound effect selection control as the target voice interaction information of the target information.
A piece of preset voice content may correspond to voice interaction information in a plurality of different sound effects. For example, when the preset voice content is "wa", the system provides default "wa" sound effects in different timbres, such as a piano timbre, a violin timbre, a child's voice, an adult male voice, or an adult female voice. When a selection operation by the user on the default piano-timbre "wa" sound effect is detected, the piano-timbre "wa" is acquired as the target voice interaction information of the target information.
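The timbre-variant selection described above can be sketched as a lookup over the default sound effects of one piece of preset voice content. All names and file ids below are hypothetical illustrations:

```python
# Hypothetical sketch: the preset voice content "wa" offered in several
# default timbres; selecting a timbre yields the target voice interaction
# information. Timbre names follow the example in the text; file ids are made up.
WA_SOUND_EFFECTS = {
    "piano": "wa_piano.mp3",
    "violin": "wa_violin.mp3",
    "child": "wa_child.mp3",
    "adult_male": "wa_male.mp3",
    "adult_female": "wa_female.mp3",
}

def select_sound_effect(effects: dict, timbre: str) -> str:
    """Return the sound effect chosen as target voice interaction information."""
    if timbre not in effects:
        raise KeyError("no default sound effect for timbre %r" % timbre)
    return effects[timbre]
```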
In an embodiment, the sound effect selection page further includes a voice recording control, and in response to a selection operation for the target sound effect selection control, the method for acquiring the voice interaction information corresponding to the target sound effect selection control may further include, before the target voice interaction information is used as the target information:
collecting voice information of a user in response to the recording operation aiming at the voice recording control;
and when the voice input operation of the user is finished, the collected voice information is used as newly-added voice interaction information, and a sound effect selection control corresponding to the newly-added voice interaction information is displayed on a sound effect selection page.
In an example, when the end of the user's voice input is detected, the collected voice information is played back; the user may delete the played voice information and record new voice information, or use the played voice information as the newly-added preset voice interaction information.
In an example, for the recording operation of the voice recording control, the user may perform voice recording for multiple times, collect voice information of each user, and when it is detected that the voice input of the user is finished, use the voice information collected each time as newly-added preset voice interaction information.
In an embodiment, after the preset voice interaction information of the target sound effect selection control is used as the target voice interaction information of the target information, the preset voice interaction information is sent to the server corresponding to the client, so that the server is triggered to audit the preset voice interaction information, and when the audit of the preset voice interaction information passes, the preset voice interaction information is used as the shared voice interaction information of the target information.
The shared voice interaction information is the voice interaction information played for other users when they perform a reading operation on the target information, that is, the voice interaction information corresponding to the target information when other users open the content detail page. In other words, for a user performing reading and browsing operations on the target reading content, the shared voice interaction information is played when a reading or browsing operation on the target information is detected.
In an example, after the preset voice interaction information is sent to the server, the server may audit it. For example, referring to fig. 2e, the server may put the received preset voice interaction information into a user audio pool for auditing, checking whether the number of segments in the audio and the number of characters in the text match; for instance, "wa" should have exactly one interruption (or low waveform step) in the voice waveform and a character count of 2. When the preset voice interaction information satisfies these conditions, it passes the automatic audit. The information that passes may then be audited manually, for example by manually checking whether it correctly corresponds to "wa", "haha", "ka", and so on.
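The automatic part of the audit above can be sketched as a simple waveform check. This is an illustrative sketch only: the amplitude threshold, gap length, and the expected character count of 2 are assumptions taken from the example, not a specified algorithm.

```python
def count_interruptions(samples, threshold=0.05, min_gap=3):
    """Count low-energy gaps (interruptions) in a voice waveform.

    A gap is a run of at least `min_gap` consecutive samples whose absolute
    amplitude falls below `threshold`, occurring after a voiced region.
    """
    gaps, run, seen_voice = 0, 0, False
    for s in samples:
        if abs(s) < threshold:
            run += 1
        else:
            if seen_voice and run >= min_gap:
                gaps += 1
            run = 0
            seen_voice = True
    return gaps

def passes_auto_audit(samples, text):
    """Apply the rule sketched in the text: at most one interruption in the
    waveform and a character count matching the expected length (here 2)."""
    return count_interruptions(samples) <= 1 and len(text) == 2
```

A clip passing this check would then go on to the manual audit stage.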
In an embodiment, the acquiring of the target voice interaction information of the target information in response to the voice interaction information acquiring operation of the target voice interaction information acquiring control may include:
responding to voice interaction information recording operation aiming at the voice interaction information recording control, acquiring voice information of a user, and displaying a user-defined voice acquisition page;
when the voice input operation of the user is finished, the collected voice information is used as the user-defined voice interaction information;
displaying a user-defined voice adding control corresponding to the user-defined voice interaction information on a user-defined voice acquisition page;
and responding to the selected operation aiming at the user-defined voice adding control, and taking the user-defined voice interaction information corresponding to the user-defined voice adding control selected by the user as the target voice interaction information of the target information.
In an embodiment, after the customized voice interaction information is used as the target voice interaction information of the target information, the customized voice interaction information is sent to a server corresponding to the client, so that the server is triggered to audit the customized voice interaction information, and when the customized voice interaction information is approved, the customized voice interaction information is used as the shared voice interaction information of the target information.
In an example, after sending the customized voice interaction information to the server, the server may audit the customized voice interaction information, for example, referring to fig. 2e, the server places the received customized voice interaction information into the user audio pool to audit, for example, whether the customized voice interaction information meets a preset audit condition is audited, when the customized voice interaction information meets the preset audit condition, the customized voice interaction information is audited to be passed, and then, the audited customized voice interaction information may be manually audited, for example, whether the customized voice interaction information has sensitive information or not may be manually audited.
In an example, the approved customized voice interaction information can be classified, and the high-quality customized voice interaction information is labeled and then stored in the corresponding audio resource storage area of the server as the voice interaction information preset by the system.
In an embodiment, "setting the target voice interaction information as shared voice interaction information corresponding to target information in the target reading content, where the shared voice interaction information of the target reading content is used for playing the shared voice interaction information corresponding to the target information when the user of the second terminal browses the target information of the target reading content", may include:
and performing content verification on the target voice interaction information, and setting the voice interaction information as the shared voice interaction information corresponding to the target information in the target reading content after the verification is passed.
Target voice interaction information that has not yet been audited can be played only when the setting user interacts with the target information; target voice interaction information that has passed the audit can be played both when that user interacts with the target information and when other users interact with it.
In an embodiment, in detail, "performing content audit on the target voice interaction information, and after the audit is passed, setting the voice interaction information as shared voice interaction information corresponding to the target information in the target reading content" may include:
and sending the content identification information of the target reading content, the content positioning information of the target information and the target voice interaction information to a server corresponding to the client so as to trigger the server to audit the target voice interaction information, and when the target voice interaction information is audited, taking the target voice interaction information as the shared voice interaction information of the target information based on the content identification information of the target reading content, and correspondingly adding the target voice interaction information and the content positioning information of the target information into a shared voice interaction information set of the target reading content.
The shared voice interaction information set comprises target information and the shared voice interaction information corresponding to it, and a mapping relation between the two can be established. According to this mapping relation, when a reading operation by a user on the target information is detected, the shared voice interaction information corresponding to the target information is determined from the shared voice interaction information set and played.
The content positioning information of the target information is any information that can determine the target information from the target reading content, and may include: the starting position and the ending position of the target information in the target reading content, or the target information itself.
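The mapping between content positioning information and shared voice interaction information can be sketched as follows, using start/end offsets as the positioning variant named above (the entry structure and field names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class SharedVoiceEntry:
    """One entry in the shared voice interaction information set: content
    positioning info (start/end offsets within the target reading content)
    plus the audio resources shared for that span."""
    start: int
    end: int
    audio_ids: list = field(default_factory=list)

def find_shared_audio(entries, position):
    """Return the shared audio for the target information covering a reading
    position, per the mapping relation described above."""
    for e in entries:
        if e.start <= position < e.end:
            return e.audio_ids
    return []
```

When a reading operation lands inside a stored span, the corresponding shared voice interaction information is returned for playback.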
In an embodiment, the step of adding the target information and the target voice interaction information to the shared voice interaction information set of the target reading content correspondingly includes:
acquiring content positioning information of the target information, wherein the content positioning information is used for determining the target information from the target reading content;
determining the content contact ratio of the target information and the existing target information in the shared voice interaction information set of the target reading content;
merging the target information and the existing target information into a new piece of target information when their content contact ratio exceeds a preset contact ratio threshold, and determining the content positioning information of the new target information based on the content positioning information of the target information and of the existing target information;
the shared voice interaction information of the target information and the existing target information is used as the shared voice interaction information of the new target information;
and updating the corresponding relation between the content positioning information of the target information and the shared voice interaction information in the shared voice interaction information set based on the content positioning information of the new target information and the shared voice interaction information.
The contact ratio refers to the degree of overlap between selections of target information in the detail information of the target reading content. For example, suppose the detail information of the target content is the poem "The white sun sets behind the mountains; the Yellow River flows into the sea. To see a thousand miles further, climb one more storey." If the target information selected by user A is "the Yellow River flows into the sea" and the target information selected by user B is "The white sun sets behind the mountains; the Yellow River flows into the sea", the overlapping part is "the Yellow River flows into the sea".
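The merge steps above can be sketched with selections represented as (start, end) offsets. Measuring the contact ratio against the shorter selection and the 0.5 threshold are illustrative choices, not fixed by the text:

```python
def overlap_ratio(a, b):
    """Contact ratio of two selections given as (start, end) offsets,
    measured here against the shorter selection (an assumption)."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    inter = max(0, hi - lo)
    shorter = min(a[1] - a[0], b[1] - b[0])
    return inter / shorter if shorter else 0.0

def merge_spans(a, b, threshold=0.5):
    """Merge two selections into new target information when their contact
    ratio exceeds the threshold; the merged positioning info covers both.
    Returns None when the selections stay separate."""
    if overlap_ratio(a, b) > threshold:
        return (min(a[0], b[0]), max(a[1], b[1]))
    return None
```

The shared voice interaction information of both original spans would then be attached to the merged span's positioning info.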
In an embodiment, after the target voice interaction information for the target information is acquired in response to the voice interaction information acquisition operation for the target voice interaction information acquisition control, and the target voice interaction information is set as the shared voice interaction information corresponding to the target information in the target reading content, the method further includes:
responding to the voice interaction information playing operation aiming at the target information, and acquiring the shared voice interaction information corresponding to the target information from the shared voice interaction information of the target reading content;
and playing the shared voice interaction information corresponding to the target information.
If the target information corresponds to shared voice interaction information set by multiple users, multiple pieces of shared voice interaction information corresponding to the target information can be obtained from the shared voice interaction information set and played. For example, the pieces may be played in sequence according to the time at which each was set, or in sequence according to their popularity.
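The two orderings mentioned above can be sketched as follows; the clip dictionary keys (`audio_id`, `set_time`, `popularity`) are hypothetical names for illustration:

```python
# Hypothetical sketch: when one piece of target information has shared voice
# interaction information from multiple users, order the clips either by the
# time they were set (earlier first) or by popularity (higher first).
def playback_order(clips, by="time"):
    """Return audio ids in the order they should be played sequentially."""
    if by == "time":
        ordered = sorted(clips, key=lambda c: c["set_time"])
    elif by == "popularity":
        ordered = sorted(clips, key=lambda c: c["popularity"], reverse=True)
    else:
        raise ValueError("unknown ordering: %r" % by)
    return [c["audio_id"] for c in ordered]
```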
In an example, in the process of playing the multiple pieces of voice interaction information, the playing of the shared voice interaction information corresponding to the target information may be stopped through the playing stop control.
In an embodiment, after the target voice interaction information for the target information is acquired in response to the voice interaction information acquisition operation for the target voice interaction information acquisition control, and the target voice interaction information is set as the shared voice interaction information corresponding to the target information in the target reading content, the method further includes:
displaying voice interaction prompt information corresponding to the target information on the content detail page, wherein the voice interaction prompt information is used for indicating that the target information is set with the target voice interaction information;
responding to voice interaction information reset triggering operation aiming at the target information, and displaying a voice interaction information reset control, wherein the voice interaction information reset control comprises an information reset sub-control corresponding to the target voice interaction information existing in the target information;
responding to the triggering operation of resetting the sub-control for the target information, and displaying at least one voice interaction information acquisition control;
and responding to the voice interaction information acquisition operation aiming at the target voice interaction information acquisition control, acquiring new target voice interaction information aiming at the target information, and updating the shared voice interaction information corresponding to the target information in the shared voice interaction information set by using the new target voice interaction information.
In an example, referring to fig. 2f, when a trigger operation on the mark of the target information on the content detail page is detected, a voice interaction information reset control as shown in fig. 2f may be displayed. The voice interaction information reset control includes a voice interaction information display area for displaying at least one piece of target voice interaction information of the target information. It may further include: an information reset sub-control, used to reset the selected target voice interaction information; an information delete sub-control, used to delete the selected target voice interaction information; and an information add sub-control, used to add new target voice interaction information to the target information.
In an example, after the shared voice interaction information of the target information is determined, a tag may be generated at the position of the target information on the content detail page to indicate the corresponding shared voice interaction information. The mapping relation between the target information at that position and the shared voice interaction information may be stored in the server, and when a reading operation on the target information at that position is detected, the shared voice interaction information is played.
In real life, in a mobile-terminal reading scenario, when a user browses reading content, the user can express preferences for and thoughts about the content through likes and comments; in such a way, the interaction is separated from the reading scene. With this scheme, the user can indicate a degree of preference for the reading content through the dimension of voice while browsing it: during browsing, the user not only reads text visually but can also obtain information aurally, so the stimulation and perception during browsing are no longer purely visual, and a content consumption mode in the new, auditory dimension is added.
Therefore, in the embodiment of the application, the voice interaction information can be marked at the corresponding position of the reading content in the process of browsing the reading content by the user, so that the real-time interaction of the user on the reading content is realized, and the interestingness of content reading is further enhanced.
The embodiment of the application provides a content interaction method, which can be executed by a terminal or a server, or can be executed by the terminal and the server together; the content interaction method in the embodiment of the present application is described as an example executed by a terminal, and specifically, executed by a second content interaction device integrated in the terminal. As shown in fig. 3a, the specific flow of the content interaction method may be as follows:
301. and displaying a content detail page of the client, wherein the content detail page comprises detail information of the target reading content, and the target information in the target reading content is provided with corresponding shared voice interaction information.
The detail information of the target reading content refers to the specific content information of the target reading content and may include target information. One piece of target information may have at least one piece of shared voice interaction information, which may be set by a plurality of users; the shared voice interaction information is the voice information used by users to interact with the target information.
In an example, if the content detail page cannot completely display the detail information of the target reading content, all the detail information of the target reading content may be displayed in a scrolling manner according to the switching operation of the detail information displayed on the current content detail page.
In an embodiment, the shared voice interaction information of the target information may be existing interactive audio provided by the system and selected by the user, for example, system-provided interactive audio of the preset voice content "wa" in a certain timbre; it may also be interactive audio of a preset voice content recorded by the user, or interactive audio of fully custom content recorded by the user.
In an embodiment, before the step "displaying the content detail page of the client", the method may further include:
displaying a content aggregation page, wherein the content aggregation page comprises link information of reading content;
and responding to the triggering operation of the link information of the target reading content in the content aggregation page, and acquiring the detail information of the target reading content, the content positioning information of the target reading content and the shared voice interaction information of the target information.
The content aggregation page may include link information of a plurality of reading contents, through which the user may obtain the detail information of the target reading content.
The content positioning information of the target information and the shared voice interaction information of the target information may be acquired from the server, and the content positioning information of the target information and the shared voice interaction information of the target information may be acquired from the first terminal.
In an example, the obtained content positioning information of the target reading content and the shared voice interaction information of the target information are used for obtaining the target shared voice interaction information corresponding to the target information from the shared voice interaction information when the user browses the target information of the target content, so as to play the target shared voice interaction information corresponding to the target information.
302. And determining the display position of the target information in the target reading content on the terminal screen.
In an example, the display position of the target information on the terminal screen can be determined as follows: acquire the screen resolution and size of the terminal corresponding to the client, determine the layout of the target reading content on the terminal based on that resolution and size, and determine the display position of the target information on the terminal screen based on that layout.
In one embodiment, the display position of the target information on the terminal screen can be determined based on the content positioning information of the target information.
For example, in an embodiment, referring to fig. 3b, take a scene in which a user reads an article at the client. The content of the article may be displayed on the content detail page of the client; while the user reads the article, when the terminal detects that the user is reading a segment that carries a magic sound, the terminal plays the corresponding voice interaction information.
The user's gaze position (x1, y1) can be monitored in real time using a camera. Then the resolution, width, and height of the user's screen are obtained, the layout of the article on the terminal screen is calculated, and the segment carrying the magic sound is converted into a position (x2, y2) on the terminal screen. When (x1, y1) = (x2, y2), the voice interaction information corresponding to that segment is obtained from the server. For example, if the voice interaction information corresponding to the segment is the audio resource "audio 1-2", the audio resource "audio 1-2" is obtained from the server and played.
303. And acquiring the reading position of the user on the terminal screen, and playing the shared voice interaction information corresponding to the target information when the distance between the reading position and the display position meets the preset requirement.
The preset requirement may be that the current reading position and the display position are at the same position on the terminal screen, or that the current reading position and the display position are smaller than a preset distance, and so on.
For example, the shared voice interaction information of the target information may be played when the reading position and the display position are at the same position on the terminal screen, or when the distance between the current reading position and the display position is smaller than a preset distance, for example, 5 character widths.
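The trigger condition of steps 302–303 can be sketched as a simple distance check between the reading (gaze) position and the display position. The Euclidean metric, the units, and the default threshold of 5 are illustrative assumptions following the example above:

```python
def should_play(reading_pos, display_pos, max_distance=5.0):
    """Decide whether to play the shared voice interaction information:
    trigger when the reading position is within `max_distance` of the
    target information's display position (with max_distance=0 this is
    the same-position case)."""
    dx = reading_pos[0] - display_pos[0]
    dy = reading_pos[1] - display_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= max_distance
```

With a threshold of 0 this reproduces the (x1, y1) = (x2, y2) condition of the gaze-tracking example; a positive threshold gives the "smaller than a preset distance" variant.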
In an embodiment, the "acquiring a reading position of a user on a terminal screen, and playing the shared voice interaction information corresponding to the target information when a distance between the reading position and a display position meets a preset requirement" may include:
when the distance between the reading position and the display position meets the preset requirement, if there is only one piece of shared voice interaction information for the target information, that piece of shared voice interaction information is played;
if there are multiple pieces of shared voice interaction information for the target information, a voice interaction information list is displayed at the position of the target information, where the voice interaction information list includes a plurality of voice interaction information playing controls, and each voice interaction information playing control corresponds to one piece of shared voice interaction information;
and in response to a playing operation on a target voice interaction information playing control, the shared voice interaction information corresponding to that playing control is played.
The voice interaction information list may also display a content summary for each piece of voice interaction information, so that the user can select a preferred piece to play based on the summary.
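The single-versus-multiple dispatch above might look like this (a sketch; the clip dictionary shape and the `summary` field are assumptions):

```python
def on_target_reached(shared_clips):
    """Play directly when there is one clip; otherwise build a list of
    play controls, one per clip, annotated with its content summary."""
    if len(shared_clips) == 1:
        return ("play", shared_clips[0]["audio"])
    controls = [{"control_id": i, "summary": c["summary"]}
                for i, c in enumerate(shared_clips)]
    return ("show_list", controls)
```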
In an embodiment, the content detail page further includes a voice interaction information automatic playing setting control, and before determining a display position of target information in the target reading content on the terminal screen, the method further includes:
in response to an automatic-play enabling operation on the voice interaction information automatic playing setting control, setting the playing mode of the voice interaction information of the content detail page to an automatic playing mode, and executing the step of determining the display position of the target information in the target reading content on the terminal screen;
and in response to an automatic-play disabling operation on the voice interaction information automatic playing setting control, setting the playing mode of the voice interaction information of the content detail page to a non-automatic playing mode, and not executing the step of determining the display position of the target information in the target reading content on the terminal screen.
In an example, when a trigger operation performed by the user on the target information is detected, the voice interaction information automatic playing setting control may be displayed. In response to an automatic-play enabling operation on this control, the playing mode of the voice interaction information of the content detail page is set to the automatic playing mode, the voice interaction information corresponding to the target information is played, and real-time monitoring of the user's reading position begins.
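The autoplay setting can be sketched as a small state holder (class and attribute names are assumed for illustration):

```python
class ContentDetailPage:
    """Holds the play mode controlled by the autoplay setting control."""

    def __init__(self):
        self.play_mode = "manual"
        self.position_step_executed = False

    def on_autoplay_setting(self, enabled):
        if enabled:
            self.play_mode = "auto"
            # the step of determining the display position is executed
            self.position_step_executed = True
        else:
            self.play_mode = "manual"
            # in non-automatic mode, the display-position step is skipped
            self.position_step_executed = False
```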
In an embodiment, the content detail page further includes a voice playing control set corresponding to each target information, a reading position of the user on the terminal screen is obtained, and when a distance between the reading position and the display position meets a preset requirement, after the shared voice interaction information corresponding to the target information is played, the method further includes:
and stopping playing the voice interaction information of the target information in response to the playing stopping operation aiming at the voice playing control.
In an example, after playback of the voice interaction information of the target information has been stopped, when a playback resuming operation on the voice playing control is detected, playback of the voice interaction information of the target information is resumed.
In one embodiment, the voice interaction information of the target information may be voice recommendation information for recommending content that may be of interest to the user, such as real-time information, popular movies, and the like.
In an embodiment, a voice guidance language can be added to the learning manual of each novice education scene to serve as voice interaction information, so that a user can be better guided to learn.
Therefore, when the user browses reading content marked with voice interaction information and the current reading position of the user is detected to be a position marked with voice interaction information, the corresponding voice interaction information is played; real-time interaction of the user with the reading content can thus be realized, further enhancing the interest of content reading.
Based on the above description, the content interaction method of the present application will be further described below by way of example. Referring to fig. 4, a content interaction method may specifically include the following processes:
401. and displaying a content detail page of the client at the first terminal, wherein the content detail page comprises detail information of the target reading content.
In an embodiment, when a user reads target reading content, the user may specifically perform reading operation on a content detail page, and the detail information of the target reading content may include image content, audio content, and other information.
402. And the first terminal responds to the interactive operation aiming at the target information in the target reading content and displays at least one voice interactive information acquisition control.
In an embodiment, after user A long-presses the content detail page to call out an operation option menu for the current position and selects the magic sound function, the system pops up options representing magic sounds of different content types. After the user selects one of the content types (such as "java"), an adding interface corresponding to that content type is entered. The system has several built-in "java" default sound effects of different timbres; user A can directly select a default sound effect, or press the microphone recording button to record and generate a personal sound effect, which can then be added to the page.
403. The first terminal responds to the voice interaction information acquisition operation aiming at the target voice interaction information acquisition control, acquires target voice interaction information aiming at the target information, and sets the target voice interaction information as shared voice interaction information corresponding to the target information in the target reading content.
Audio recorded by a user that has not yet been audited can be played only by that user; audited audio can serve as public audio playable by other users. The audio may be audited by the server or manually: for example, the server can check whether the number of syllables and the number of characters of the recorded audio meet preset requirements, while manual review can check whether the recorded audio contains sensitive information. Audited audio can be classified, high-quality audio can be labeled, and labeled audio can be used as new voice interaction information preset by the system.
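The automatic part of the audit could be sketched as below (the thresholds and field names are illustrative assumptions; the sensitive-information check remains manual and is not modeled here):

```python
MIN_SYLLABLES = 1     # assumed lower bound on syllable count
MAX_CHARACTERS = 200  # assumed upper bound on character count

def server_audit(recording):
    """Automatic server-side check: syllable and character counts must
    meet the preset requirements before the clip goes to manual review."""
    return (recording["syllables"] >= MIN_SYLLABLES
            and recording["characters"] <= MAX_CHARACTERS)
```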
The target reading content and its shared voice interaction information set can be stored in a server corresponding to the client. Alternatively, they can be stored in the first terminal; when an acquisition request initiated by the second terminal for the target reading content and its shared voice interaction information set is detected, the target reading content and its shared voice interaction information set are sent to the second terminal.
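One possible shape for the shared voice interaction information set, keyed by content identification and content positioning information (all key names and the handler are illustrative assumptions):

```python
shared_voice_set = {
    "article-001": {                    # content identification information
        "para3:char15": ["audio 1-2"],  # content positioning -> shared clips
    }
}

def handle_acquisition_request(content_id):
    """What could be returned when a second terminal requests the target
    reading content together with its shared voice interaction set."""
    return {
        "content_id": content_id,
        "shared_voice": shared_voice_set.get(content_id, {}),
    }
```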
404. And displaying a content aggregation page at the second terminal, wherein the content aggregation page comprises link information of the reading content.
The content aggregation page and the content detail page may belong to different clients, and the content aggregation page may further include other information, for example, sharing link information of other content, and so on.
405. The second terminal responds to the triggering operation of the link information of the target reading content in the content aggregation page, and acquires the detail information of the target reading content, the content positioning information of the target reading content and the shared voice interaction information of the target information.
The second terminal obtains the detail information of the target reading content, the content positioning information of the target reading content, and the shared voice interaction information of the target information so that, when a user of the second terminal browses the target reading content, the corresponding target shared voice interaction information can be determined from the shared voice interaction information and played.
It is apparent that a content aggregation page including link information of the read content may also be displayed on other terminals.
406. And displaying a content detail page of the client at the second terminal, wherein the content detail page comprises detail information of the target reading content, and the target information in the target reading content is provided with corresponding shared voice interaction information.
In an embodiment, user B opens a text content detail page carrying a voice interaction information identifier (shared by user A) and starts reading the article. The reading position is located through a human eye recognition function on the mobile device, and when the user's gaze reaches a text segment with a voice identifier or its vicinity, playback of the identified voice is triggered, so that user B receives user A's interaction voice at exactly the corresponding text segment.
The image-text content with a voice identifier viewed by user B may be, but is not limited to, content shared by user A.
407. The second terminal determines the display position of the target information in the target reading content on the terminal screen.
In an embodiment, when the user reads image-text content marked with voice interaction information, the user's gaze may be monitored in real time by the device; for example, the camera of the second terminal may monitor the user's gaze position in real time, and when the user is detected browsing a position marked with voice interaction information, the corresponding shared voice interaction information is played.
408. And the second terminal acquires the reading position of the user on the terminal screen, and plays the shared voice interaction information corresponding to the target information when the distance between the reading position and the display position meets the preset requirement.
In an example, a certain distance difference may exist between the reading position and the display position, and when the distance difference between the reading position and the display position is detected to be within a preset distance, the shared voice interaction information of the target information is played.
In an embodiment, in the process of playing the shared voice interaction information of the target information, when it is detected that the distance between the current reading position and the display position of the user is greater than the preset distance, the playing of the shared voice interaction information of the target information is stopped.
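The play-near/stop-on-departure behavior of the two preceding paragraphs can be sketched as a small state transition (offsets in character elements; names assumed):

```python
def next_playback_state(state, reading_pos, display_pos, preset_distance=5):
    """Start playing near the target; stop once the reader moves away."""
    near = abs(reading_pos - display_pos) <= preset_distance
    if state == "playing" and not near:
        return "stopped"  # reader moved beyond the preset distance
    if state == "idle" and near:
        return "playing"
    return state
```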
In the existing process of browsing reading content, users interact with the content through likes and comments. However, likes and comments are only text-and-image information and offer no richer forms of content identification; moreover, they generally occur before or after the content is browsed, staggered on the timeline, so browsing and interaction cannot proceed simultaneously. Some wonderful passages may be forgotten, and interaction in the comment area can hardly reproduce the resonance felt at the moment of reading. Through this application, a voice identification mode beyond text and image identifiers is provided for the reading content, giving the user a new auditory reading experience; interaction and content browsing can proceed simultaneously, making interaction more timely and natural.
Therefore, in the embodiment of the application, the voice interaction information is marked at the position corresponding to the reading content in the process that the user browses the reading content, so that the real-time interaction of the user on the reading content is realized, and when the reading content is browsed again in the subsequent process, when the current reading position of the user is detected to be the position marked with the voice interaction information, the corresponding voice interaction information is played, so that the interestingness of content reading is enhanced.
In order to better implement the above method, correspondingly, the embodiment of the present application further provides a content interaction apparatus (i.e. a first content interaction apparatus), wherein the first content interaction apparatus may be specifically integrated in a terminal, and referring to fig. 5a, the content interaction apparatus may include a first detail page display unit 501, an information acquisition control display unit 502, and a first acquisition unit 503, as follows:
(1) a first details page display unit 501;
a first detail page display unit 501, configured to display a content detail page of the client, where the content detail page includes detail information of the target reading content.
(2) An information acquisition control display unit 502;
an information obtaining control display unit 502, configured to respond to an interaction operation for target information in the target reading content, and display at least one voice interaction information obtaining control.
In an embodiment, as shown in fig. 5b, the information obtaining control displaying unit 502 includes:
a control list display subunit 5021, configured to display an operation control list for the target information on the content detail page in response to an interaction operation for the target information in the target reading content, where the operation control list includes a voice interaction information addition control;
and the control display subunit 5022 is configured to display at least one voice interaction information acquisition control in response to a trigger operation for adding a control to voice interaction information.
(3) A first acquisition unit 503;
the first obtaining unit 503 is configured to, in response to a voice interaction information obtaining operation for the target voice interaction information obtaining control, obtain target voice interaction information for target information, and set the target voice interaction information as shared voice interaction information corresponding to the target information in target reading content, where the shared voice interaction information of the target reading content is used to play the shared voice interaction information corresponding to the target information when a user of the second terminal browses the target information of the target reading content.
In an embodiment, as shown in fig. 5c, the first obtaining unit 503 includes:
the first obtaining sub-unit 5031, configured to, in response to a voice interaction information obtaining operation for the target voice interaction information selection control, obtain voice interaction information corresponding to preset voice content of the target voice interaction information selection control, and use the obtained voice interaction information as target voice interaction information of the target information.
In an embodiment, the first obtaining subunit 5031 is further configured to, in response to a selection operation on the target voice interaction information selection control, display a sound effect selection page of preset voice content corresponding to the target voice interaction information selection control, where the sound effect selection page includes at least two sound effect selection controls of the preset voice content, and one sound effect selection control corresponds to voice interaction information of the preset voice content under one sound effect; and responding to the selected operation aiming at the target sound effect selection control, and acquiring the voice interaction information corresponding to the target sound effect selection control as the target voice interaction information of the target information.
In an embodiment, as shown in fig. 5c, the first obtaining unit 503 includes:
a first collecting subunit 5032, configured to collect, in response to a voice interaction information recording operation for the voice interaction information recording control, voice information of a user, and display a custom voice obtaining page;
a voice input sub-unit 5033 configured to take the collected voice information as the customized voice interaction information when the voice input operation of the user is finished;
an add control display subunit 5034, configured to display a custom voice add control corresponding to the custom voice interaction information on the custom voice acquisition page;
the first selecting operation subunit 5035 is configured to, in response to the selecting operation for the customized voice adding control, take the customized voice interaction information corresponding to the customized voice adding control selected by the user as the target voice interaction information of the target information.
In an embodiment, as shown in fig. 5c, the first obtaining unit 503 includes:
the auditing subunit 5036 is configured to perform content auditing on the target voice interaction information, and after the auditing is passed, set the voice interaction information as shared voice interaction information corresponding to the target information in the target reading content.
In an embodiment, the auditing subunit 5036 is further configured to send the content identification information of the target reading content, the content positioning information of the target information, and the target voice interaction information to a server corresponding to the client, so as to trigger the server to audit the target voice interaction information, and when the target voice interaction information is audited, based on the content identification information of the target reading content, the target voice interaction information is used as the shared voice interaction information of the target information, and is added to the shared voice interaction information set of the target reading content in correspondence with the content positioning information of the target information.
In one embodiment, the auditing subunit 5036 is further configured to obtain content positioning information of the target information, where the content positioning information is used to determine the target information from the target reading content; determining the content contact ratio of the target information and the existing target information in the shared voice interaction information set of the target reading content; merging the target information with the content contact ratio exceeding a preset contact ratio threshold value and the existing target information into a new target information, and determining the content positioning information of the new target information based on the content positioning information of the target information and the content positioning information of the existing target information; the shared voice interaction information of the target information and the existing target information is used as the shared voice interaction information of the new target information; and updating the corresponding relation between the content positioning information of the target information and the shared voice interaction information in the shared voice interaction information set based on the content positioning information of the new target information and the shared voice interaction information.
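The coincidence-based merging performed by the auditing subunit might be computed over character ranges like this (the ratio definition and the 0.5 threshold are assumptions; the patent leaves them unspecified):

```python
def coincidence(a, b):
    """Content coincidence of two half-open character ranges [start, end)."""
    overlap = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return overlap / min(a[1] - a[0], b[1] - b[0])

def merge_targets(new, existing, new_voices, existing_voices, threshold=0.5):
    """Merge when coincidence exceeds the threshold, combining the
    positioning information and the shared voice interaction sets."""
    if coincidence(new, existing) <= threshold:
        return None
    merged_range = (min(new[0], existing[0]), max(new[1], existing[1]))
    return merged_range, new_voices + existing_voices
```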
In one embodiment, the content interaction apparatus further includes:
a second collecting subunit 504, configured to collect voice information of the user in response to a recording operation for the voice recording control;
the voice interaction adding subunit 505 is configured to, when the voice input operation of the user is finished, take the collected voice information as new voice interaction information, and display a sound effect selection control corresponding to the new voice interaction information on the sound effect selection page.
in one embodiment, the content interaction apparatus further includes:
a second obtaining unit 506, configured to, in response to a voice interaction information playing operation for the target information, obtain shared voice interaction information corresponding to the target information from the shared voice interaction information of the target reading content;
the first playing unit 507 is configured to play the shared voice interaction information corresponding to the target information.
As can be seen from the above, the first detail page display unit 501 of the first content interaction device in the embodiment of the present application displays the content detail page of the client, where the content detail page includes detail information of the target reading content; then, the information acquisition control display unit 502 responds to the interaction operation aiming at the target information in the target reading content to display at least one voice interaction information acquisition control; the first obtaining unit 503 obtains target voice interaction information for the target information in response to the voice interaction information obtaining operation for the target voice interaction information obtaining control, and sets the target voice interaction information as shared voice interaction information corresponding to the target information in the target reading content, where the shared voice interaction information of the target reading content is used to play the shared voice interaction information corresponding to the target information when the user of the second terminal browses the target information of the target reading content. According to the scheme, the voice interaction information is marked at the corresponding position of the reading content in the process of browsing the reading content by the user, so that the real-time interaction of the user on the reading content is realized, and the interestingness of content reading is further enhanced.
In order to better implement the above method, correspondingly, the embodiment of the present application further provides another content interaction device (i.e. a second content interaction device), wherein the second content interaction device may be specifically integrated in the terminal, and referring to fig. 6a, the second content interaction device may include a second detail page display unit 601, a position determination unit 602, and a fourth obtaining unit 603, as follows:
(1) a second detail page display unit 601;
the second detail page display unit 601 is configured to display a content detail page of the client, where the content detail page includes detail information of target reading content, and the target information in the target reading content is provided with corresponding shared voice interaction information.
(2) A position determination unit 602;
a position determining unit 602, configured to determine a display position of target information in the target reading content on the terminal screen.
(3) A fourth acquisition unit 603;
a fourth obtaining unit 603, configured to obtain a reading position of the user on the terminal screen, and when a distance between the reading position and the display position meets a preset requirement, play shared voice interaction information corresponding to the target information.
In an embodiment, as shown in fig. 6b, the fourth obtaining unit 603 includes:
a first playing sub-unit 6031, configured to, when the distance between the reading position and the display position meets a preset requirement, if the shared voice interaction information of the target information is one, play the shared voice interaction information of the target information;
an information list display subunit 6032, configured to display, if there are multiple pieces of shared voice interaction information of the target information, a voice interaction information list at the position of the target information, where the voice interaction information list includes multiple voice interaction information playing controls, and one voice interaction information playing control corresponds to one piece of shared voice interaction information;
and a second playing sub-unit 6033, configured to, in response to a playing operation for the target voice interaction information playing control, play the shared voice interaction information corresponding to the shared voice interaction information playing control corresponding to the target information.
In one embodiment, the content interaction apparatus further includes:
a first setting unit 604, configured to set a play mode of the voice interaction information on the content detail page to an automatic play mode in response to a voice automatic play start operation for the voice interaction information automatic play setting control, and perform a step of determining a display position of target information in the target reading content on a terminal screen;
the second setting unit 605 is configured to set the play mode of the voice interaction information of the content detail page to the non-automatic play mode in response to a voice automatic play closing operation for the voice interaction information automatic play setting control, and not perform the step of determining the display position of the target information in the target reading content on the terminal screen.
As can be seen from the above, the second detail page display unit 601 of the second content interaction device in the embodiment of the present application displays the content detail page of the client, where the content detail page includes detail information of the target reading content, and the target information in the target reading content is provided with corresponding shared voice interaction information; then, the position determining unit 602 determines the display position of the target information in the target reading content on the terminal screen; the fourth obtaining unit 603 obtains the reading position of the user on the terminal screen, and plays the shared voice interaction information corresponding to the target information when the distance between the reading position and the display position meets the preset requirement. According to this scheme, when the user browses reading content marked with voice interaction information and the current reading position of the user is detected to be a position marked with voice interaction information, the corresponding voice interaction information is played; real-time interaction of the user with the reading content can thus be realized, further enhancing the interest of content reading.
In addition, an embodiment of the present application further provides a computer device, where the computer device may be a device such as a terminal or a server, and as shown in fig. 7, a schematic structural diagram of the computer device according to the embodiment of the present application is shown, specifically:
the computer device may include components such as a processor 701 of one or more processing cores, memory 702 of one or more storage media, a power supply 703, and an input unit 704. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 7 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. Wherein:
the processor 701 is a control center of the computer apparatus, connects various parts of the entire computer apparatus using various interfaces and lines, and performs various functions of the computer apparatus and processes data by running or executing software programs and/or modules stored in the memory 702 and calling data stored in the memory 702, thereby monitoring the computer apparatus as a whole. Optionally, processor 701 may include one or more processing cores; preferably, the processor 701 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 701.
The memory 702 may be used to store software programs and modules, and the processor 701 executes various functional applications and data processing by operating the software programs and modules stored in the memory 702. The memory 702 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the computer device, and the like. Further, the memory 702 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 702 may also include a memory controller to provide the processor 701 with access to the memory 702.
The computer device further includes a power supply 703 for supplying power to the various components, and preferably, the power supply 703 is logically connected to the processor 701 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system. The power supply 703 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The computer device may also include an input unit 704, the input unit 704 being operable to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 701 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 702 according to the following instructions, and the processor 701 runs the application program stored in the memory 702, thereby implementing various functions as follows:
displaying a content detail page of the client, the content detail page including detail information of target reading content; in response to an interaction operation on target information in the target reading content, displaying at least one voice interaction information acquisition control; and in response to a voice interaction information acquisition operation on a target voice interaction information acquisition control, acquiring target voice interaction information for the target information and setting it as the shared voice interaction information corresponding to the target information in the target reading content, wherein the shared voice interaction information of the target reading content is used to play the shared voice interaction information corresponding to the target information when a user of the second terminal browses the target information of the target reading content;
or
displaying a content detail page of the client, the content detail page including detail information of target reading content, where target information in the target reading content has corresponding shared voice interaction information; determining the display position, on the terminal screen, of the target information in the target reading content; and acquiring the user's reading position on the terminal screen, and playing the shared voice interaction information corresponding to the target information when the distance between the reading position and the display position meets a preset requirement.
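As a rough illustration of the first-terminal flow described above, the shared voice interaction information can be modeled as audio clips keyed by content identification information and the positioning information of the target information, so that a second terminal browsing the same position retrieves the same clips. All class, field, and method names here are hypothetical sketches, not part of the disclosed implementation:

```python
# Minimal sketch: a user selects target information in the reading content,
# records (or picks) a voice clip, and the clip is registered as shared
# voice interaction information for that position. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class VoiceClip:
    audio: bytes          # recorded or preset voice data
    effect: str = "none"  # optional sound effect applied to the clip

@dataclass
class SharedVoiceStore:
    # content_id -> {positioning info of target information -> list of clips}
    contents: dict = field(default_factory=dict)

    def add(self, content_id: str, position: tuple, clip: VoiceClip):
        """Set a clip as shared voice interaction information for the
        target information located at `position` in `content_id`."""
        per_content = self.contents.setdefault(content_id, {})
        per_content.setdefault(position, []).append(clip)

    def clips_at(self, content_id: str, position: tuple):
        """Clips a second terminal would play when browsing this position."""
        return self.contents.get(content_id, {}).get(position, [])

store = SharedVoiceStore()
store.add("article-42", (3, 10, 18), VoiceClip(b"...", effect="robot"))
print(len(store.clips_at("article-42", (3, 10, 18))))  # 1
```

In practice the store would live on the server described in the audit step, with the client sending the content identification, positioning information, and audio upload together.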
Thus, in the embodiments of this application, voice interaction information is attached at a chosen position in the reading content while a user browses it, enabling real-time interaction with the content; when the content is browsed again later and the user's current reading position is detected to be a position marked with voice interaction information, the corresponding voice interaction information is played, which makes content reading more engaging.
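The playback trigger on the second terminal reduces to a distance test between the user's reading position and the on-screen display position of the target information. A minimal sketch, assuming pixel coordinates and a Euclidean distance with a hypothetical threshold (the "preset requirement" in the text is not specified further):

```python
# Play the shared voice interaction information when the distance between
# the user's current reading position and the display position of the
# target information falls within a preset threshold. The 40 px threshold
# and coordinate convention are illustrative assumptions.
import math

def should_play(reading_pos, display_pos, threshold_px=40.0):
    """True when the reading position is close enough to the target
    information's on-screen display position."""
    dx = reading_pos[0] - display_pos[0]
    dy = reading_pos[1] - display_pos[1]
    return math.hypot(dx, dy) <= threshold_px

print(should_play((100, 210), (110, 230)))  # True (distance ~22.4 px)
print(should_play((100, 210), (300, 400)))  # False
```

The reading position itself could come from scroll tracking or gaze estimation; either way, only the distance check above gates playback.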
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware; the instructions may be stored in a storage medium and loaded and executed by a processor.
To this end, the present application provides a storage medium storing a plurality of instructions that can be loaded by a processor to execute the steps of any content interaction method provided in the present application. For example, the instructions may perform the following steps:
displaying a content detail page of the client, the content detail page including detail information of target reading content; in response to an interaction operation on target information in the target reading content, displaying at least one voice interaction information acquisition control; and in response to a voice interaction information acquisition operation on a target voice interaction information acquisition control, acquiring target voice interaction information for the target information and setting it as the shared voice interaction information corresponding to the target information in the target reading content, wherein the shared voice interaction information of the target reading content is used to play the shared voice interaction information corresponding to the target information when a user of the second terminal browses the target information of the target reading content;
or
displaying a content detail page of the client, the content detail page including detail information of target reading content, where target information in the target reading content has corresponding shared voice interaction information; determining the display position, on the terminal screen, of the target information in the target reading content; and acquiring the user's reading position on the terminal screen, and playing the shared voice interaction information corresponding to the target information when the distance between the reading position and the display position meets a preset requirement.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Since the instructions stored in the storage medium can execute the steps of any content interaction method provided in the embodiments of the present application, they can achieve the beneficial effects of any such method; for details, see the foregoing embodiments, which are not repeated here.
According to one aspect of the application, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the content interaction method provided in the above embodiments.
The content interaction method, apparatus, computer device, and storage medium provided in the embodiments of the present application have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present application; the description of the above embodiments is intended only to help in understanding the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present application, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (15)

1. A content interaction method, applied to a first terminal, the method comprising:
displaying a content detail page of a client, the content detail page comprising detail information of target reading content;
in response to an interaction operation on target information in the target reading content, displaying at least one voice interaction information acquisition control; and
in response to a voice interaction information acquisition operation on a target voice interaction information acquisition control, acquiring target voice interaction information for the target information, and setting the target voice interaction information as shared voice interaction information corresponding to the target information in the target reading content, wherein the shared voice interaction information of the target reading content is used to play the shared voice interaction information corresponding to the target information when a user of a second terminal browses the target information of the target reading content.
2. The method according to claim 1, wherein the voice interaction information acquisition controls comprise voice interaction information selection controls, each voice interaction information selection control corresponding to one piece of preset voice content; and
the acquiring target voice interaction information for the target information in response to a voice interaction information acquisition operation on a target voice interaction information acquisition control comprises:
in response to a voice interaction information acquisition operation on a target voice interaction information selection control, acquiring the voice interaction information corresponding to the preset voice content of the target voice interaction information selection control, and using the acquired voice interaction information as the target voice interaction information of the target information.
3. The method according to claim 2, wherein the acquiring, in response to a voice interaction information acquisition operation on a target voice interaction information selection control, the voice interaction information corresponding to the preset voice content of the target voice interaction information selection control, and using the acquired voice interaction information as the target voice interaction information of the target information comprises:
in response to a selection operation on the target voice interaction information selection control, displaying a sound effect selection page for the preset voice content corresponding to the target voice interaction information selection control, wherein the sound effect selection page comprises at least two sound effect selection controls for the preset voice content, each sound effect selection control corresponding to the voice interaction information of the preset voice content under one sound effect; and
in response to a selection operation on a target sound effect selection control, acquiring the voice interaction information corresponding to the target sound effect selection control as the target voice interaction information of the target information.
4. The method according to claim 3, wherein the sound effect selection page further comprises a voice recording control; and
before the acquiring, in response to a selection operation on the target sound effect selection control, the voice interaction information corresponding to the target sound effect selection control as the target voice interaction information of the target information, the method further comprises:
in response to a recording operation on the voice recording control, collecting voice information of the user; and
when the user's voice input operation is finished, using the collected voice information as newly added voice interaction information, and displaying a sound effect selection control corresponding to the newly added voice interaction information on the sound effect selection page.
5. The method according to claim 1, wherein the voice interaction information acquisition control comprises a voice interaction information recording control; and
the acquiring target voice interaction information of the target information in response to a voice interaction information acquisition operation on a target voice interaction information acquisition control comprises:
in response to a voice interaction information recording operation on the voice interaction information recording control, collecting voice information of the user and displaying a user-defined voice acquisition page;
when the user's voice input operation is finished, using the collected voice information as user-defined voice interaction information;
displaying, on the user-defined voice acquisition page, a user-defined voice adding control corresponding to the user-defined voice interaction information; and
in response to a selection operation on a user-defined voice adding control, using the user-defined voice interaction information corresponding to the user-defined voice adding control selected by the user as the target voice interaction information of the target information.
6. The method according to claim 1, wherein the displaying at least one voice interaction information acquisition control in response to an interaction operation on target information in the target reading content comprises:
in response to an interaction operation on target information in the target reading content, displaying, on the content detail page, an operation control list for the target information, the operation control list comprising a voice interaction information adding control; and
in response to a trigger operation on the voice interaction information adding control, displaying at least one voice interaction information acquisition control.
7. The method according to claim 1, wherein the setting the target voice interaction information as shared voice interaction information corresponding to the target information in the target reading content comprises:
performing a content audit on the target voice interaction information, and after the content audit is passed, setting the target voice interaction information as the shared voice interaction information corresponding to the target information in the target reading content.
8. The method according to claim 7, wherein the performing a content audit on the target voice interaction information, and after the content audit is passed, setting the target voice interaction information as the shared voice interaction information corresponding to the target information in the target reading content comprises:
sending the content identification information of the target reading content, the content positioning information of the target information, and the target voice interaction information to a server corresponding to the client, to trigger the server to audit the target voice interaction information and, when the target voice interaction information passes the audit, to use the target voice interaction information as the shared voice interaction information of the target information and, based on the content identification information of the target reading content, correspondingly add the target voice interaction information and the content positioning information of the target information to a shared voice interaction information set of the target reading content.
9. The method according to claim 8, wherein after the setting the target voice interaction information as the shared voice interaction information corresponding to the target information in the target reading content, the method further comprises:
acquiring content positioning information of the target information, the content positioning information being used to locate the target information within the target reading content;
determining a content overlap ratio between the target information and existing target information in the shared voice interaction information set of the target reading content;
merging the target information and the existing target information whose content overlap ratio exceeds a preset overlap threshold into new target information, and determining content positioning information of the new target information based on the content positioning information of the target information and of the existing target information;
using the shared voice interaction information of the target information and of the existing target information as the shared voice interaction information of the new target information; and
updating, based on the content positioning information and the shared voice interaction information of the new target information, the correspondence between content positioning information of target information and shared voice interaction information in the shared voice interaction information set.
10. The method according to claim 1, wherein after the acquiring target voice interaction information for the target information in response to a voice interaction information acquisition operation on a target voice interaction information acquisition control and setting the target voice interaction information as the shared voice interaction information corresponding to the target information in the target reading content, the method further comprises:
in response to a voice interaction information playing operation on the target information, acquiring, from the shared voice interaction information of the target reading content, the shared voice interaction information corresponding to the target information; and
playing the shared voice interaction information corresponding to the target information.
11. A content interaction method, applied to a second terminal, the method comprising:
displaying a content detail page of a client, the content detail page comprising detail information of target reading content, target information in the target reading content having corresponding shared voice interaction information;
determining a display position, on a terminal screen, of the target information in the target reading content; and
acquiring a reading position of a user on the terminal screen, and playing the shared voice interaction information corresponding to the target information when a distance between the reading position and the display position meets a preset requirement.
12. The method according to claim 11, wherein each piece of target information has at least one piece of shared voice interaction information; and
the playing the shared voice interaction information corresponding to the target information when the distance between the reading position and the display position meets the preset requirement comprises:
when the distance between the reading position and the display position meets the preset requirement, if the target information has one piece of shared voice interaction information, playing that shared voice interaction information;
if the target information has multiple pieces of shared voice interaction information, displaying a voice interaction information list at the position of the target information, the voice interaction information list comprising multiple voice interaction information playing controls, each voice interaction information playing control corresponding to one piece of shared voice interaction information; and
in response to a playing operation on a target voice interaction information playing control, playing the shared voice interaction information corresponding to the target voice interaction information playing control.
13. The method according to claim 11, wherein the content detail page further comprises a voice interaction information auto-play setting control; and
before the determining the display position, on the terminal screen, of the target information in the target reading content, the method further comprises:
in response to an auto-play enabling operation on the voice interaction information auto-play setting control, setting the playing mode of the voice interaction information of the content detail page to an auto-play mode, and performing the step of determining the display position, on the terminal screen, of the target information in the target reading content; and
in response to an auto-play disabling operation on the voice interaction information auto-play setting control, setting the playing mode of the voice interaction information of the content detail page to a non-auto-play mode, and not performing the step of determining the display position, on the terminal screen, of the target information in the target reading content.
14. A content interaction apparatus, comprising:
a first detail page display unit, configured to display a content detail page of a client, the content detail page comprising detail information of target reading content;
an information acquisition control display unit, configured to display at least one voice interaction information acquisition control in response to an interaction operation on target information in the target reading content; and
a first acquisition unit, configured to: in response to a voice interaction information acquisition operation on a target voice interaction information acquisition control, acquire target voice interaction information for the target information, and set the target voice interaction information as shared voice interaction information corresponding to the target information in the target reading content, wherein the shared voice interaction information of the target reading content is used to play the shared voice interaction information corresponding to the target information when a user of a second terminal browses the target information of the target reading content.
15. A content interaction apparatus, comprising:
a second detail page display unit, configured to display a content detail page of a client, the content detail page comprising detail information of target reading content, target information in the target reading content having corresponding shared voice interaction information;
a position determining unit, configured to determine a display position, on a terminal screen, of the target information in the target reading content; and
a fourth acquisition unit, configured to acquire a reading position of a user on the terminal screen, and play the shared voice interaction information corresponding to the target information when a distance between the reading position and the display position meets a preset requirement.
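The merging step recited in claim 9 can be sketched as follows. The overlap metric (here, intersection length over the shorter span) and the 0.5 threshold are assumptions for illustration; the claim only requires that a preset overlap threshold be exceeded. Target information is modeled as a character-range span, and the shared voice interaction information set maps spans to clip lists:

```python
# Hypothetical sketch of the claim-9 merge: when newly annotated target
# information overlaps existing target information beyond a preset
# threshold, merge them into new target information whose positioning
# info covers both, and pool their shared voice clips.

def overlap_ratio(a, b):
    """Overlap between two character ranges, relative to the shorter one."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    inter = max(0, end - start)
    return inter / min(a[1] - a[0], b[1] - b[0])

def merge_if_overlapping(new_span, new_clips, existing, threshold=0.5):
    """`existing` maps span -> list of clips; returns the updated mapping."""
    for span, clips in list(existing.items()):
        if overlap_ratio(new_span, span) > threshold:
            merged = (min(new_span[0], span[0]), max(new_span[1], span[1]))
            del existing[span]
            existing[merged] = clips + new_clips  # pooled shared clips
            return existing
    existing[new_span] = new_clips  # no sufficient overlap: keep separate
    return existing

info = {(10, 30): ["clip_a"]}
info = merge_if_overlapping((15, 35), ["clip_b"], info)
print(sorted(info.items()))  # [((10, 35), ['clip_a', 'clip_b'])]
```

This keeps the shared voice interaction information set free of near-duplicate target entries, so a second terminal resolves one position to one pooled clip list.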
CN202010859655.XA 2020-08-24 2020-08-24 Content interaction method and device Active CN112000256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010859655.XA CN112000256B (en) 2020-08-24 2020-08-24 Content interaction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010859655.XA CN112000256B (en) 2020-08-24 2020-08-24 Content interaction method and device

Publications (2)

Publication Number Publication Date
CN112000256A true CN112000256A (en) 2020-11-27
CN112000256B CN112000256B (en) 2023-10-27

Family

ID=73471450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010859655.XA Active CN112000256B (en) 2020-08-24 2020-08-24 Content interaction method and device

Country Status (1)

Country Link
CN (1) CN112000256B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050137872A1 (en) * 2003-12-23 2005-06-23 Brady Corey E. System and method for voice synthesis using an annotation system
CN101799994A (en) * 2010-02-10 2010-08-11 惠州Tcl移动通信有限公司 Voice note recording method of e-book reader
US20100324709A1 (en) * 2009-06-22 2010-12-23 Tree Of Life Publishing E-book reader with voice annotation
CN104679724A (en) * 2013-12-03 2015-06-03 腾讯科技(深圳)有限公司 Page noting method and device
CN104869467A (en) * 2015-03-26 2015-08-26 腾讯科技(北京)有限公司 Information output method and system for media playing, and apparatuses
US20170309277A1 (en) * 2014-02-28 2017-10-26 Comcast Cable Communications, Llc Voice Enabled Screen Reader
CN107332678A (en) * 2017-06-02 2017-11-07 深圳市华阅文化传媒有限公司 The method and system of reading page voice interface
CN107463247A (en) * 2016-06-06 2017-12-12 宇龙计算机通信科技(深圳)有限公司 A kind of method, apparatus and terminal of text reading processing
CN109474562A (en) * 2017-09-07 2019-03-15 腾讯科技(深圳)有限公司 The display methods and device of mark, request response method and device

Also Published As

Publication number Publication date
CN112000256B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
CN109176535B (en) Interaction method and system based on intelligent robot
CN109643412A (en) Email is personalized
US20140351720A1 (en) Method, user terminal and server for information exchange in communications
CN112040263A (en) Video processing method, video playing method, video processing device, video playing device, storage medium and equipment
CN107480766B (en) Method and system for content generation for multi-modal virtual robots
CN110598576A (en) Sign language interaction method and device and computer medium
CN112749956A (en) Information processing method, device and equipment
CN113392273A (en) Video playing method and device, computer equipment and storage medium
CN109286848B (en) Terminal video information interaction method and device and storage medium
EP4075411A1 (en) Device and method for providing interactive audience simulation
CN112287848A (en) Live broadcast-based image processing method and device, electronic equipment and storage medium
CN112423143A (en) Live broadcast message interaction method and device and storage medium
CN112860213B (en) Audio processing method and device, storage medium and electronic equipment
CN113205569A (en) Image drawing method and device, computer readable medium and electronic device
CN113573128A (en) Audio processing method, device, terminal and storage medium
US20230030502A1 (en) Information play control method and apparatus, electronic device, computer-readable storage medium and computer program product
CN112000256B (en) Content interaction method and device
CN116088675A (en) Virtual image interaction method, related device, equipment, system and medium
CN112533009B (en) User interaction method, system, storage medium and terminal equipment
WO2022180860A1 (en) Video session evaluation terminal, video session evaluation system, and video session evaluation program
CN114095782A (en) Video processing method and device, computer equipment and storage medium
CN111783587A (en) Interaction method, device and storage medium
CN115334367B (en) Method, device, server and storage medium for generating abstract information of video
CN115225930B (en) Live interaction application processing method and device, electronic equipment and storage medium
CN112752159B (en) Interaction method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant