CN111259181B - Method and device for displaying information and providing information - Google Patents

Method and device for displaying information and providing information

Info

Publication number
CN111259181B
Authority
CN
China
Prior art keywords: dubbing, file, files, text, group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811468336.5A
Other languages
Chinese (zh)
Other versions
CN111259181A (en)
Inventor
陈琳洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lianshang Xinchang Network Technology Co Ltd
Original Assignee
Lianshang Xinchang Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lianshang Xinchang Network Technology Co Ltd
Priority to CN201811468336.5A
Publication of CN111259181A
Application granted
Publication of CN111259181B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application discloses a method and a device for displaying information and providing information. One embodiment of the method comprises the following steps: in response to receiving a cartoon content acquisition request, displaying the corresponding cartoon content; and in response to detecting that a user performs a preset operation, displaying the identifiers of one or more groups of dubbing files, among at least one group of dubbing files of the text in the cartoon content, whose evaluation information meets a preset condition, wherein the dubbing users of different groups of dubbing files are different, the identifier of each group of dubbing files comprises either a group identifier corresponding to all dubbing files of the group or at least one file identifier corresponding to at least one dubbing file in the group, and the identifier of each dubbing file is displayed in association with the text corresponding to that dubbing file. This embodiment can present the user with high-quality, personalized dubbing content originating from different dubbing users.

Description

Method and device for displaying information and providing information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and equipment for displaying information and providing information.
Background
With the rapid development of mobile internet technology, a wide variety of cartoon applications have emerged, including voiced cartoons. In the prior art, a voiced cartoon is typically associated with audio that has been read aloud and recorded by a person in advance; after a user who browses the cartoon enables the voiced cartoon, the prerecorded audio is simply played back.
Disclosure of Invention
The embodiment of the application provides a method and equipment for displaying information and providing information.
In a first aspect, an embodiment of the present application provides a method for displaying information, including: in response to receiving a cartoon content acquisition request, displaying the corresponding cartoon content; and in response to detecting that a user performs a preset operation, displaying the identifiers of one or more groups of dubbing files, among at least one group of dubbing files of the text in the cartoon content, whose evaluation information meets a preset condition, wherein the dubbing users of different groups of dubbing files are different, the identifier of each group of dubbing files comprises either a group identifier corresponding to all dubbing files of the group or at least one file identifier corresponding to at least one dubbing file in the group, and the identifier of each dubbing file is displayed in association with the text corresponding to that dubbing file.
In a second aspect, an embodiment of the present application provides a method for providing information, including: in response to receiving a file identification information acquisition request from a terminal device, feeding back to the terminal device the identification information of one or more groups of dubbing files, among at least one group of dubbing files of the text in the cartoon content requested by the file identification information acquisition request, whose evaluation information meets a preset condition; or, in response to receiving both a file identification information acquisition request and an evaluation information acquisition request from the terminal device, feeding back to the terminal device the identification information and the evaluation information of at least one group of dubbing files of the text in the cartoon content requested by those requests.
In a third aspect, an embodiment of the present application provides a terminal device, including: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
In a fourth aspect, an embodiment of the present application provides a service device, including: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement a method as described in any implementation of the second aspect.
In a fifth aspect, embodiments of the present application provide a computer readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a method as described in any of the implementations of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a method as described in any of the implementations of the second aspect.
The method and the device for displaying information and providing information provided by the embodiments of the application display the corresponding cartoon content in response to receiving a cartoon content acquisition request; then, in response to detecting that the user performs a preset operation, they display the identifiers of one or more groups of dubbing files, among at least one group of dubbing files of the text in the cartoon content, whose evaluation information meets a preset condition, wherein the dubbing users of different groups of dubbing files are different, the identifier of each group of dubbing files comprises either a group identifier corresponding to all dubbing files of the group or at least one file identifier corresponding to at least one dubbing file in the group, and the identifier of each dubbing file is displayed in association with the text corresponding to that dubbing file. In this way, the user can be shown high-quality, personalized dubbing content derived from different dubbing users.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 is an exemplary system architecture diagram in which some embodiments of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for presenting information according to the present application;
FIG. 3 is a schematic illustration of one application scenario of a method for presenting information according to the present application;
FIG. 4 is a flow chart of yet another embodiment of a method for presenting information according to the present application;
FIG. 5 is a flow chart of one embodiment of a method for providing information according to the present application;
FIG. 6 is a schematic diagram of a computer system suitable for implementing an electronic device according to some embodiments of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the methods of the present application for presenting information and for providing information may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 1011, 1012, 1013, a network 102, and a service device 103. The network 102 serves as a medium for providing communication links between the terminal devices 1011, 1012, 1013 and the service device 103. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the service device 103 via the network 102 using the terminal devices 1011, 1012, 1013 to send or receive messages or the like, for example, to obtain identification information of the requested one or more sets of dubbing files from the service device 103, or to obtain identification information and evaluation information of the requested at least one set of dubbing files from the service device 103, etc.
The terminal devices 1011, 1012, 1013 may display the corresponding cartoon content in response to receiving the cartoon content acquisition request; and then, if the user is detected to execute the preset operation, displaying the identification of one or more groups of dubbing files, of which the evaluation information meets the preset conditions, in at least one group of dubbing files of the text in the cartoon content.
The terminal devices 1011, 1012, 1013 may be hardware or software. When the terminal devices 1011, 1012, 1013 are hardware, they may be various electronic devices having a display screen and supporting information interaction, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal devices 1011, 1012, 1013 are software, they can be installed in the above-listed electronic devices. Which may be implemented as a plurality of software or software modules, or as a single software or software module. The present invention is not particularly limited herein.
The service device 103 may be a server providing various services. For example, upon receiving a file identification information acquisition request from the terminal devices 1011, 1012, 1013, it may feed back to them the identification information of one or more sets of dubbing files whose evaluation information satisfies a preset condition; or, upon receiving both a file identification information acquisition request and an evaluation information acquisition request from the terminal devices 1011, 1012, 1013, it may feed back to them the identification information and the evaluation information of at least one set of dubbing files of the text in the comic content requested by those requests.
The service device 103 may be hardware or software. When the service device 103 is hardware, it may be implemented as a distributed server cluster formed of a plurality of servers, or may be implemented as a single server. When the service device 103 is software, it may be implemented as a plurality of software or software modules (e.g., to provide distributed services), or as a single software or software module. The present invention is not particularly limited herein.
It should be noted that the method for presenting information provided by the embodiments of the present application may be performed by the terminal devices 1011, 1012, 1013, and the method for providing information may be performed by the service device 103.
It should be understood that the number of terminal devices, networks and service devices in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and service devices, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for presenting information according to the present application is shown. The method for presenting information is generally applied to terminal devices. The method for displaying information comprises the following steps:
Step 201: in response to receiving a cartoon content acquisition request, display the corresponding cartoon content.
In the present embodiment, an execution subject of the method for presenting information (e.g., the terminal device shown in fig. 1) may determine whether a comic content acquisition request is received. Here, the comic content acquisition request may be a request generated by a user searching for a comic keyword in a comic search box; in general, the comic keyword may include, but is not limited to, a comic name, a comic chapter, a comic author, a comic style, and the like. The comic content acquisition request may also be a request generated by a user clicking a comic link, such as a comic title or a comic image, that links to the comic content. The comic content acquisition request may also be a request generated by the user through other means of obtaining the comic content, for example, a request generated by the user clicking a task link such as "watch comics to complete a task".
In this embodiment, if a request for obtaining the cartoon content is received, the executing body may display the corresponding cartoon content. Since the cartoon content acquisition request may include a cartoon keyword (including but not limited to a cartoon name, a cartoon chapter, a cartoon author, a cartoon style, etc.), the corresponding cartoon content can be found and displayed through the cartoon keyword. For example, if the cartoon content acquisition request includes a cartoon name and a cartoon chapter, the executing body may display the cartoon content of that chapter of the cartoon identified by the name.
Step 202: in response to detecting that the user performs a preset operation, display the identifiers of one or more groups of dubbing files whose evaluation information meets a preset condition among at least one group of dubbing files of the text in the cartoon content.
In this embodiment, the execution body may detect whether the user performs a preset operation. The preset operation performed by the user may be an operation performed by the user on a virtual button presented in the screen (e.g., a button for playing a dubbing file), such as a click operation, a drag operation, or the like. The preset operation performed by the user may also be an operation performed by the user on a preset area (e.g., a certain text area) in the displayed comic content, for example, a long press operation.
In this embodiment, if it is detected that the user performs a preset operation, for example, a click operation of a virtual button presented in a screen by the user is detected or a long-press operation of a text in a displayed cartoon content by the user is detected, the executing body may display an identifier of one or more groups of dubbing files whose evaluation information satisfies a preset condition in at least one group of dubbing files corresponding to the text in the cartoon content.
In this embodiment, the evaluation information may be acquired from the service device. The service device can determine the evaluation information of a dubbing file according to factors such as other users' evaluations of the dubbing file, the clarity of the voice in the dubbing file, and the degree of matching between the dubbing file and the corresponding dubbed text. The evaluation information can be characterized as a score corresponding to the dubbing file; it may also be characterized as text information evaluating the dubbing file, e.g., good, average, or bad. If the evaluation information is characterized by a score, one or more sets of dubbing files whose score is greater than a preset score threshold (e.g., 80 points) may be selected from the at least one set of dubbing files. If the evaluation information is characterized by text information, one or more sets of dubbing files whose text information is a preset text (for example, "good" or "nice") may be selected from the at least one set of dubbing files.
Here, the dubbing users of different groups of dubbing files are different, and the dubbing files in the same group typically come from the same dubbing user. The identification of each group of dubbing files may include a group identification corresponding to all the dubbing files of the group, or at least one file identification corresponding respectively to at least one dubbing file of the group. It should be noted that the group identification may be an identification of the dubbing user of the group, an identification set by the dubbing user for the group, or an identification set by default. As an example, the identification of a group of dubbing files may be the group identification "dubber Light Rain", or it may be characterized by three file identifications, "Light Rain-1", "Light Rain-2", and "Light Rain-3", corresponding respectively to the three dubbing files included in the group.
Here, the identity of each dubbing file may be displayed in association with the text to which the dubbing file corresponds. For example, the identifier of the dubbing file may be displayed around the text corresponding to the dubbing file, and the identifier of the dubbing file may also be displayed in such a manner that other users can understand that the identifier corresponds to the text.
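As a rough illustration of the data involved (the names and the numeric-score representation below are assumptions, not taken from the patent), a group of dubbing files sharing one dubbing user, its group identifier, its per-file identifiers, and the selection of groups whose evaluation information meets a preset condition could be sketched like this:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DubbingFile:
        file_id: str          # e.g. "Light Rain-1", shown next to the dubbed text
        text_segment_id: str  # identification information of the dubbed text segment
        score: float          # evaluation information represented as a score

    @dataclass
    class DubbingGroup:
        group_id: str      # e.g. "dubber Light Rain"
        dubbing_user: str  # all files in one group come from one dubbing user
        files: List[DubbingFile] = field(default_factory=list)

        def group_score(self) -> float:
            # one possible aggregation: average score of the files in the group
            if not self.files:
                return 0.0
            return sum(f.score for f in self.files) / len(self.files)

    def select_groups(groups: List[DubbingGroup], threshold: float = 80.0) -> List[DubbingGroup]:
        """Keep the groups whose evaluation information satisfies the preset condition
        (here: average score above a preset score threshold)."""
        return [g for g in groups if g.group_score() > threshold]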
It should be noted that the text in the comic content may include, but is not limited to: dialog text between comic characters, aside (narration) text, and annotation text. Dialog text between comic characters is typically displayed in a dialog box. Aside text in the comic content may include text describing the mental activities of a comic character. Annotation text in the comic content may include explanatory text for nouns appearing in the comic dialog.
It should be noted that, if a long-press operation of a user on a certain text area in the displayed cartoon content is detected, the executing body may display the identifier of one or more groups of dubbing files corresponding to the text in the text area; if the long-press operation of the user on the blank area in the displayed cartoon content is detected, the execution main body can display the identification of one or more groups of dubbing files corresponding to each text in all the texts in the current display interface.
In some optional implementations of this embodiment, the text may include at least one text segment. In some cases, the caricature content may include at least one of at least one dialog text, at least one aside text, and at least one comment text. Each dialog text, each aside text, or each comment text may be referred to as a text segment. The execution body may detect whether the user performs a dubbing operation for dubbing a text segment. The dubbing operation may be an operation in which the user selects the text segment to be dubbed and then clicks the "dubbing" icon. If a dubbing operation performed by the user for dubbing a text segment is detected, the execution subject may receive the user's voice to generate the user's dubbing file for the dubbed text segment. The execution body can then send the user's dubbing file, in association with the identification information of the text segment corresponding to the dubbing file, to the service device, so that the service device evaluates the user's dubbing file. The identification information of a text segment is unique and is used for locating the dubbed text segment among a plurality of text segments. The service device is typically an electronic device that stores text segments and corresponding dubbing files and evaluates the dubbing files. The service device can generally determine the evaluation information of a dubbing file according to factors such as other users' evaluations of the dubbing file, the clarity of the voice in the dubbing file, and the degree of matching between the dubbing file and the corresponding dubbed text.
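A minimal sketch of the upload step described above, assuming an HTTP transport and a hypothetical service endpoint (the patent does not specify either):

    import requests  # assumed transport library; the patent does not name one

    SERVICE_URL = "https://service.example.com/dubbing"  # hypothetical endpoint

    def upload_dubbing(audio_bytes: bytes, text_segment_id: str, user_id: str) -> None:
        """Send the user's dubbing file together with the identification information
        of the dubbed text segment, so the service device can store and evaluate it."""
        response = requests.post(
            SERVICE_URL,
            files={"dubbing_file": ("dubbing.wav", audio_bytes, "audio/wav")},
            data={"text_segment_id": text_segment_id, "user_id": user_id},
            timeout=10,
        )
        response.raise_for_status()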
In some optional implementations of this embodiment, a preset showing icon (e.g., a "more" icon, a "…" icon, etc.) may be presented in the display interface of the cartoon content, where the showing icon may be used to show the identifiers of the other dubbing files in addition to the identifier of the displayed dubbing file. In some cases, if the identifier of the dubbing file interested by the user does not exist in the identifiers of the displayed dubbing files, the user can acquire the identifiers of more dubbing files through the display icon. The execution body may detect whether the user triggers the display icon, for example, detect whether the user performs a preset operation such as clicking, pulling down, etc. on the display icon. If the user is detected to trigger the display icon, the execution body may display the identifiers of the dubbing files other than the identifier of the displayed dubbing file.
In some optional implementations of this embodiment, the executing body may detect whether the user performs a preset operation on the identifier of the dubbing file, for example, performing a long press operation on the identifier of the dubbing file, or performing a click operation on the presented "evaluation" icon after performing the long press operation on the identifier of the dubbing file. If it is detected that the user performs the preset operation on the identifier of the dubbing file, the execution body may receive an evaluation of the user on the dubbing file targeted by the preset operation.
In some optional implementations of this embodiment, the executing body may, in response to detecting that the user performs the preset operation, display an identification of one or more sets of dubbing files whose evaluation information satisfies the preset condition in at least one set of dubbing files of the text in the comic content in a manner that: the above-described execution body may detect whether a user performs a preset operation, for example, a click operation performed on a virtual button presented in a screen, or a long-press operation on the displayed comic content. If the user is detected to execute the preset operation, the executing body may send a file identification information obtaining request to the service device, where the file identification information obtaining request may be used to obtain identification information of one or more groups of dubbing files whose evaluation information satisfies a preset condition in at least one group of dubbing files of a text in the displayed cartoon content, the identification information has uniqueness, and the terminal device and the service device may search for the dubbing files by using the identification information of the dubbing files. And then, the executing body can receive the identification information of one or more groups of dubbing files, of which the evaluation information meets the preset condition, in at least one group of dubbing files of the text in the cartoon content fed back by the service equipment. Finally, the executing body may display the identification of the one or more groups of dubbing files based on the identification information. The executing body may search the identifiers of dubbing files corresponding to the identification information, and display the identifiers of one or more groups of the searched dubbing files.
In some optional implementations of this embodiment, the executing body may, in response to detecting that the user performs the preset operation, display the identification of one or more sets of dubbing files whose evaluation information satisfies the preset condition among at least one set of dubbing files of the text in the comic content in the following manner: the execution body may detect whether the user performs a preset operation, for example, a click operation on a virtual button presented on the screen, or a long-press operation on the displayed comic content. If it is detected that the user performs a preset operation, the execution subject may send a file identification information acquisition request and an evaluation information acquisition request to the service device. The file identification information acquisition request can be used for obtaining identification information of one or more groups of dubbing files, among at least one group of dubbing files of the text in the displayed cartoon content, whose evaluation information meets a preset condition; the identification information is unique, and the terminal device and the service device can locate dubbing files by their identification information. The evaluation information acquisition request may be used to acquire the evaluation information corresponding to the dubbing files. Then, the executing body may receive the identification information and the evaluation information of at least one group of dubbing files of the text in the cartoon content fed back by the service device. Next, the execution body can select, from the at least one group of dubbing files, one or more groups whose evaluation information meets the preset condition. If the evaluation information is represented by a score, the executing entity may select one or more sets of dubbing files whose score is greater than a predetermined score threshold (e.g., 80 points). If the evaluation information is represented by text information, the executing body may select one or more sets of dubbing files whose text information is a preset text (e.g., good). Finally, the executing body may display the identifiers of the one or more groups of dubbing files based on their identification information: it may look up the identifiers corresponding to the identification information and display the identifiers of the one or more groups of dubbing files found.
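For this second mode, where the terminal receives both identification information and evaluation information and filters locally, a sketch (names assumed, scores as the evaluation representation):

    from typing import Dict, List

    def pick_groups_on_terminal(
        id_info: Dict[str, List[str]],  # group id -> file identification info fed back by the service
        eval_info: Dict[str, float],    # group id -> evaluation score fed back by the service
        threshold: float = 80.0,
    ) -> Dict[str, List[str]]:
        """Keep only the groups whose evaluation information meets the preset condition;
        the terminal then displays the identifiers of the selected groups."""
        return {gid: ids for gid, ids in id_info.items()
                if eval_info.get(gid, 0.0) > threshold}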
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for presenting information according to the present embodiment. In the application scenario of fig. 3, the terminal device 301 may determine whether a comic content acquisition request is received; here, the terminal device 301 detects a click operation of the user on the comic title "comic M3" 302 and may display the comic content of the comic "comic M3", as shown by the icon 303. Thereafter, the terminal device 301 may determine whether the user performs a preset operation; if the terminal device 301 detects a long-press operation performed by the user on a blank area in the displayed cartoon content 303, it may display the identifiers of three groups of dubbing files whose scores are greater than eighty among at least one group of dubbing files of texts such as "dialog content 1", "aside content 1", and "dialog content 2" in the cartoon content 303, as shown by the icon 304.
The method provided by the embodiment of the application can display the personalized dubbing content which is high-quality and is derived from different dubbing users to the users by displaying the identifiers of one or more groups of dubbing files, of which the evaluation information meets the preset conditions, in at least one group of dubbing files of the text in the cartoon content.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for presenting information is shown. The method for presenting information is generally applied to terminal devices. The process 400 of the method for presenting information includes the steps of:
Step 401: in response to receiving a cartoon content acquisition request, display the corresponding cartoon content.
Step 402: in response to detecting that the user performs the preset operation, display the identifiers of one or more groups of dubbing files whose evaluation information satisfies the preset condition among at least one group of dubbing files of the text in the cartoon content.
In this embodiment, the operations of step 401 to step 402 are substantially the same as those of step 201 to step 202, and will not be described herein.
Step 403: in response to detecting that the user triggers the presented identifier, play a group of dubbing files or a single dubbing file based on the triggered identifier.
In this embodiment, the executing body may detect whether the user triggers the presented identifier. The operation of triggering the displayed identifier by the user can be a clicking operation on the displayed identifier, or a clicking operation on the displayed playing icon after clicking the displayed identifier.
In this embodiment, if the user trigger is detected, the executing body may play a group or a dubbing file based on the trigger. Specifically, if the identifier triggered by the user is a group identifier, the executing body may play a dubbing file corresponding to the dialogue text in a group of dubbing files corresponding to the triggered group identifier; if the identifier triggered by the user is a file identifier, the executing main body can play the dubbing file corresponding to the triggered file identifier.
In some optional implementations of this embodiment, the text may include a plurality of text segments. In some cases, the caricature content may include multiple dialog texts, multiple aside texts, and multiple comment texts; each dialog text, each aside text, or each comment text may be referred to as a text segment. If the identifier triggered by the user is a group identifier, the executing body may play a group of dubbing files or a single dubbing file based on the triggered identifier in the following manner: the executing body may determine a starting dubbing file from the group of dubbing files corresponding to the triggered group identifier, and then, starting from that file, sequentially play the dubbing files in the same group. It should be noted that the sequence of a group of dubbing files is generally determined by the display order of the text segments corresponding to the dubbing files in the group: a dubbing file whose corresponding text segment is displayed earlier is played earlier.
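A sketch of this ordering rule, assuming each dubbing file carries the display order of its text segment (a modeling choice, not taken from the patent):

    from typing import List, Tuple

    def playback_sequence(group: List[Tuple[int, str]], start_index: int = 0) -> List[str]:
        """group is a list of (display_order_of_text_segment, dubbing_file_id) pairs.
        Files are ordered by the display order of their text segments, and playback
        runs from the chosen starting file to the end of the group."""
        ordered = sorted(group, key=lambda item: item[0])
        return [file_id for _, file_id in ordered[start_index:]]

    # Example: the file whose text segment is displayed first is played first.
    assert playback_sequence([(3, "Light Rain-3"), (1, "Light Rain-1"), (2, "Light Rain-2")]) == [
        "Light Rain-1", "Light Rain-2", "Light Rain-3"]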
In some optional implementations of this embodiment, the starting dubbing file may be the first dubbing file in the group of dubbing files corresponding to the triggered group identifier (typically, the dubbing file corresponding to the text segment that is displayed first), or it may be the dubbing file of the text segment with the earliest display order among the currently displayed text segments.
In some optional implementations of this embodiment, the text may include a plurality of text segments. In some cases, the caricature content may include multiple dialog texts, multiple aside texts, and multiple comment texts; each dialog text, each aside text, or each comment text may be referred to as a text segment. If the identifier triggered by the user is a file identifier, the executing body may play a group of dubbing files or a single dubbing file based on the triggered identifier in the following manner: the execution body may play only the dubbing file corresponding to the triggered file identifier; or it may take the dubbing file corresponding to the triggered file identifier as a starting dubbing file and, starting from that file, sequentially play the dubbing files in the same group. It should be noted that the sequence of a group of dubbing files is generally determined by the display order of the text segments corresponding to the dubbing files in the group: a dubbing file whose corresponding text segment is displayed earlier is played earlier.
In some optional implementations of this embodiment, the executing body may determine whether the text segment corresponding to the currently played dubbing file is presented in the current display interface; if it is not, the executing body may scroll the current display interface to the cartoon content containing that text segment. Here, the cartoon content displayed in the current display interface generally needs to scroll along with the currently played dubbing file, so that when the user hears a dubbing file, the corresponding text segment is visible in the current display interface.
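The scrolling check reduces to a small rule; the sketch below assumes a UI callback named scroll_to, which is not part of the patent:

    from typing import Callable, Set

    def follow_playback(visible_segment_ids: Set[str],
                        current_segment_id: str,
                        scroll_to: Callable[[str], None]) -> None:
        """If the text segment of the currently played dubbing file is not visible
        in the current display interface, scroll so that it becomes visible."""
        if current_segment_id not in visible_segment_ids:
            scroll_to(current_segment_id)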
As can be seen from fig. 4, compared with the corresponding embodiment of fig. 2, the flow 400 of the method for presenting information in this embodiment represents the step of playing a group or a dubbing file based on the triggered identifier if the user is detected to trigger the presented identifier. Therefore, the scheme described in the embodiment can play the dubbing file of interest to the user.
With further reference to fig. 5, a flow 500 of one embodiment of a method for providing information is shown. The method for providing information is generally applied to a service device. The process 500 of the method for providing information comprises the steps of:
Step 501: determine whether a request from a terminal device is received.
In the present embodiment, an execution subject of a method for providing information (e.g., a service device shown in fig. 1) may determine whether a request from a terminal device is received. The request may include a file identification information acquisition request and/or an evaluation information acquisition request. The file identification information obtaining request may be used to obtain identification information of one or more groups of dubbing files whose evaluation information satisfies a preset condition in at least one group of dubbing files of a text in the requested cartoon content, where the file identification information obtaining request generally includes an identification of the text in the requested cartoon content, the identification information has uniqueness, and the terminal device and the service device may search for the dubbing file using the identification information of the dubbing files. The evaluation information acquisition request may be used to acquire evaluation information corresponding to the dubbing file.
Step 502: if the request is a file identification information acquisition request, feed back to the terminal device the identification information of one or more groups of dubbing files, among at least one group of dubbing files of the text in the cartoon content requested, whose evaluation information satisfies a preset condition.
In this embodiment, if a request from a terminal device is received in step 501, the executing entity may determine whether the received request is a file identification information acquisition request, and the executing entity may determine whether the request is a file identification information acquisition request through a request identifier. If the request is a file identification information obtaining request, the executing body may feed back identification information of one or more groups of dubbing files whose evaluation information satisfies a preset condition in at least one group of dubbing files of a text in the cartoon content requested by the file identification information obtaining request to the terminal device.
In this embodiment, the executing body may determine the evaluation information of the dubbing file according to factors such as other users' evaluations of the dubbing file, the clarity of the voice in the dubbing file, and the degree of matching between the dubbing file and the corresponding dubbed text. The evaluation information can be characterized as a score corresponding to the dubbing file; it may also be characterized as text information evaluating the dubbing file, e.g., good, average, or bad.
In some optional implementations of this embodiment, the executing body may feed back, to the terminal device, the identification information of one or more groups of dubbing files, among at least one group of dubbing files of the text in the cartoon content requested by the file identification information acquisition request, whose evaluation information satisfies a preset condition, in the following manner: the executing body may determine at least one set of dubbing files of the text in the requested comic content; since the file identification information acquisition request generally includes an identification of the requested text, the executing body can determine at least one set of dubbing files corresponding to that identification. The executing body may then obtain the evaluation information of the at least one set of dubbing files and select one or more sets from them based on the evaluation information. Specifically, if the evaluation information is represented by a score, the executing entity may select one or more groups of dubbing files whose score is greater than a preset score threshold (e.g., 80 points). If the evaluation information is represented by text information, the executing body may select one or more sets of dubbing files whose text information is a preset text (e.g., good). The identification information of the selected one or more groups of dubbing files can then be fed back to the terminal device.
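A server-side sketch of this selection, with hypothetical in-memory stores standing in for whatever storage the service device actually uses:

    from typing import Dict, List

    GROUPS_BY_TEXT: Dict[str, List[str]] = {}  # text identification -> group ids of its dubbing files
    GROUP_SCORE: Dict[str, float] = {}         # group id -> evaluation information as a score

    def handle_file_identification_request(text_id: str, threshold: float = 80.0) -> List[str]:
        """Return the identification information of the dubbing-file groups of the
        requested text whose evaluation information satisfies the preset condition."""
        candidates = GROUPS_BY_TEXT.get(text_id, [])
        return [gid for gid in candidates if GROUP_SCORE.get(gid, 0.0) > threshold]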
Step 503: if the request comprises both a file identification information acquisition request and an evaluation information acquisition request, feed back to the terminal device the identification information and the evaluation information of at least one group of dubbing files of the text in the cartoon content requested.
In this embodiment, if a request from a terminal device is received in step 501, the executing entity may determine whether the received request is a file identification information acquisition request and an evaluation information acquisition request, and the executing entity may determine whether the request is a file identification information acquisition request and an evaluation information acquisition request through a request identifier. If the request is a file identification information acquisition request and an evaluation information acquisition request, the execution body may feed back identification information and evaluation information of at least one group of dubbing files of a text in the cartoon content requested by the file identification information acquisition request and the evaluation information acquisition request to the terminal device.
In this embodiment, the executing body may determine the evaluation information of the dubbing file according to factors such as other users' evaluations of the dubbing file, the clarity of the voice in the dubbing file, and the degree of matching between the dubbing file and the corresponding dubbed text. The evaluation information can be characterized as a score corresponding to the dubbing file; it may also be characterized as text information evaluating the dubbing file, e.g., good, average, or bad. If the evaluation information is represented by a score, after the terminal device receives the identification information and the evaluation information fed back by the execution body, it may select, from the at least one group of dubbing files indicated by the identification information, one or more groups whose score is greater than a preset score threshold (for example, 80 points). If the evaluation information is represented by text information, after the terminal device receives the identification information and the evaluation information fed back by the execution body, it may select, from the at least one group of dubbing files, one or more groups whose text information is a preset text (e.g., "good" or "nice").
In some optional implementations of this embodiment, the executing body may receive a user's dubbing file and the identification information of the text segment corresponding to the dubbing file sent by the terminal device, and may then store the user's dubbing file in association with the identification information of the corresponding text segment. The terminal device may detect whether the user performs a dubbing operation for dubbing a text segment. The dubbing operation may be an operation in which the user selects the text segment to be dubbed and then clicks the "dubbing" icon. If a dubbing operation performed by the user for dubbing the text segment is detected, the terminal device may receive the user's voice to generate the user's dubbing file for the dubbed text segment. The terminal device can then send the user's dubbing file, in association with the identification information of the corresponding text segment, to the execution body, so that the execution body stores the user's dubbing file.
In some optional implementations of this embodiment, the executing body may evaluate the user's dubbing file to obtain evaluation information. As an example, the execution subject may evaluate the dubbing file based on other users' evaluations of the user's dubbing file, the clarity of the user's dubbing file, and the like.
In some optional implementations of this embodiment, the executing entity may evaluate the user's dubbing file as follows: the execution body may recognize dubbing text from the user's dubbing file, for example by converting the dubbing file into text with a speech-to-text technique (speech-to-text conversion is a speech recognition process that converts spoken language into written language). Then, the degree of matching between the recognized dubbing text and the dubbed text segment can be determined as a first matching degree; the execution subject can compute this matching degree with an existing text similarity calculation method (for example, a cosine similarity calculation method or an edit distance calculation method). Finally, the user's dubbing file may be evaluated based on the first matching degree. As an example, if the evaluation information is represented by a score, the product of the first matching degree and a preset value may be used as the evaluation information of the user's dubbing file; if the evaluation information is represented by text information, the evaluation information corresponding to the first matching degree can be determined using a preset correspondence table between first matching degrees and evaluation information.
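A sketch of the first matching degree, using the standard library's difflib ratio as a stand-in for the cosine-similarity or edit-distance methods mentioned above; the speech-to-text step itself is assumed to have already produced the recognized text:

    import difflib

    def first_matching_degree(recognized_text: str, dubbed_text: str) -> float:
        """Similarity between the text recognized from the dubbing file and the dubbed
        text segment, in [0, 1]; difflib stands in for cosine similarity or edit distance."""
        return difflib.SequenceMatcher(None, recognized_text, dubbed_text).ratio()

    def score_from_matching_degree(degree: float, preset_value: float = 100.0) -> float:
        """Evaluation as a score: the matching degree multiplied by a preset value."""
        return degree * preset_value

    # Example: a recognition result close to the dubbed text yields a high score.
    print(score_from_matching_degree(first_matching_degree("hello world", "hello, world!")))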
In some alternative implementations of this embodiment, the text may correspond to one or more voice features, which are typically features identified in an audio signal; generally speaking, the voice features of different people are not identical. The execution body may evaluate the user's dubbing file as follows: the execution body may extract one or more voice features of the user from the user's dubbing file, for example using an existing voice feature extraction method such as Mel-Frequency Cepstral Coefficients (MFCC). Thereafter, the execution body may determine the degree of matching between the user's voice features and the voice features corresponding to the dubbed text as a second matching degree; for example, it may take the ratio of the number of matching voice features to the number of the user's voice features as the second matching degree. Finally, the user's dubbing file may be evaluated based on the second matching degree. As an example, if the evaluation information is represented by a score, the product of the second matching degree and a preset value may be used as the evaluation information of the user's dubbing file; if the evaluation information is represented by text information, the evaluation information corresponding to the second matching degree can be determined using a preset correspondence table between second matching degrees and evaluation information.
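A sketch of the second matching degree under two assumptions of our own: MFCCs are extracted with librosa, and "matching features" is approximated as coefficients that agree within a tolerance against a reference recording for the text (the patent does not fix either choice):

    import numpy as np
    import librosa  # assumed MFCC implementation; any equivalent extractor would do

    def mean_mfcc(audio_path: str, n_mfcc: int = 13) -> np.ndarray:
        """One coarse voice-feature vector per recording: the mean MFCC frame."""
        y, sr = librosa.load(audio_path, sr=None)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

    def second_matching_degree(user_audio: str, reference_audio: str, tol: float = 5.0) -> float:
        """Fraction of the user's voice features that match the reference features,
        standing in for the ratio described in the text."""
        user_features = mean_mfcc(user_audio)
        reference_features = mean_mfcc(reference_audio)
        return float(np.mean(np.abs(user_features - reference_features) < tol))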
The method for providing information provided in the above embodiment of the present application feeds back, to a terminal device, identification information of one or more sets of dubbing files whose evaluation information satisfies a preset condition, or feeds back, to the terminal device, identification information and evaluation information of at least one set of dubbing files, so that the terminal device displays high-quality dubbing content, or enables the terminal device to select high-quality dubbing content from the dubbing files based on the evaluation information for display.
Referring now to FIG. 6, there is illustrated a schematic diagram of a computer system 600 suitable for use in implementing an electronic device of an embodiment of the present invention. The electronic device shown in fig. 6 is only an example and should not impose any limitation on the functionality and scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, mouse, etc.; an output portion 607 including a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The drive 610 is also connected to the I/O interface 605 as needed. Removable media 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on drive 610 so that a computer program read therefrom is installed as needed into storage section 608.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium described in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present application also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may exist alone without being assembled into the terminal device or the network device. The computer readable medium carries one or more programs which, when executed by the terminal device or network device, cause the terminal device to: responding to the received cartoon content acquisition request, and displaying corresponding cartoon content; and in response to detecting that the user executes the preset operation, displaying identifiers of one or more groups of dubbing files, of which evaluation information meets preset conditions, in at least one group of dubbing files of texts in cartoon contents, wherein the dubbing users of the dubbing files of different groups are different, the identifiers of each group of dubbing files comprise group identifiers corresponding to all the dubbing files of the group or at least one file identifier respectively corresponding to at least one dubbing file in the group of dubbing files, and the identifier of each dubbing file is displayed in association with the text corresponding to the dubbing file. Or cause the service device to: responding to a file identification information acquisition request from a terminal device, and feeding back identification information of one or more groups of dubbing files, of which evaluation information meets preset conditions, in at least one group of dubbing files of texts in cartoon contents requested by the file identification information acquisition request to the terminal device; or in response to receiving the file identification information acquisition request and the evaluation information acquisition request from the terminal device, feeding back identification information and evaluation information of at least one group of dubbing files of text in the cartoon content requested by the file identification information acquisition request and the evaluation information acquisition request to the terminal device.
The above description is only illustrative of the preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the invention referred to in the present application is not limited to technical solutions formed by the specific combinations of the technical features described above, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (17)

1. A method for displaying information, applied to a terminal device, the method comprising:
in response to receiving a cartoon content acquisition request, displaying corresponding cartoon content;
in response to detecting that a user performs a preset operation, displaying identifiers of one or more groups of dubbing files whose evaluation information satisfies a preset condition, from among at least one group of dubbing files of text in the cartoon content, wherein dubbing users of different groups of dubbing files are different, the identifiers of each group of dubbing files comprise a group identifier corresponding to all dubbing files of the group or at least one file identifier respectively corresponding to at least one dubbing file in the group, the identifier of each dubbing file is displayed in association with the text corresponding to that dubbing file, the evaluation information is determined according to evaluations of the dubbing file by other users, the clarity of the voice in the dubbing file, and the matching degree between the dubbing file and the corresponding dubbed text, and a preset display icon is displayed in the display interface of the cartoon content, the display icon being used to display identifiers of dubbing files other than the displayed identifiers;
in response to detecting that the user triggers the display icon, displaying identifiers of dubbing files other than the displayed identifiers;
in response to detecting that the user triggers a displayed identifier, playing a group of dubbing files or a single dubbing file based on the triggered identifier;
in response to determining that the text segment corresponding to the currently played dubbing file is not presented in the current display interface, scrolling the current display interface to the cartoon content containing the text segment corresponding to the currently played dubbing file;
and in response to detecting that the user performs the preset operation on the identifier of a dubbing file, receiving the user's evaluation of the dubbing file targeted by the preset operation.
2. The method of claim 1, wherein the text comprises at least one text segment, the method further comprising:
in response to detecting a dubbing operation performed by the user to dub a text segment, receiving the user's voice to generate the user's dubbing file for the text segment;
and sending the user's dubbing file and the identification information of the text segment corresponding to the dubbing file to a service device, so that the service device evaluates the user's dubbing file.
3. The method of claim 1, wherein the text comprises a plurality of text segments, and the triggered identifier is the group identifier; and
the playing a group of dubbing files or a single dubbing file based on the triggered identifier comprises:
determining a starting dubbing file from the group of dubbing files corresponding to the triggered group identifier, and, starting from the starting dubbing file, sequentially playing the dubbing files in the same group as the starting dubbing file, wherein the order of the group of dubbing files is determined based on the display order of the text segments corresponding to the dubbing files in the group.
4. The method of claim 3, wherein the starting dubbing file is the first dubbing file in the order of the group of dubbing files corresponding to the triggered group identifier, or the dubbing file corresponding to the top-displayed text segment among the currently displayed text segments.
5. The method of claim 1, wherein the text comprises a plurality of text segments, and the triggered identifier is a file identifier; and
the playing a group of dubbing files or a single dubbing file based on the triggered identifier comprises:
taking the dubbing file corresponding to the triggered file identifier as a starting dubbing file, and, starting from the starting dubbing file, sequentially playing the dubbing files in the same group as the starting dubbing file, wherein the order of the group of dubbing files is determined based on the display order of the text segments corresponding to the dubbing files in the group.
6. The method of claim 1, wherein the displaying, in response to detecting that the user performs the preset operation, identifiers of one or more groups of dubbing files whose evaluation information satisfies a preset condition from among at least one group of dubbing files of text in the cartoon content comprises:
in response to detecting that a user performs a preset operation, sending a file identification information acquisition request to a service device;
receiving identification information of one or more groups of dubbing files whose evaluation information satisfies a preset condition, from among at least one group of dubbing files of text in the cartoon content, fed back by the service device;
and displaying the identification of the one or more groups of dubbing files based on the identification information.
7. The method of claim 1, wherein the displaying, in response to detecting that the user performs the preset operation, identifiers of one or more groups of dubbing files whose evaluation information satisfies a preset condition from among at least one group of dubbing files of text in the cartoon content comprises:
in response to detecting that a user performs a preset operation, sending a file identification information acquisition request and an evaluation information acquisition request to a service device;
receiving identification information and evaluation information of at least one group of dubbing files of text in the cartoon content fed back by the service device;
selecting, from the at least one group of dubbing files, one or more groups of dubbing files whose evaluation information satisfies the preset condition;
and displaying the identification of the one or more groups of dubbing files based on the identification information of the one or more groups of dubbing files.
8. A method for providing information, applied to a service device, the method comprising:
in response to receiving a file identification information acquisition request from a terminal device, feeding back, to the terminal device, identification information of one or more groups of dubbing files whose evaluation information satisfies a preset condition, from among at least one group of dubbing files of text in the cartoon content requested by the file identification information acquisition request, wherein dubbing users of different groups of dubbing files are different, the identifiers of each group of dubbing files comprise a group identifier corresponding to all dubbing files of the group or at least one file identifier respectively corresponding to at least one dubbing file in the group, the identifier of each dubbing file is displayed in association with the text corresponding to that dubbing file, a preset display icon is displayed in the display interface of the cartoon content, the display icon being used to display identifiers of dubbing files other than the displayed identifiers, identifiers of dubbing files other than the displayed identifiers are displayed in response to detecting that a user triggers the display icon, and the evaluation information is determined according to evaluations of the dubbing file by other users, the clarity of the voice in the dubbing file, and the matching degree between the dubbing file and the corresponding dubbed text; or
in response to receiving a file identification information acquisition request and an evaluation information acquisition request from a terminal device, feeding back, to the terminal device, identification information and evaluation information of at least one group of dubbing files of text in the cartoon content requested by the file identification information acquisition request and the evaluation information acquisition request;
when it is detected that the user triggers an identifier of the displayed one or more groups of dubbing files, the terminal device plays a group of dubbing files or a single dubbing file based on the triggered identifier, and, after determining that the text segment corresponding to the currently played dubbing file is not presented in the current display interface, the terminal device scrolls the current display interface to the cartoon content containing the text segment corresponding to the currently played dubbing file;
and when it is detected that the user performs the preset operation on the identifier of a dubbing file, the terminal device acquires the user's evaluation of the dubbing file targeted by the preset operation.
9. The method according to claim 8, wherein the feeding back, to the terminal device, identification information of one or more groups of dubbing files whose evaluation information satisfies a preset condition from among at least one group of dubbing files of text in the cartoon content requested by the file identification information acquisition request comprises:
determining at least one group of dubbing files of text in the cartoon content requested by the file identification information acquisition request;
acquiring evaluation information of the at least one group of dubbing files, and selecting one or more groups of dubbing files from the at least one group of dubbing files based on the evaluation information;
and feeding back the identification information of the one or more groups of dubbing files to the terminal equipment.
10. The method of claim 8, wherein the method further comprises:
receiving a user's dubbing file and identification information of the text segment corresponding to the dubbing file, both sent by the terminal device;
and storing the user's dubbing file and the identification information of the text segment corresponding to the dubbing file in association with each other.
11. The method of claim 10, wherein the method further comprises:
and evaluating the dubbing file of the user to obtain evaluation information.
12. The method of claim 11, wherein the evaluating the user's dubbing file comprises:
recognizing dubbing characters from the user's dubbing file;
determining a matching degree between the recognized dubbing characters and the dubbed text as a first matching degree;
and evaluating the user's dubbing file based on the first matching degree.
13. The method of claim 11, wherein the text corresponds to one or more speech features; and
the evaluating the dubbing file of the user comprises the following steps:
extracting one or more voice features of the user from the dubbing file of the user;
determining the matching degree between one or more voice features of the user and one or more voice features corresponding to the dubbed text as a second matching degree;
and evaluating the dubbing file of the user based on the second matching degree.
14. A terminal device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
15. A service device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 8-13.
16. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-7.
17. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 8-13.
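As a non-authoritative illustration of the playback order described in claims 3 to 5, the following minimal Python sketch orders a group of dubbing files by the display order of their text segments and picks a starting file either from a triggered file identifier or from the top-displayed visible segment; every name here is a hypothetical assumption, not taken from the patent.

from typing import List, NamedTuple, Optional

class DubbingFile(NamedTuple):
    file_id: str
    text_segment_id: str

def order_group(files: List[DubbingFile], segment_display_order: List[str]) -> List[DubbingFile]:
    """Order a group's dubbing files by the display order of their text segments."""
    rank = {seg_id: i for i, seg_id in enumerate(segment_display_order)}
    return sorted(files, key=lambda f: rank[f.text_segment_id])

def playback_sequence(files: List[DubbingFile],
                      segment_display_order: List[str],
                      triggered_file_id: Optional[str] = None,
                      top_visible_segment: Optional[str] = None) -> List[DubbingFile]:
    """Pick a starting dubbing file, then play the rest of the group in order.
    The start is the file whose identifier was triggered, otherwise the file for
    the top-displayed visible segment, otherwise the first file in the group."""
    ordered = order_group(files, segment_display_order)
    start = 0
    if triggered_file_id is not None:
        start = next((i for i, f in enumerate(ordered) if f.file_id == triggered_file_id), 0)
    elif top_visible_segment is not None:
        start = next((i for i, f in enumerate(ordered) if f.text_segment_id == top_visible_segment), 0)
    return ordered[start:]

if __name__ == "__main__":
    group = [DubbingFile("f2", "seg-2"), DubbingFile("f1", "seg-1"), DubbingFile("f3", "seg-3")]
    print(playback_sequence(group, ["seg-1", "seg-2", "seg-3"], top_visible_segment="seg-2"))
    # files for seg-2 and seg-3, in display order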
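The service-device steps of claims 8 to 10, namely selecting groups whose evaluation information satisfies the preset condition and storing a dubbing file in association with its text segment, can be sketched the same way. The in-memory stores and the threshold condition are assumed for illustration only.

from typing import Dict, List

# Hypothetical in-memory stores standing in for the service device's storage.
DUBBING_GROUPS: Dict[str, List[dict]] = {
    "comic-42": [
        {"group_id": "g1", "file_ids": ["g1-f1", "g1-f2"], "evaluation": 4.6},
        {"group_id": "g2", "file_ids": ["g2-f1"], "evaluation": 2.1},
    ]
}
DUBBING_FILE_INDEX: Dict[str, str] = {}  # dubbing file id -> text segment id

def handle_identification_request(comic_id: str, threshold: float) -> List[dict]:
    """Feed back identification information only for groups of dubbing files
    whose evaluation information satisfies the preset condition (a threshold here)."""
    groups = DUBBING_GROUPS.get(comic_id, [])
    return [{"group_id": g["group_id"], "file_ids": g["file_ids"]}
            for g in groups if g["evaluation"] >= threshold]

def store_dubbing_file(file_id: str, text_segment_id: str) -> None:
    """Store a received dubbing file in association with the identification
    information of the text segment it dubs."""
    DUBBING_FILE_INDEX[file_id] = text_segment_id

if __name__ == "__main__":
    store_dubbing_file("g1-f1", "seg-1")
    print(handle_identification_request("comic-42", threshold=4.0))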
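The evaluation of claims 11 to 13 can be illustrated with simple stand-ins: a character-level similarity for the first matching degree and a cosine similarity over voice features for the second. The patent does not specify the recognizer, the feature extraction, or the weighting, so the equal-weight combination below is an assumption.

import difflib
import math
from typing import Sequence

def first_matching_degree(recognized_text: str, dubbed_text: str) -> float:
    """Similarity between the characters recognized from the dubbing file and the
    dubbed text segment; speech recognition itself is out of scope here."""
    return difflib.SequenceMatcher(None, recognized_text, dubbed_text).ratio()

def second_matching_degree(user_features: Sequence[float],
                           text_features: Sequence[float]) -> float:
    """Cosine similarity between the user's voice features and the voice features
    associated with the dubbed text."""
    dot = sum(a * b for a, b in zip(user_features, text_features))
    norm = (math.sqrt(sum(a * a for a in user_features))
            * math.sqrt(sum(b * b for b in text_features)))
    return dot / norm if norm else 0.0

def evaluate_dubbing_file(recognized_text: str, dubbed_text: str,
                          user_features: Sequence[float],
                          text_features: Sequence[float],
                          clarity: float, other_user_score: float) -> float:
    """Combine the signals named in claim 1 (other users' evaluations, voice
    clarity, and matching degrees) with equal, illustrative weights."""
    m1 = first_matching_degree(recognized_text, dubbed_text)
    m2 = second_matching_degree(user_features, text_features)
    return (m1 + m2 + clarity + other_user_score) / 4.0

if __name__ == "__main__":
    score = evaluate_dubbing_file("hello there", "hello there!",
                                  [0.2, 0.9, 0.4], [0.25, 0.85, 0.5],
                                  clarity=0.8, other_user_score=0.9)
    print(round(score, 3))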
CN201811468336.5A 2018-12-03 2018-12-03 Method and device for displaying information and providing information Active CN111259181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811468336.5A CN111259181B (en) 2018-12-03 2018-12-03 Method and device for displaying information and providing information

Publications (2)

Publication Number Publication Date
CN111259181A CN111259181A (en) 2020-06-09
CN111259181B (en) 2024-04-12

Family

ID=70953747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811468336.5A Active CN111259181B (en) 2018-12-03 2018-12-03 Method and device for displaying information and providing information

Country Status (1)

Country Link
CN (1) CN111259181B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113282784A (en) * 2021-06-03 2021-08-20 北京得间科技有限公司 Audio recommendation method, computing device and computer storage medium for dialog novel

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103117057A (en) * 2012-12-27 2013-05-22 安徽科大讯飞信息科技股份有限公司 Application method of special human voice synthesis technique in mobile phone cartoon dubbing
CN103186578A (en) * 2011-12-29 2013-07-03 方正国际软件(北京)有限公司 Processing system and processing method for sound effects of cartoon
CN105302908A (en) * 2015-11-02 2016-02-03 北京奇虎科技有限公司 E-book related audio resource recommendation method and apparatus
CN106531148A (en) * 2016-10-24 2017-03-22 咪咕数字传媒有限公司 Cartoon dubbing method and apparatus based on voice synthesis
CN106971415A (en) * 2017-03-29 2017-07-21 广州阿里巴巴文学信息技术有限公司 Multimedia caricature player method, device and terminal device
CN107040452A (en) * 2017-02-08 2017-08-11 浙江翼信科技有限公司 A kind of information processing method, device and computer-readable recording medium
CN107885855A (en) * 2017-11-15 2018-04-06 福州掌易通信息技术有限公司 Dynamic caricature generation method and system based on intelligent terminal
CN107967104A (en) * 2017-12-20 2018-04-27 北京时代脉搏信息技术有限公司 The method and electronic equipment of voice remark are carried out to information entity
JP2018169691A (en) * 2017-03-29 2018-11-01 富士通株式会社 Reproduction control device, cartoon data provision program, voice reproduction program, reproduction control program, and reproduction control method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120196260A1 (en) * 2011-02-01 2012-08-02 Kao Nhiayi Electronic Comic (E-Comic) Metadata Processing
US20160378738A1 (en) * 2015-06-29 2016-12-29 International Business Machines Corporation Editing one or more text files from an editing session for an associated text file
KR20180105810A (en) * 2017-03-16 2018-10-01 네이버 주식회사 Method and system for generating content using audio comment

Also Published As

Publication number Publication date
CN111259181A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
US10417344B2 (en) Exemplar-based natural language processing
CN107871500B (en) Method and device for playing multimedia
CN107918653B (en) Intelligent playing method and device based on preference feedback
US8862615B1 (en) Systems and methods for providing information discovery and retrieval
US9472209B2 (en) Deep tagging background noises
US20090327272A1 (en) Method and System for Searching Multiple Data Types
CN110267113B (en) Video file processing method, system, medium, and electronic device
CN109582825B (en) Method and apparatus for generating information
US11750898B2 (en) Method for generating target video, apparatus, server, and medium
CN113596579B (en) Video generation method, device, medium and electronic equipment
WO2014154097A1 (en) Automatic page content reading-aloud method and device thereof
CN114023301A (en) Audio editing method, electronic device and storage medium
CN107680584B (en) Method and device for segmenting audio
CN112765460A (en) Conference information query method, device, storage medium, terminal device and server
CN110379406B (en) Voice comment conversion method, system, medium and electronic device
CN111723235B (en) Music content identification method, device and equipment
CN112182255A (en) Method and apparatus for storing media files and for retrieving media files
CN113407775B (en) Video searching method and device and electronic equipment
CN111259181B (en) Method and device for displaying information and providing information
CN113011169A (en) Conference summary processing method, device, equipment and medium
CN111767259A (en) Content sharing method and device, readable medium and electronic equipment
US20140297285A1 (en) Automatic page content reading-aloud method and device thereof
CN112802454B (en) Method and device for recommending awakening words, terminal equipment and storage medium
US11775070B2 (en) Vibration control method and system for computer device
CN113923479A (en) Audio and video editing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant