CN107517323A - Information sharing method, device and storage medium - Google Patents
Information sharing method, device and storage medium
- Publication number
- CN107517323A (application number CN201710806628.4A)
- Authority
- CN
- China
- Prior art keywords
- information
- text information
- video
- electronic book
- file
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72433—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/10—Multimedia information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72436—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2201/00—Electronic components, circuits, software, systems or apparatus used in telephone systems
- H04M2201/39—Electronic components, circuits, software, systems or apparatus used in telephone systems using speech synthesis
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The invention discloses an information sharing method, including: acquiring text information selected on a reading page of a mobile terminal; generating a corresponding audio file from the text information through speech synthesis; generating a video file according to the text information and the audio file; and sharing the video file through a message sharing way selected by the user to which the mobile terminal belongs. The invention further discloses an information sharing device and a storage medium.
Description
Technical Field
The present invention relates to content sharing technologies in the internet field, and in particular, to an information sharing method, an information sharing apparatus, and a storage medium.
Background
With the rapid development and the increasing popularity of mobile terminals, such as mobile phones, tablet computers, e-book readers and other electronic devices, more and more users read e-books by means of various mobile terminals and can read at any time and any place, thereby enjoying the convenience of reading.
At present, when a user reading an electronic book finds that a sentence or a paragraph on a certain reading page is particularly brilliant and wants to share it with other people, the related art implements this as follows: the user selects the text to be shared, selects the application (APP) to share it to, and copies the selected text into that APP, thereby sharing the text with other people in the APP. The related art can therefore only share the text content of the electronic book, and the shared content is monotonous, so a scheme for enriching the shared content is urgently needed.
Disclosure of Invention
In view of this, embodiments of the present invention are expected to provide an information sharing method, an information sharing apparatus, and a storage medium, which can implement video sharing based on text information and enhance richness of shared content.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
the embodiment of the invention provides an information sharing method, which comprises the following steps:
acquiring text information selected on a reading page of the mobile terminal;
generating a corresponding audio file from the text information through speech synthesis;
generating a video file according to the text information and the audio file;
and sharing the video file through a message sharing way selected by the user to which the mobile terminal belongs.
In the above scheme, the text information carries identification information of the electronic book to which the text information belongs;
the method further comprises the following steps: determining electronic book information corresponding to the text information according to the identification information;
correspondingly, the generating the video file comprises: and generating a video file according to the text information, the electronic book information and the audio file.
In the foregoing solution, the generating a corresponding audio file by speech synthesis of the text information includes:
locally loading a voice synthesis library from a mobile terminal, and importing the text information into the voice synthesis library; synthesizing an audio file corresponding to the text information by using the speech synthesis library based on pre-extracted speech features; or,
when the text information is successfully uploaded to the cloud equipment, loading a voice synthesis library, and importing the text information into the voice synthesis library; and synthesizing an audio file corresponding to the text information by using the speech synthesis library based on the pre-extracted speech features.
In the foregoing solution, the generating a video file according to the text information and an audio file includes:
creating at least one audio track and at least one video track, and determining the playing time length of the audio file;
bearing the audio file on a corresponding audio track, generating a video frame of a video file according to the text information in a preset format, bearing the video frame on the corresponding video track, and synthesizing the audio file on the audio track and the video frame on the video track into the video file through a synthesizing plug-in;
and the playing time length of the video file is the playing time length of the audio file.
In the above scheme, the generating a video file according to the text information, the electronic book information, and the audio file includes:
creating at least one audio track and at least one video track, and determining the playing time length of the audio file;
the audio file is borne on a corresponding audio track, the text information and the electronic book information are generated into video frames of a video file according to a preset format, the video frames are borne on the corresponding video track, and the audio file on the audio track and the video frames on the video track are synthesized into the video file through a synthesis plug-in;
and the playing time length of the video file is the playing time length of the audio file.
In the above solution, the electronic book information at least includes one of the following: attribute information of the electronic book, picture information related to the electronic book, and a two-dimensional code of the electronic book.
In the above solution, the two-dimensional code of the electronic book is generated by a Uniform Resource Locator (URL) of the electronic book; or,
the two-dimensional code of the electronic book is generated by a URL which has a corresponding relation with an application client side downloading the electronic book.
In the above scheme, the picture information related to the electronic book is the picture information generated according to the text information;
correspondingly, the determining the electronic book information corresponding to the text information includes:
creating a blank picture layer;
bearing the text information in the picture layer, and typesetting the text information in a selected format;
and filling background colors and styles into the typeset text information, and determining corresponding picture information.
An embodiment of the present invention provides an information sharing apparatus, where the apparatus includes: the device comprises an acquisition module, a generation module and a sharing module; wherein,
the acquisition module is used for acquiring text information selected on a reading page of the mobile terminal;
the generating module is used for generating a corresponding audio file from the text information through speech synthesis, and is further used for generating a video file according to the text information and the audio file;
the sharing module is used for sharing the video file through a message sharing way selected by a user to which the mobile terminal belongs.
In the above scheme, the text information carries identification information of the electronic book to which the text information belongs;
the device further comprises: the determining module is used for determining the electronic book information corresponding to the text information according to the identification information;
correspondingly, the generating module is specifically configured to: and generating a video file according to the text information, the electronic book information and the audio file.
In the foregoing solution, the generating module is specifically configured to:
locally loading a voice synthesis library from a mobile terminal, and importing the text information into the voice synthesis library; synthesizing an audio file corresponding to the text information by using the speech synthesis library based on pre-extracted speech features; or,
when the text information is successfully uploaded to the cloud equipment, loading a voice synthesis library, and importing the text information into the voice synthesis library; and synthesizing an audio file corresponding to the text information by using the speech synthesis library based on the pre-extracted speech features.
In the foregoing solution, the generating module is specifically configured to:
creating at least one audio track and at least one video track, and determining the playing time length of the audio file;
bearing the audio file on a corresponding audio track, generating a video frame of a video file according to the text information in a preset format, bearing the video frame on the corresponding video track, and synthesizing the audio file on the audio track and the video frame on the video track into the video file through a synthesizing plug-in;
and the playing time length of the video file is the playing time length of the audio file.
In the foregoing solution, the generating module is specifically configured to:
creating at least one audio track and at least one video track, and determining the playing time length of the audio file;
the audio file is borne on a corresponding audio track, the text information and the electronic book information are generated into video frames of a video file according to a preset format, the video frames are borne on the corresponding video track, and the audio file on the audio track and the video frames on the video track are synthesized into the video file through a synthesis plug-in;
and the playing time length of the video file is the playing time length of the audio file.
In the above scheme, the generating module is further configured to generate the two-dimensional code of the electronic book from the URL of the electronic book; or,
and generating the two-dimensional code of the electronic book by the URL which has a corresponding relation with the application client side downloading the electronic book.
In the above scheme, the picture information related to the electronic book is the picture information generated according to the text information;
correspondingly, the determining module is specifically configured to:
creating a blank picture layer;
bearing the text information in the picture layer, and typesetting the text information in a selected format;
and filling background colors and styles into the typeset text information, and determining corresponding picture information.
An embodiment of the present invention provides a storage medium, on which an executable program is stored, where the executable program, when executed by a processor, implements any of the steps of the information sharing method described above.
The embodiment of the invention also provides an information sharing device, which comprises a memory, a processor and an executable program which is stored on the memory and can be run by the processor, wherein the processor executes any step of the information sharing method when running the executable program.
According to the information sharing method, the information sharing device and the storage medium, the text information selected by the reading page of the mobile terminal is acquired; generating a corresponding audio file by the text information through voice synthesis; generating a video file according to the text information and the audio file; and sharing the video file through a message sharing way selected by the user to which the mobile terminal belongs. Therefore, the corresponding video file is generated on the basis of the text information selected by the user, the operation is simple and convenient, the richness of the shared content is enhanced, the increasing use requirements of the user can be met to a certain extent, and the use experience of the user is improved.
Drawings
Fig. 1 is a schematic flow chart of an information sharing method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of another information sharing method according to an embodiment of the present invention;
FIG. 3-1 is a schematic diagram of selecting text information according to an embodiment of the present invention;
FIG. 3-2 is a schematic diagram of generating an audio file according to an embodiment of the present invention;
3-3 are schematic diagrams of generating a video file according to an embodiment of the present invention;
fig. 3-4 are schematic diagrams of video file sharing according to embodiments of the present invention;
fig. 4 is a schematic functional structure diagram of an information sharing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware structure of an information sharing apparatus according to an embodiment of the present invention.
Detailed Description
So that the manner in which the features and aspects of the embodiments of the present invention can be understood in detail, a more particular description of the embodiments of the invention, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings.
Fig. 1 is a schematic flow chart of an information sharing method according to an embodiment of the present invention, and as shown in fig. 1, an implementation flow of the information sharing method according to the embodiment of the present invention includes the following steps:
step 101: and acquiring the text information selected aiming at the reading page of the mobile terminal.
In the embodiment of the present invention, the mobile terminal may include, but is not limited to, an electronic device such as a smart phone, a tablet computer, a palm computer, an e-book reader, and the like. The text information can be all or part of text segments included in the reading pages of one or more electronic books, for example, a sentence or a paragraph of text in a certain reading page; alternatively, the text information may also be comments, scores, and the like displayed in the reading pages of one or more electronic books.
Here, if the user finds that there is a favorite word expression in the reading page, the user can autonomously select favorite text information through touch screen operations such as long-time pressing and sliding, and thus obtains the text information that the user wants to share.
Step 102: and generating a corresponding audio file by the text information through speech synthesis.
In the embodiment of the present invention, various existing or new speech synthesis technologies may be used; for example, a Text-To-Speech (TTS) technology may be used to convert the text information into a corresponding audio file, which is not limited herein. TTS technology is mainly a speech synthesis technology that, with the support of an internal chip and through the design of a neural network, intelligently converts text information generated by a computer or input from outside into a natural speech stream. TTS technology can let a user hear clear and pleasant sound with a consistent and smooth tone, can help people with visual impairment read information on a computer, and can increase the readability of text documents.
Here, when the text information is generated into a corresponding audio file by speech synthesis, the text information can be generated in two different ways:
mode 1) locally loading a voice synthesis library from a mobile terminal, and importing the text information into the voice synthesis library; and synthesizing an audio file corresponding to the text information by using the speech synthesis library based on the pre-extracted speech features.
In this mode, the audio file is synthesized locally on the mobile terminal: the user only needs to download the speech synthesis library in advance and import the selected text information into it, and the speech synthesis library then completes the speech synthesis process automatically, which reduces the processing pressure on the server and the number of manual operation steps. Optionally, the speech synthesis library may be the rhinoceros speech synthesis library. The text information may be converted into audio files of various audio formats, such as the MP3 format or the WAV format, which is not limited herein.
In the speech synthesis process, the speech features may be the speech features of the user himself, for example, the user is allowed to read a segment of text in advance, and the speech features of the user in the reading process are extracted; the voice feature can also be preset voice features of other people, such as voice features of a star or a celebrity; of course, the voice feature may also be a voice feature of a voice guidance in the reading software, and the embodiment of the present invention is not limited herein.
Mode 2) when detecting that the text information is successfully uploaded to the cloud device, loading a speech synthesis library and importing the text information into the speech synthesis library; and synthesizing an audio file corresponding to the text information by using the speech synthesis library based on the pre-extracted speech features.
Compared with the method 1), the method needs to upload the text information to the cloud device after acquiring the text information selected by the user, and then performs the subsequent speech synthesis process by using the speech synthesis library, which is not described herein again.
Therefore, although the method needs to upload the text information to the cloud device to synthesize the audio file, the local processing resource of the mobile terminal is saved. Alternatively, the speech synthesis library may be a rhinoceros speech synthesis library. Also, the method may convert the text information into audio files in various audio formats, which may include any type of audio format, such as MP3 format, WAV format, etc., without any limitation.
In the speech synthesis process, the speech features may be the speech features of the user himself, for example, the user is allowed to read a segment of text in advance, and the speech features of the user in the reading process are extracted; the voice feature can also be preset voice features of other people, such as voice features of a star or a celebrity; of course, the voice feature may also be a voice feature of a voice guidance in the reading software, and the embodiment of the present invention is not limited herein.
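By way of illustration, a minimal sketch of mode 1) on an iOS mobile terminal might look as follows. It substitutes the system AVSpeechSynthesizer and a system voice for the speech synthesis library and the pre-extracted speech features described above; both substitutions are assumptions made only for this example, not the library actually used by the embodiment.

```swift
import AVFoundation

// Minimal sketch of mode 1): on-device synthesis of the selected text into a
// local audio file. AVSpeechSynthesizer stands in for the speech synthesis
// library, and a system voice stands in for the pre-extracted speech features
// (assumptions for illustration only).
func synthesizeAudioFile(from text: String,
                         to outputURL: URL,
                         using synthesizer: AVSpeechSynthesizer,   // caller keeps a strong reference
                         completion: @escaping (Error?) -> Void) {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")    // assumed voice choice

    var audioFile: AVAudioFile?
    synthesizer.write(utterance) { buffer in
        guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else {
            completion(nil)            // an empty buffer marks the end of synthesis
            return
        }
        do {
            if audioFile == nil {
                // Create the file lazily so it matches the synthesizer's PCM format;
                // converting the result to MP3/WAV would be a separate step.
                audioFile = try AVAudioFile(forWriting: outputURL,
                                            settings: pcm.format.settings)
            }
            try audioFile?.write(from: pcm)
        } catch {
            completion(error)
        }
    }
}
```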
Step 103: and generating a video file according to the text information and the audio file.
The method specifically comprises the following steps: creating at least one audio track and at least one video track, and determining the playing time length of the audio file;
bearing the audio file on a corresponding audio track, generating a video frame of a video file according to the text information in a preset format, bearing the video frame on the corresponding video track, and synthesizing the audio file on the audio track and the video frame on the video track into the video file through a synthesizing plug-in;
and the playing time length of the video file is the playing time length of the audio file.
Briefly, the audio track is a channel for transmitting audio files, and the video track is a channel for transmitting video files; generally, an audio file can be edited only when the audio file is added to a corresponding audio track, and similarly, a video file can be edited only when the video file is added to a corresponding video track; different video tracks are independent and do not influence each other, for example, if three video tracks exist, three different segments of videos can be played at the same time; in a similar way, different audio tracks are mutually independent and do not influence each other. Here, the number of video frames can be determined according to the total capacity of the text information, for example, assuming that the total capacity of the text information is 1000M and the capacity that each video frame can carry is 200M, the number of video frames played can be determined to be 5. Here, the layout of each video frame, that is, the contents respectively displayed at the respective positions of the video frame may be set in advance. It should be noted that the content played by each video frame may be the same or different, and the embodiment of the present invention is not limited herein.
Here, the audio file on the audio track and the video frames on the video track may be synthesized into a video file using various existing or new synthesis plug-ins, such as the AVVideoComposition plug-in of iOS (the mobile operating system developed by Apple Inc.). The user can select a suitable synthesis plug-in according to the operating system used by the communication terminal and the actual requirement.
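By way of illustration, a minimal sketch of this composition step on iOS might look as follows, using AVMutableComposition and AVAssetExportSession. It assumes the text (and any electronic book information) has already been rendered into a silent video clip at `textVideoURL`, a hypothetical intermediate file introduced only for this example and assumed to be at least as long as the audio; the audio duration then drives the duration of the resulting video file.

```swift
import AVFoundation

// Minimal sketch of step 103: carry the synthesized audio on an audio track,
// the text-derived frames on a video track, and export one video file whose
// playing time equals the audio's playing time.
func composeVideo(audioURL: URL, textVideoURL: URL, outputURL: URL,
                  completion: @escaping (Error?) -> Void) {
    let audioAsset = AVURLAsset(url: audioURL)
    let videoAsset = AVURLAsset(url: textVideoURL)
    let composition = AVMutableComposition()

    guard
        let audioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                     preferredTrackID: kCMPersistentTrackID_Invalid),
        let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                     preferredTrackID: kCMPersistentTrackID_Invalid),
        let srcAudio = audioAsset.tracks(withMediaType: .audio).first,
        let srcVideo = videoAsset.tracks(withMediaType: .video).first
    else { return completion(NSError(domain: "compose", code: -1, userInfo: nil)) }

    // The audio's playing time determines the time range of both tracks.
    let range = CMTimeRange(start: .zero, duration: audioAsset.duration)
    do {
        try audioTrack.insertTimeRange(range, of: srcAudio, at: .zero)
        try videoTrack.insertTimeRange(range, of: srcVideo, at: .zero)
    } catch {
        return completion(error)
    }

    guard let exporter = AVAssetExportSession(asset: composition,
                                              presetName: AVAssetExportPresetHighestQuality)
    else { return completion(NSError(domain: "compose", code: -2, userInfo: nil)) }
    exporter.outputURL = outputURL
    exporter.outputFileType = .mp4
    exporter.exportAsynchronously {
        completion(exporter.error)
    }
}
```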
Step 104: and sharing the video file through a message sharing way selected by the user to which the mobile terminal belongs.
Here, the message sharing way includes, but is not limited to, at least one of the following: friend circles, WeChat friends, QQ space, QQ friends, microblogs, various video websites, and the like.
It should be noted that, when sharing a video file through QQ space or a QQ friend, the video file needs to be stored locally on the mobile terminal first and then shared out. The sharing operation can be performed through an operation menu popped up on the video sharing control page; for example, the user clicks a video sharing button and the video file is shared out.
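By way of illustration, a minimal sketch of the sharing step on iOS might hand the generated video file to the system share sheet, which lists the installed sharing channels (WeChat, QQ, microblog clients, and so on) for the user to choose from; this is an illustrative substitute for the video sharing control page described above, not the page itself.

```swift
import UIKit

// Minimal sketch of step 104: let the user pick a message sharing way for the
// generated video file via the system share sheet.
func shareVideo(at fileURL: URL, from viewController: UIViewController) {
    let activityVC = UIActivityViewController(activityItems: [fileURL],
                                              applicationActivities: nil)
    viewController.present(activityVC, animated: true)
}
```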
Fig. 2 is a schematic flow chart of another information sharing method according to an embodiment of the present invention, and as shown in fig. 2, an implementation flow of the information sharing method according to the embodiment of the present invention includes the following steps:
step 201: the method comprises the steps of obtaining text information selected aiming at a reading page of the mobile terminal, wherein the text information carries identification information of an electronic book to which the text information belongs.
In the embodiment of the present invention, the mobile terminal may include, but is not limited to, an electronic device such as a smart phone, a tablet computer, a palm computer, an e-book reader, and the like. The text information can be all or part of text segments included in the reading pages of one or more electronic books, for example, a sentence or a paragraph of text in a certain reading page; alternatively, the text information may also be comments, scores, and the like displayed in the reading pages of one or more electronic books.
Here, if the user finds that there is a favorite word expression in the reading page, the user can autonomously select favorite text information through touch screen operations such as long-time pressing and sliding, and thus obtains the text information that the user wants to share.
Step 202: and determining the electronic book information corresponding to the text information according to the identification information.
The text information carries identification information for distinguishing electronic books; using the identification information, the electronic book to which the selected text information belongs, and the electronic book information corresponding to the selected text information, can be determined quickly and accurately.
In an embodiment of the present invention, the electronic book information may include at least one of: attribute information of the electronic book, picture information related to the electronic book, two-dimensional code of the electronic book, and the like. Wherein the attribute information of the electronic book comprises: information such as the title of the electronic book, the author of the electronic book, the release date of the electronic book or the category of the electronic book; the picture information related to the electronic book comprises: and generating picture information according to the text information or illustration information carried in the electronic book, which is not limited herein.
Here, optionally, the attribute field of the currently opened electronic book may be obtained directly from the cache of the mobile terminal, and the attribute information of the electronic book may then be obtained from the acquired attribute field.
Here, when the picture information related to the electronic book is the picture information generated according to the text information, the determining the electronic book information corresponding to the text information includes:
creating a blank picture layer;
bearing the text information in the picture layer, and typesetting the text information in a selected format;
and filling background colors and styles into the typeset text information, and determining corresponding picture information.
Specifically, after a blank picture layer is created, the text information is carried in the picture layer; then, setting a format for typesetting the selected text information, such as horizontal typesetting or vertical typesetting; then, typesetting the text information according to the set typesetting format; after the typesetting is finished, the background color and the font style can be selected, the background color and the style are filled in the typeset text information, and the corresponding picture information is determined. It should be noted that, when determining the picture information generated from the text information, it is also possible to select a book name or an author name to be filled in the control operation area, and display the selected book name or author name in the picture. In addition, the two-dimensional code of the electronic book corresponding to the text information can be generated, and then the two-dimensional code is further scanned through the terminal so as to identify the corresponding picture information.
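By way of illustration, a minimal sketch of this picture-generation step on iOS might look as follows; the canvas size, margin, font, and background colour are illustrative assumptions rather than values specified above.

```swift
import UIKit

// Minimal sketch: create a blank picture layer (an image context), typeset the
// selected text in a chosen format, and fill in a background colour and style.
func makeTextPicture(from text: String,
                     size: CGSize = CGSize(width: 720, height: 1280)) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        // Background colour and style.
        UIColor(white: 0.96, alpha: 1.0).setFill()
        context.fill(CGRect(origin: .zero, size: size))

        // Typeset the text (horizontal layout with a fixed margin).
        let paragraph = NSMutableParagraphStyle()
        paragraph.lineSpacing = 8
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.systemFont(ofSize: 22),
            .foregroundColor: UIColor.darkGray,
            .paragraphStyle: paragraph
        ]
        let inset: CGFloat = 48
        let textRect = CGRect(origin: CGPoint(x: inset, y: inset),
                              size: CGSize(width: size.width - 2 * inset,
                                           height: size.height - 2 * inset))
        (text as NSString).draw(in: textRect, withAttributes: attributes)
    }
}
```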
Here, the two-dimensional code of the electronic book may be generated in two different ways:
mode 1) is generated from the URL of the electronic book.
For this way, each electronic book has a unique corresponding resource address, i.e. URL, so the two-dimensional code corresponding to the electronic book can be generated by the URL fixedly provided by each electronic book.
Mode 2) is generated by a URL that has a correspondence with the application client that downloaded the electronic book.
For this way, in practical applications the electronic book is generally downloaded through a URL, so there is a corresponding relation between the application client that downloads the electronic book and that URL; therefore, the two-dimensional code of the electronic book is generated according to the URL that has a corresponding relation with the application client that downloads the electronic book, such as a URL corresponding to the Migu Reading APP used for downloading.
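By way of illustration, a minimal sketch of generating the two-dimensional code from either kind of URL on iOS might use the system CIQRCodeGenerator filter; the scale and error-correction level are illustrative assumptions.

```swift
import UIKit
import CoreImage

// Minimal sketch: generate the electronic book's two-dimensional code from a
// URL (the book's own URL, or a URL associated with the reading client).
func makeQRCode(for url: URL, scale: CGFloat = 8) -> UIImage? {
    guard let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(url.absoluteString.data(using: .utf8), forKey: "inputMessage")
    filter.setValue("M", forKey: "inputCorrectionLevel") // medium error correction

    guard let ciImage = filter.outputImage else { return nil }
    // Scale up the tiny generated matrix so it stays sharp when drawn.
    let scaled = ciImage.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    return UIImage(ciImage: scaled)
}
```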
Step 203: and generating a corresponding audio file by the text information through speech synthesis.
In the embodiment of the present invention, various existing or new speech synthesis technologies may be adopted; for example, the text information may be converted into a corresponding audio file by the TTS technology, which is not limited herein. TTS technology is mainly a speech synthesis technology that, with the support of an internal chip and through the design of a neural network, intelligently converts text information generated by a computer or input from outside into a natural speech stream. It can let a user hear clear and pleasant sound with a consistent and smooth tone, can help people with visual impairment read information on a computer, and can increase the readability of text documents.
Here, when the text information is generated into a corresponding audio file by speech synthesis, the text information can be generated in two different ways:
mode 1) locally loading a voice synthesis library from a mobile terminal, and importing the text information into the voice synthesis library; and synthesizing an audio file corresponding to the text information by using the speech synthesis library based on the pre-extracted speech features.
In this mode, the audio file is synthesized locally on the mobile terminal: the user only needs to download the speech synthesis library in advance and import the selected text information into it, and the speech synthesis library then completes the speech synthesis process automatically, which reduces the processing pressure on the server and the number of manual operation steps. Optionally, the speech synthesis library may be the rhinoceros speech synthesis library. The text information may be converted into audio files of various audio formats, such as the MP3 format or the WAV format, which is not limited herein.
In the speech synthesis process, the speech features may be the speech features of the user himself, for example, the user is allowed to read a segment of text in advance, and the speech features of the user in the reading process are extracted; the voice feature can also be preset voice features of other people, such as voice features of a star or a celebrity; of course, the voice feature may also be a voice feature of a voice guidance in the reading software, and the embodiment of the present invention is not limited herein.
Mode 2) when detecting that the text information is successfully uploaded to the cloud device, loading a speech synthesis library and importing the text information into the speech synthesis library; and synthesizing an audio file corresponding to the text information by using the speech synthesis library based on the pre-extracted speech features.
Compared with the method 1), the method needs to upload the text information to the cloud device after acquiring the text information selected by the user, and then performs the subsequent speech synthesis process by using the speech synthesis library, which is not described herein again.
Therefore, although the method needs to upload the text information to the cloud device to synthesize the audio file, the local processing resource of the mobile terminal is saved. Alternatively, the speech synthesis library may be a rhinoceros speech synthesis library. Also, the method may convert the text information into audio files in various audio formats, which may include any type of audio format, such as MP3 format, WAV format, etc., without any limitation.
In the speech synthesis process, the speech features may be the speech features of the user himself, for example, the user is allowed to read a segment of text in advance, and the speech features of the user in the reading process are extracted; the voice feature can also be preset voice features of other people, such as voice features of a star or a celebrity; of course, the voice feature may also be a voice feature of a voice guidance in the reading software, and the embodiment of the present invention is not limited herein.
It should be noted that, in the embodiment of the present invention, the execution sequence of step 202 and step 203 is not limited, that is, step 202 may be executed first, and then step 203 may be executed; step 203 may be performed first, and then step 202 may be performed.
Step 204: and generating a video file according to the text information, the electronic book information and the audio file.
The method specifically comprises the following steps: creating at least one audio track and at least one video track, and determining the playing time length of the audio file;
the audio file is borne on a corresponding audio track, the text information and the electronic book information are generated into video frames of a video file according to a preset format, the video frames are borne on the corresponding video track, and the audio file on the audio track and the video frames on the video track are synthesized into the video file through a synthesis plug-in;
and the playing time length of the video file is the playing time length of the audio file.
Briefly, the audio track is a channel for transmitting audio files, and the video track is a channel for transmitting video files; generally, an audio file can be edited only when the audio file is added to a corresponding audio track, and similarly, a video file can be edited only when the video file is added to a corresponding video track; different video tracks are independent and do not influence each other, for example, if three video tracks exist, three different segments of videos can be played at the same time; in a similar way, different audio tracks are mutually independent and do not influence each other. Here, the number of video frames may be determined according to the total capacity of the text information and the electronic book information, for example, assuming that the total capacity of the text information and the electronic book information is 1000M and the capacity that each video frame can carry is 200M, it may be determined that the number of video frames played is 5. Here, the layout of each video frame, that is, the contents respectively displayed at the respective positions of the video frame may be set in advance. It should be noted that the content played by each video frame may be the same or different, and the embodiment of the present invention is not limited herein.
Here, the audio file on the audio track and the video frames on the video track may be synthesized into a video file using various existing or new synthesis plug-ins, such as the AVVideoComposition plug-in of iOS. The user can select a suitable synthesis plug-in according to the operating system used by the communication terminal and the actual requirement.
Step 205: and sharing the video file through a message sharing way selected by the user to which the mobile terminal belongs.
Here, the message sharing way includes, but is not limited to, at least one of the following: friend circles, WeChat friends, QQ space, QQ friends, microblogs, various video websites, and the like.
It should be noted that, when sharing a video file through QQ space or a QQ friend, the video file needs to be stored locally on the mobile terminal first and then shared out. The sharing operation can be performed through an operation menu popped up on the video sharing control page; for example, the user clicks a video sharing button and the video file is shared out.
Compared with the related-art mode of copying the selected text into an APP and sharing it with other people in that APP, the embodiment of the invention provides a scheme for content sharing based on text information autonomously selected by the user, namely, a corresponding video file is generated based on the text information selected by the user.
Based on the information sharing method described in fig. 2, a specific implementation process of the information sharing method according to the embodiment of the present invention is further described in detail in the following with a specific embodiment.
In the embodiment of the present invention, it is assumed that the text information selected by the user on a certain reading page of the mobile terminal is the passage "tuning channel for bakelite button of a fat drum ... usually, we listen to a relaxing music channel, but", as shown in fig. 3-1. The text information is converted into a corresponding audio file, as shown in fig. 3-2, by a speech synthesis technology such as the TTS technology, and the playing time of the audio file is 10 seconds. Then, the electronic book information corresponding to the text information is determined according to the identification information carried in the text information; for example, it is determined that the title of the electronic book to which the text information belongs is "Passing through from your world", that the author of the electronic book is Zhang Jia, and the two-dimensional code of the electronic book is determined. Then, the audio file and the determined electronic book information, such as the title, the author and the two-dimensional code of the electronic book, are synthesized into a video file through the synthesis plug-in, as shown in fig. 3-3; only the display picture of one video frame is shown here, this video frame may be played repeatedly, and other video frames may also present other information of the electronic book, such as picture information related to the electronic book. After the video file is generated, the video file can be shared out through the various message sharing ways selected by the user to which the mobile terminal belongs, such as those shown in fig. 3-4. This can be realized through an operation menu popped up on the video sharing control page; for example, the user clicks a video sharing button to share out the video file.
In order to implement the information sharing method, an embodiment of the present invention further provides an information sharing device, as shown in fig. 4, fig. 4 is a functional structure schematic diagram of the information sharing device provided in the embodiment of the present invention; the device comprises an acquisition module 401, a generation module 402 and a sharing module 403; wherein,
the obtaining module 401 is configured to obtain text information selected for a reading page of the mobile terminal;
the generating module 402 is configured to generate a corresponding audio file from the text information through speech synthesis;
the generating module 402 is further configured to generate a video file according to the text information and the audio file;
the sharing module 403 is configured to share the video file through a message sharing path selected by a user to which the mobile terminal belongs.
Here, the text information carries identification information of the electronic book to which the text information belongs;
the device further comprises: a determining module 404, configured to determine, according to the identification information, electronic book information corresponding to the text information;
correspondingly, the generating module 402 is specifically configured to: and generating a video file according to the text information, the electronic book information and the audio file.
Here, in order to generate the corresponding audio file from the text information through speech synthesis, the generating module 402 is specifically configured to:
locally loading a voice synthesis library from a mobile terminal, and importing the text information into the voice synthesis library; synthesizing an audio file corresponding to the text information by using the speech synthesis library based on pre-extracted speech features; or,
when the text information is successfully uploaded to the cloud equipment, loading a voice synthesis library, and importing the text information into the voice synthesis library; and synthesizing an audio file corresponding to the text information by using the speech synthesis library based on the pre-extracted speech features.
Here, for the generating module 402 to generate the video file according to the text information and the audio file, the generating module is specifically configured to:
creating at least one audio track and at least one video track, and determining the playing time length of the audio file;
bearing the audio file on a corresponding audio track, generating a video frame of a video file according to the text information in a preset format, bearing the video frame on the corresponding video track, and synthesizing the audio file on the audio track and the video frame on the video track into the video file through a synthesizing plug-in;
and the playing time length of the video file is the playing time length of the audio file.
Here, for the generating module 402 to generate the video file according to the text information, the electronic book information, and the audio file, specifically, the generating module is configured to:
creating at least one audio track and at least one video track, and determining the playing time length of the audio file;
the audio file is borne on a corresponding audio track, the text information and the electronic book information are generated into video frames of a video file according to a preset format, the video frames are borne on the corresponding video track, and the audio file on the audio track and the video frames on the video track are synthesized into the video file through a synthesis plug-in;
and the playing time length of the video file is the playing time length of the audio file.
Here, the electronic book information includes at least one of: attribute information of the electronic book, picture information related to the electronic book, and a two-dimensional code of the electronic book.
Wherein the picture information related to the electronic book comprises: and generating picture information according to the text information or illustration information carried in the electronic book.
Here, the generating module 402 is further configured to generate a two-dimensional code of the electronic book from the URL of the electronic book; or,
and generating the two-dimensional code of the electronic book by the URL which has a corresponding relation with the application client side downloading the electronic book.
Here, the picture information related to the electronic book is picture information generated according to the text information;
correspondingly, the determining module 404 is specifically configured to:
creating a blank picture layer;
bearing the text information in the picture layer, and typesetting the text information in a selected format;
and filling background colors and styles into the typeset text information, and determining corresponding picture information.
In practical applications, the obtaining module 401, the generating module 402, the sharing module 403, and the determining module 404 may be implemented by a Central Processing Unit (CPU), a MicroProcessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like located on a terminal device.
It should be noted that: in the information sharing apparatus provided in the above embodiment, only the division of the program modules is exemplified when information sharing is performed, and in practical applications, the processing distribution may be completed by different program modules according to needs, that is, the internal structure of the apparatus is divided into different program modules, so as to complete all or part of the processing described above. In addition, the information sharing apparatus and the information sharing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
In order to implement the information sharing method, an embodiment of the present invention further provides an information sharing device, where the information sharing device includes a memory, a processor, and an executable program that is stored in the memory and can be run by the processor, and when the processor runs the executable program, the information sharing method provided in the embodiment of the present invention is executed, for example, the information sharing method shown in fig. 1 or fig. 2.
An information sharing apparatus implementing an embodiment of the present invention will now be described with reference to the accompanying drawings. The apparatus may be implemented in various forms, for example as various types of computer devices such as a terminal device, a desktop computer, a notebook computer, a smartphone, or an electronic book reader. The hardware structure of the information sharing device according to the embodiment of the present invention is further described below. It is to be understood that fig. 5 only shows an exemplary structure of the information sharing device rather than its whole structure, and part or all of the structure shown in fig. 5 may be implemented as needed.
Referring to fig. 5, fig. 5 is a schematic diagram of a hardware structure of an information sharing apparatus according to an embodiment of the present invention, which may be applied to various terminal devices running an application program in practical applications, where the information sharing apparatus 500 shown in fig. 5 includes: at least one processor 501, memory 502, a user interface 503, and at least one network interface 504. The various components of the information sharing device 500 are coupled together by a bus system 505. It will be appreciated that the bus system 505 is used to enable communications among the components of the connection. The bus system 505 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 505 in FIG. 5.
The user interface 503 may include a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, a touch screen, or the like, among others.
It will be appreciated that the memory 502 can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
The memory 502 of the embodiment of the present invention is used for storing various types of data to support the operation of the information sharing apparatus 500. Examples of such data include: any computer program for operating on the information sharing apparatus 500, such as the executable program 5021 and the operating system 5022, may be included in the executable program 5021 to implement the information sharing method according to the embodiment of the present invention.
The information sharing method disclosed by the embodiment of the invention can be applied to the processor 501, or can be realized by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In the implementation process, the steps of the information sharing method may be implemented by an integrated logic circuit of hardware in the processor 501 or instructions in the form of software. The processor 501 described above may be a general purpose processor, a DSP, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. Processor 501 may implement or perform the methods, steps, and logic blocks provided in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the information sharing method provided by the embodiment of the invention can be directly embodied as the execution of a hardware decoding processor, or the combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium located in the memory 502, and the processor 501 reads information in the memory 502, and completes the steps of the information sharing method provided in the embodiment of the present invention in combination with hardware thereof.
In an exemplary embodiment, an embodiment of the present invention further provides a storage medium, where an executable program 5021 is stored on the storage medium, and when the executable program 5021 is executed by a processor 501 in an information sharing apparatus 500, the information sharing method provided by the embodiment of the present invention is implemented, for example, the information sharing method shown in fig. 1 or fig. 2. The storage medium provided by the embodiment of the invention can be a storage medium such as an optical disk, a flash memory or a magnetic disk, and can be selected as a non-instantaneous storage medium.
The embodiment of the invention obtains the text information selected aiming at the reading page of the mobile terminal; generating a corresponding audio file by the text information through voice synthesis; generating a video file according to the text information and the audio file; and sharing the video file through a message sharing way selected by the user to which the mobile terminal belongs. Therefore, the corresponding video file is generated on the basis of the text information selected by the user, the operation is simple and convenient, the richness of the shared content is enhanced, the increasing use requirements of the user can be met to a certain extent, and the use experience of the user is improved.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or executable program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of an executable program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and executable program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by executable program instructions. These executable program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These executable program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These executable program instructions may also be loaded onto a computer or other programmable data processing apparatus, so that a series of operational steps are performed on the computer or other programmable apparatus to produce computer-implemented processing; the instructions executed on the computer or other programmable apparatus thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above description is only exemplary of the present invention and is not intended to limit the protection scope of the present invention; any modifications, equivalent replacements, improvements, and the like made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (18)
1. An information sharing method, the method comprising:
acquiring text information selected on a reading page of a mobile terminal;
generating a corresponding audio file from the text information through speech synthesis;
generating a video file according to the text information and the audio file;
and sharing the video file via a message sharing channel selected by a user of the mobile terminal.
2. The information sharing method according to claim 1, wherein the text information carries identification information of the electronic book to which the text information belongs;
the method further comprises: determining electronic book information corresponding to the text information according to the identification information;
correspondingly, generating the video file comprises: generating the video file according to the text information, the electronic book information, and the audio file.
3. The information sharing method according to claim 1 or 2, wherein generating the corresponding audio file from the text information through speech synthesis comprises:
loading a speech synthesis library locally on the mobile terminal, and importing the text information into the speech synthesis library; and synthesizing an audio file corresponding to the text information by using the speech synthesis library based on pre-extracted speech features; or,
when the text information is successfully uploaded to a cloud device, loading a speech synthesis library, and importing the text information into the speech synthesis library; and synthesizing an audio file corresponding to the text information by using the speech synthesis library based on the pre-extracted speech features.
4. The information sharing method according to claim 1, wherein generating the video file according to the text information and the audio file comprises:
creating at least one audio track and at least one video track, and determining a playing duration of the audio file;
carrying the audio file on a corresponding audio track, generating video frames of the video file from the text information in a preset format, carrying the video frames on a corresponding video track, and synthesizing the audio file on the audio track and the video frames on the video track into the video file through a synthesis plug-in;
wherein the playing duration of the video file is the playing duration of the audio file.
5. The information sharing method according to claim 2, wherein generating the video file according to the text information, the electronic book information, and the audio file comprises:
creating at least one audio track and at least one video track, and determining a playing duration of the audio file;
carrying the audio file on a corresponding audio track, generating video frames of the video file from the text information and the electronic book information in a preset format, carrying the video frames on a corresponding video track, and synthesizing the audio file on the audio track and the video frames on the video track into the video file through a synthesis plug-in;
wherein the playing duration of the video file is the playing duration of the audio file.
6. The information sharing method according to claim 2, wherein the electronic book information includes at least one of: attribute information of the electronic book, picture information related to the electronic book, and a two-dimensional code of the electronic book.
7. The information sharing method according to claim 6, wherein the two-dimensional code of the electronic book is generated from a uniform resource locator (URL) of the electronic book; or,
the two-dimensional code of the electronic book is generated from a URL having a corresponding relationship with an application client used for downloading the electronic book.
8. The information sharing method according to claim 6, wherein the picture information related to the electronic book is picture information generated according to the text information;
correspondingly, the determining the electronic book information corresponding to the text information includes:
creating a blank picture layer;
carrying the text information on the picture layer, and typesetting the text information in a selected format;
and filling in background colors and styles for the typeset text information, and determining the corresponding picture information.
9. An information sharing apparatus, comprising: an acquisition module, a generation module, and a sharing module; wherein,
the acquisition module is configured to acquire text information selected on a reading page of the mobile terminal;
the generation module is configured to generate a corresponding audio file from the text information through speech synthesis, and is further configured to generate a video file according to the text information and the audio file;
the sharing module is configured to share the video file via a message sharing channel selected by a user of the mobile terminal.
10. The information sharing apparatus according to claim 9, wherein the text information carries identification information of the electronic book to which the text information belongs;
the apparatus further comprises: a determining module, configured to determine the electronic book information corresponding to the text information according to the identification information;
correspondingly, the generation module is specifically configured to: generate a video file according to the text information, the electronic book information, and the audio file.
11. The information sharing apparatus according to claim 9 or 10, wherein the generation module is specifically configured to:
load a speech synthesis library locally on the mobile terminal, and import the text information into the speech synthesis library; and synthesize an audio file corresponding to the text information by using the speech synthesis library based on pre-extracted speech features; or,
when the text information is successfully uploaded to a cloud device, load a speech synthesis library, and import the text information into the speech synthesis library; and synthesize an audio file corresponding to the text information by using the speech synthesis library based on the pre-extracted speech features.
12. The information sharing apparatus according to claim 9, wherein the generation module is specifically configured to:
create at least one audio track and at least one video track, and determine a playing duration of the audio file;
carry the audio file on a corresponding audio track, generate video frames of the video file from the text information in a preset format, carry the video frames on a corresponding video track, and synthesize the audio file on the audio track and the video frames on the video track into the video file through a synthesis plug-in;
wherein the playing duration of the video file is the playing duration of the audio file.
13. The information sharing apparatus according to claim 10, wherein the generation module is specifically configured to:
create at least one audio track and at least one video track, and determine a playing duration of the audio file;
carry the audio file on a corresponding audio track, generate video frames of the video file from the text information and the electronic book information in a preset format, carry the video frames on a corresponding video track, and synthesize the audio file on the audio track and the video frames on the video track into the video file through a synthesis plug-in;
wherein the playing duration of the video file is the playing duration of the audio file.
14. The information sharing apparatus according to claim 10, wherein the electronic book information includes at least one of: attribute information of the electronic book, picture information related to the electronic book, and a two-dimensional code of the electronic book.
15. The information sharing apparatus according to claim 14, wherein the generation module is further configured to generate the two-dimensional code of the electronic book from a URL of the electronic book; or,
to generate the two-dimensional code of the electronic book from a URL having a corresponding relationship with an application client used for downloading the electronic book.
16. The information sharing apparatus according to claim 14, wherein the picture information related to the electronic book is picture information generated according to the text information;
correspondingly, the determining module is specifically configured to:
create a blank picture layer;
carry the text information on the picture layer, and typeset the text information in a selected format;
and fill in background colors and styles for the typeset text information, and determine the corresponding picture information.
17. A storage medium having an executable program stored thereon, wherein the executable program when executed by a processor implements the steps of the information sharing method according to any one of claims 1 to 8.
18. An information sharing apparatus, comprising a memory, a processor, and an executable program that is stored on the memory and executable by the processor, wherein the processor performs the steps of the information sharing method according to any one of claims 1 to 8 when executing the executable program.
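As an illustration of the two speech-synthesis branches recited in claims 3 and 11 (a library loaded locally on the terminal versus synthesis after the text is uploaded to a cloud device), the following Python sketch uses pyttsx3 and gTTS as stand-ins for the unspecified speech synthesis library; the library choices, languages, and file names are assumptions, not part of the claimed method.

```python
# A minimal sketch of the two synthesis branches in claims 3 and 11.
# pyttsx3 (local/offline) and gTTS (cloud-backed) are illustrative stand-ins
# for the speech synthesis library; they are not mandated by the claims.
import pyttsx3          # local engine loaded on the terminal
from gtts import gTTS   # engine that sends the text to a cloud service

def synthesize_locally(text: str, path: str) -> None:
    engine = pyttsx3.init()           # load the local speech synthesis library
    engine.save_to_file(text, path)   # import the text and render it to audio
    engine.runAndWait()

def synthesize_in_cloud(text: str, path: str) -> None:
    gTTS(text, lang="en").save(path)  # the text is uploaded and synthesized remotely

synthesize_locally("A passage read aloud on-device.", "local_quote.wav")
synthesize_in_cloud("A passage read aloud via the cloud.", "cloud_quote.mp3")
```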
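Claims 7 and 15 recite generating the electronic book's two-dimensional code from a URL. A minimal sketch, assuming the Python qrcode package and a hypothetical book URL, follows; neither is prescribed by the claims.

```python
# Sketch only: the qrcode package and the URL below are assumptions; the claims
# only require that the code encode the book's URL, or a URL that corresponds
# to the application client used for downloading the book.
import qrcode

book_url = "https://example.com/ebook/12345"   # hypothetical e-book URL
qr_image = qrcode.make(book_url)               # encode the URL as a QR code image
qr_image.save("ebook_qr.png")                  # picture later composed into the video frame
```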
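Claims 8 and 16 describe creating a blank picture layer, typesetting the selected text on it, and filling in a background color and style. A minimal sketch using Pillow, with canvas size, font, wrap width, and colors chosen purely for illustration, could look like this:

```python
# Sketch only: Pillow stands in for the unspecified rendering facility; the
# canvas size, font, wrap width, and colors are arbitrary illustrative choices.
import textwrap
from PIL import Image, ImageDraw, ImageFont

def render_text_card(text: str, path: str) -> None:
    layer = Image.new("RGB", (720, 1280), color="#f7f1e3")  # blank layer with a background color
    draw = ImageDraw.Draw(layer)
    font = ImageFont.load_default()              # a bundled font file would be used in practice
    wrapped = textwrap.fill(text, width=28)      # simple typesetting: wrap the text into lines
    draw.multiline_text((60, 120), wrapped, font=font, fill="#333333", spacing=12)
    layer.save(path)                             # the resulting picture information

render_text_card("An example passage selected on the reading page.", "frame.png")
```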
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710806628.4A CN107517323B (en) | 2017-09-08 | 2017-09-08 | Information sharing method and device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107517323A true CN107517323A (en) | 2017-12-26 |
CN107517323B CN107517323B (en) | 2019-12-24 |
Family
ID=60725196
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710806628.4A Active CN107517323B (en) | 2017-09-08 | 2017-09-08 | Information sharing method and device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107517323B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050165726A1 (en) * | 2003-10-30 | 2005-07-28 | Pepper Computer, Inc. | Packaged, organized collections of digital information, and mechanisms and methods for navigating and sharing the collection |
US20090132918A1 (en) * | 2007-11-20 | 2009-05-21 | Microsoft Corporation | Community-based software application help system |
US20120209902A1 (en) * | 2011-02-11 | 2012-08-16 | Glenn Outerbridge | Digital Media and Social Networking System and Method |
CN102355634A (en) * | 2011-06-29 | 2012-02-15 | 惠州Tcl移动通信有限公司 | Document transmission method and mobile phone thereof |
US20140006914A1 (en) * | 2011-12-10 | 2014-01-02 | University Of Notre Dame Du Lac | Systems and methods for collaborative and multimedia-enriched reading, teaching and learning |
CN103092941A (en) * | 2013-01-10 | 2013-05-08 | 北京奇虎科技有限公司 | Method and device showing content on electronic equipment |
CN203206476U (en) * | 2013-05-06 | 2013-09-18 | 重庆昇通科技有限公司 | Data content generating and sharing system based on dual-network |
CN104780209A (en) * | 2015-04-07 | 2015-07-15 | 北京奇点机智信息技术有限公司 | Portable equipment and server for realizing sharing interface scenario |
CN106782494A (en) * | 2016-09-13 | 2017-05-31 | 乐视控股(北京)有限公司 | Phonetic synthesis processing method and processing device |
CN106815316A (en) * | 2016-12-23 | 2017-06-09 | 北京奇虎科技有限公司 | Method, device and mobile terminal that content of pages is shared |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109992754A (en) * | 2017-12-29 | 2019-07-09 | 上海全土豆文化传播有限公司 | Document processing method and device |
CN108647197A (en) * | 2018-05-08 | 2018-10-12 | 腾讯科技(深圳)有限公司 | A kind of information processing method, device and storage medium |
CN108647197B (en) * | 2018-05-08 | 2021-07-27 | 腾讯科技(深圳)有限公司 | Information processing method, device and storage medium |
CN109496295A (en) * | 2018-05-31 | 2019-03-19 | 优视科技新加坡有限公司 | Multimedia content generation method, device and equipment/terminal/server |
CN109195007A (en) * | 2018-10-19 | 2019-01-11 | 深圳市轱辘汽车维修技术有限公司 | Video generation method, device, server and computer readable storage medium |
CN109195007B (en) * | 2018-10-19 | 2021-09-07 | 深圳市轱辘车联数据技术有限公司 | Video generation method, device, server and computer readable storage medium |
CN109597951B (en) * | 2018-12-05 | 2021-07-02 | 广州酷狗计算机科技有限公司 | Information sharing method and device, terminal and storage medium |
CN109597951A (en) * | 2018-12-05 | 2019-04-09 | 广州酷狗计算机科技有限公司 | Information sharing method, device, terminal and storage medium |
CN112199924A (en) * | 2019-06-19 | 2021-01-08 | 珠海金山办公软件有限公司 | Method and device for outputting text as picture |
CN110381214A (en) * | 2019-07-26 | 2019-10-25 | 上海秘墟科技有限公司 | A kind of online text reading and transfer approach |
CN112004137A (en) * | 2020-09-01 | 2020-11-27 | 天脉聚源(杭州)传媒科技有限公司 | Intelligent video creation method and device |
CN114598893A (en) * | 2020-11-19 | 2022-06-07 | 京东方科技集团股份有限公司 | Text video implementation method and system, electronic equipment and storage medium |
CN114598893B (en) * | 2020-11-19 | 2024-04-30 | 京东方科技集团股份有限公司 | Text video realization method and system, electronic equipment and storage medium |
CN114201137A (en) * | 2021-12-08 | 2022-03-18 | 掌阅科技股份有限公司 | Audio sharing method corresponding to text content, computing device and storage medium |
CN114979054A (en) * | 2022-05-13 | 2022-08-30 | 维沃移动通信有限公司 | Video generation method and device, electronic equipment and readable storage medium |
WO2024082948A1 (en) * | 2022-10-21 | 2024-04-25 | 北京字跳网络技术有限公司 | Multimedia data processing method, apparatus, device and medium |
Also Published As
Publication number | Publication date |
---|---|
CN107517323B (en) | 2019-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107517323B (en) | Information sharing method and device and storage medium | |
US11720949B2 (en) | Method and device for recommending gift and mobile terminal | |
US20240107127A1 (en) | Video display method and apparatus, video processing method, apparatus, and system, device, and medium | |
US20160085786A1 (en) | Transforming data into consumable content | |
US20180130496A1 (en) | Method and system for auto-generation of sketch notes-based visual summary of multimedia content | |
KR102117433B1 (en) | Interactive video generation | |
WO2014015086A2 (en) | Creating variations when transforming data into consumable content | |
US20140164371A1 (en) | Extraction of media portions in association with correlated input | |
CN111178056A (en) | Deep learning based file generation method and device and electronic equipment | |
US20150356060A1 (en) | Computer system and method for automatedly writing a user's autobiography | |
US20230291978A1 (en) | Subtitle processing method and apparatus of multimedia file, electronic device, and computer-readable storage medium | |
JP6603925B1 (en) | Movie editing server and program | |
KR102353797B1 (en) | Method and system for suppoting content editing based on real time generation of synthesized sound for video content | |
US10965629B1 (en) | Method for generating imitated mobile messages on a chat writer server | |
KR20140025082A (en) | Sns system and method for manufacturing digital audio book | |
CN117493593A (en) | Multi-terminal fusion lecture presentation method and system | |
KR20140031438A (en) | Apparatus and method of reconstructing mobile contents | |
KR20130076852A (en) | Method for creating educational contents for foreign languages and terminal therefor | |
CN113438532B (en) | Video processing method, video playing method, video processing device, video playing device, electronic equipment and storage medium | |
CN113360127B (en) | Audio playing method and electronic equipment | |
KR101124798B1 (en) | Apparatus and method for editing electronic picture book | |
US12010386B2 (en) | System and method for providing digital graphics and associated audiobooks | |
KR102251513B1 (en) | Method and apparatus for generating contents for learning based on celeb's social media information using machine learning | |
KR102720229B1 (en) | Web novel-specific treatment generation program using natural language processing AI | |
CN115237248B (en) | Virtual object display method, device, equipment, storage medium and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |