CN111813301B - Content playing method and device, electronic equipment and readable storage medium - Google Patents

Content playing method and device, electronic equipment and readable storage medium

Info

Publication number
CN111813301B
CN111813301B (application CN202010497027.1A)
Authority
CN
China
Prior art keywords
target
content
user
information
operation information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010497027.1A
Other languages
Chinese (zh)
Other versions
CN111813301A (en)
Inventor
董秋敏 (Dong Qiumin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010497027.1A priority Critical patent/CN111813301B/en
Publication of CN111813301A publication Critical patent/CN111813301A/en
Application granted granted Critical
Publication of CN111813301B publication Critical patent/CN111813301B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/955Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • G10L13/08Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a content playing method and device, an electronic device, and a readable storage medium. The content playing method comprises the following steps: acquiring target information in a playlist; acquiring target content corresponding to the target information, wherein the target content comprises text content; and converting the text content in the target content into voice content for playing. The target information includes target operation information or uniform resource locator information corresponding to the target content, where the target operation information is operation information for acquiring the target content. The technical solution provided by the embodiments of the present application can enrich the audiobooks available for the user to listen to, so that they cover as many of the books the user wants to read as possible.

Description

Content playing method and device, electronic equipment and readable storage medium
Technical Field
The present application belongs to the field of communication technologies, and in particular relates to a content playing method and device, an electronic device, and a readable storage medium.
Background
Some audio applications in the prior art provide the user with audiobooks on many different subjects, such as spoken novels and spoken fairy tales. In addition, some applications provide both a text version and the corresponding audio reading.
Audiobooks can assist the user in reading: when it is inconvenient to read (for example, while walking, while busy with other things, or when the eyes are tired), the user can learn the content of the text to be read by listening, thereby meeting the user's reading needs.
However, in the prior art, although audio applications provide users with audiobooks on different subjects, it is still difficult to cover all the books a user wants to read.
Disclosure of Invention
An object of the embodiments of the present application is to provide a content playing method and device, an electronic device, and a readable storage medium, which can solve the prior-art problem that the audiobooks provided to users are limited.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a content playing method, where the content playing method includes:
acquiring target information in a playlist;
acquiring target content corresponding to the target information according to the target information; wherein the target content comprises textual content;
converting the text content in the target content into voice content for playing;
wherein the target information includes: target operation information or uniform resource locator information corresponding to the target content; the target operation information is operation information for acquiring the target content.
In a second aspect, an embodiment of the present application further provides a content playing apparatus, where the content playing apparatus includes:
the first acquisition module is used for acquiring target information in the playlist;
the second acquisition module is used for acquiring target content corresponding to the target information according to the target information acquired by the first acquisition module; wherein the target content comprises textual content;
the playing module is used for converting the text content in the target content into voice content to be played;
wherein the target information includes: target operation information or uniform resource locator information corresponding to the target content; the target operation information is operation information for acquiring the target content.
In a third aspect, an embodiment of the present application further provides an electronic device, where the electronic device includes: a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the content playing method according to the first aspect when executing the computer program.
In a fourth aspect, the present invention further provides a readable storage medium, on which a program or instructions are stored, and when the program or instructions are executed by a processor, the program or instructions implement the steps in the content playing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the content playing method according to the first aspect.
In the embodiments of the present application, a user may, according to his or her own needs and preferences, add to a playlist in advance the target information (e.g., the target operation information or the uniform resource locator information corresponding to the target content) for text content to be read, whether in the same application or in different applications. When the user wants to obtain the information in that text content by listening, the electronic device can obtain the corresponding text content through the target information in the playlist, and then convert the characters in it into voice content online for playing, thereby providing the user with an instant audio resource and assisting the user in reading. In this way, the audiobooks available for the user to listen to are enriched, and they can cover as many of the books the user wants to read as possible.
Drawings
Fig. 1 is a schematic flow chart diagram of a content playing method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an AI assistant interface of an electronic device provided by an embodiment of the application;
FIG. 3 is a schematic diagram of a playlist provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an information adding interface provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a desktop interface of an electronic device provided by an embodiment of the present application;
FIG. 6 is a second schematic diagram of an information adding interface provided in the present application;
fig. 7 is a second schematic diagram of a playlist provided in an embodiment of the present application;
FIG. 8 is a third schematic diagram of an information adding interface provided by an embodiment of the present application;
fig. 9 is a third schematic diagram of a playlist provided in an embodiment of the present application;
FIG. 10 is one of the schematic diagrams of an article display interface provided by an embodiment of the present application;
FIG. 11 is a second schematic diagram of an article display interface provided in an embodiment of the present application;
FIG. 12 is a system framework diagram of an assistive reading function provided by embodiments of the present application;
fig. 13 is a schematic block diagram of a content playing apparatus provided in an embodiment of the present application;
FIG. 14 is a block diagram of an electronic device provided by an embodiment of the application;
fig. 15 is a second schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application can be practiced in sequences other than those illustrated or described herein; moreover, the terms "first", "second" and the like are generally used generically and do not limit the number of objects, e.g., a first object may be one object or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The content playing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flow chart of a content playing method provided in an embodiment of the present application. The content playing method is applied to the electronic equipment.
As shown in fig. 1, the content playing method may include:
step 101: target information in a playlist is obtained.
The target information mentioned here is added to the playlist in advance and is used to obtain the target content. The target information may include, but is not limited to: target operation information, or Uniform Resource Locator (URL) information (hereinafter, URL information) corresponding to the target content. The target operation information is operation information for acquiring the target content.
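As a rough illustration of the two kinds of target information described above, a playlist entry could be modeled along the following lines (a minimal sketch; the class and field names are illustrative, not taken from the patent):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TargetInfo:
    """One playlist entry: either a URL or a recorded operation path."""
    title: str
    url: Optional[str] = None  # URL information corresponding to the target content
    operation_path: List[str] = field(default_factory=list)  # target operation information

    def is_valid(self) -> bool:
        # An entry should carry exactly one way of reaching the target content.
        return (self.url is not None) != (len(self.operation_path) > 0)

playlist: List[TargetInfo] = []
# URL-based entry (Example one below)
playlist.append(TargetInfo(title="article a", url="https://example.com/article-a"))
# operation-path entry (Example two below)
playlist.append(TargetInfo(
    title="article b",
    operation_path=["launch application A", "open search", "open article b"],
))
```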
Step 102: and acquiring target content corresponding to the target information according to the target information.
In this step, the electronic device may obtain the corresponding target content according to the target information in the playlist. For example, in the case that the acquired information is URL information, the corresponding target content may be found according to the URL information.
The target content described here includes text content. The text content may include, but is not limited to: electronic books, electronic documents (e.g., electronic notes, notepads, letters, and electronic versions of paper documents), news, articles, and other electronic reading material.
Step 103: and converting the text content in the target content into voice content for playing.
In this step, when the electronic device determines that the obtained target content includes text content (i.e., the text content to be read), it may convert that text content into voice content and play it, based on a text recognition technology (such as Optical Character Recognition, OCR) and a speech synthesis technology. In other words, the text content is read aloud, so that the user can obtain the information in it by listening when it is inconvenient to read. It can be understood that the conversion may be done in real time, segment by segment, while playing; alternatively, all of the text content may be converted first and the complete voice played afterwards. The specific approach can be set according to actual requirements.
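The real-time, segment-by-segment mode described above could be sketched as follows, with a stand-in for the actual speech-synthesis engine (the patent does not prescribe a specific engine or API; `synthesize` here is a placeholder):

```python
import re
from typing import Iterator

def split_sentences(text: str) -> Iterator[str]:
    """Split text into sentence-sized chunks for incremental synthesis."""
    for chunk in re.split(r"(?<=[.!?])\s+", text.strip()):
        if chunk:
            yield chunk

def synthesize(sentence: str) -> bytes:
    """Placeholder for a real speech-synthesis engine call."""
    return sentence.encode("utf-8")  # stand-in for audio data

def play_streaming(text: str) -> int:
    """Convert and play sentence by sentence; returns the number of segments played."""
    played = 0
    for sentence in split_sentences(text):
        audio = synthesize(sentence)
        # a real implementation would hand `audio` to the audio output device here
        played += 1
    return played
```

The alternative mode (convert everything first, then play) would simply join all synthesized segments before starting playback.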
With the content playing method provided by the embodiments of the present application, a user can, according to his or her own needs and preferences, add to a playlist in advance the target information corresponding to text content to be read, whether in the same application or in different applications. When the user wants to learn the information in that text content by listening, the electronic device can obtain the corresponding text content through the target information in the playlist, and then convert the characters in it into voice online for playing, thereby providing the user with an instant audio resource and assisting the user in reading. In this way, the audiobooks available for the user to listen to are enriched, and they can cover as many of the books the user wants to read as possible.
In addition, in the embodiments of the present application, the electronic device converts the text content to be read into voice content locally and in real time, so no offline conversion by a vendor is required in advance.
Optionally, before step 101 (acquiring the target information in the playlist), the content playing method further includes: adding the target information to the playlist.
In the embodiments of the present application, for target content that includes text content whose information the user wants to obtain in audio form, the user may add the corresponding target information to the playlist in advance.
As one implementation, an auxiliary reading function may be added to the electronic device; after this function is triggered, the playlist is displayed so that the user can add the target information to it.
A corresponding function control may be set for the auxiliary reading function, and the playlist is invoked and displayed by triggering this control. The function control may be a virtual control, a physical control, or a combination of the two; the specific form can be set according to actual requirements. When the function control is a virtual control, it can be placed in a shortcut-center interface of the electronic device or in an Artificial Intelligence (AI) assistant interface. Fig. 2 illustrates an AI assistant interface of an electronic device, in which the function control 201 is displayed at the bottom. After the user triggers the function control, a playlist interface as shown in fig. 3 may be displayed. In the playlist interface, an add control 301 for adding target information is displayed; the target information can be added by triggering the add control 301.
In order to better understand the adding process of the target information, two embodiments are taken as examples and further explained below.
Example one
The target information is URL information corresponding to the target content.
A document or other resource on the World Wide Web can be addressed through its URL. Therefore, when the target content is data on the World Wide Web, before adding the target information to the playlist, the content playing method may further include: acquiring the uniform resource locator information (i.e., the URL information) of the target content. Adding the target information to the playlist then includes: adding the acquired uniform resource locator information to the playlist.
The URL information can be acquired by copying and pasting, or automatically: after the user opens the display interface of the target content, the electronic device detects whether the target content has corresponding URL information and, if so, acquires it and adds it to the playlist. The following describes the process of acquiring URL information, taking the copy-and-paste approach as an example.
For example, after the user triggers the add control 301 in fig. 3, an information adding interface is displayed, as shown in fig. 4. The information adding interface displays a "copy a URL link" control; after the user triggers it, the display switches from the information adding interface to the desktop interface of the electronic device shown in fig. 5, so that the user can search the desktop for the target content (assume the target content is an article a). At this time, a floating control 501 may be displayed in the display interface, through which the user can confirm that copying is complete or cancel the copy. After finding article a, the user copies the URL information corresponding to it. Once copying is done, the floating control 501 may offer the options "confirm copy is complete" and "cancel copy", and the user can choose as required. Assuming the user triggers "confirm copy is complete", the display automatically returns to the information adding interface, where the original "copy a URL link" control has changed into a "paste URL link" control, as shown in fig. 6. After the user triggers the "paste URL link" control, the paste operation is complete, i.e., the target information has been added; the display automatically switches from the information adding interface to the playlist interface and shows the result, as shown in fig. 7.
Later, when the user wants to obtain the information in article a in audio form through the playlist, the electronic device can open the article in the background through the URL information corresponding to article a in the playlist, and convert the content of article a into voice for playing.
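A rough sketch of this URL path, assuming the pasted link is an http(s) URL and the fetched target content is HTML (the helper names are illustrative; only Python standard-library facilities are used, and the actual fetch over the network is omitted):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

def looks_like_url(candidate: str) -> bool:
    """Basic check that a pasted string is a usable http(s) URL."""
    parts = urlparse(candidate)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

class TextExtractor(HTMLParser):
    """Collects visible text from fetched HTML, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def extract_text(html: str) -> str:
    """Return the readable text of an HTML page, ready for speech synthesis."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```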
Example two
The target information is target operation information corresponding to the target content.
In this embodiment of the application, before adding the target information to the playlist, the content playing method may further include: determining the target operation information with which the user acquires the target content. The step of adding the target information to the playlist may then include: adding that target operation information to the playlist.
The target operation information can be obtained while the user acquires the target content. It may include target operation path information: for example, which application the user launches to acquire the target content, which interfaces are opened after the application is launched, and which controls are triggered in those interfaces, so that the operation path for acquiring the target content is recorded.
By recording the target operation information with which the user acquires the target content, and acquiring the target content through that operation information, the method has a wider range of application: the target content can be acquired even when it has no URL information.
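Conceptually, the target operation information is a replayable sequence of UI steps. A minimal sketch, under the assumption that each step can be reduced to an (action, target) pair (the patent does not specify a representation; real recording would hook into the platform's UI event system):

```python
from typing import Callable, Dict, List

class OperationRecorder:
    """Records the steps a user takes to reach target content, then replays them."""

    def __init__(self):
        self.steps: List[Dict[str, str]] = []

    def record(self, action: str, target: str) -> None:
        """Append one recorded UI step to the operation path."""
        self.steps.append({"action": action, "target": target})

    def replay(self, perform: Callable[[str, str], None]) -> int:
        """Replay every recorded step via `perform`; returns the step count."""
        for step in self.steps:
            perform(step["action"], step["target"])
        return len(self.steps)

# Example path corresponding to opening article b in application A:
recorder = OperationRecorder()
recorder.record("launch_app", "application A")
recorder.record("tap", "search box")
recorder.record("open", "article b")
```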
Optionally, while the user acquires the target content, the electronic device may record the target operation information and simultaneously record the screen during the operation to obtain a dynamic image; after the screen recording ends, the dynamic image is displayed so that the user can check whether the acquisition process contained any erroneous operation, and thereby whether the recorded target operation information is wrong. The dynamic image may be a screen-recording video, or a dynamic picture obtained from a screen-recording video.
Optionally, for the case where the user determines from the dynamic image that the target operation information is correct, an embodiment of the present application provides the following implementation:
In this embodiment, the step of determining the target operation information with which the user acquires the target content may include:
when a first input by the user on a first control in a target interface is received, recording, in response to the first input, first operation information of the user acquiring the target content, and displaying a first dynamic image; and, when a second input by the user is received, determining the first operation information as the target operation information in response to the second input.
The first dynamic image may be an image of the operation process by which the user acquires the target content, and the first operation information may be the operation information recorded during that process. The first dynamic image corresponds to the first operation information, and whether the first operation information was recorded incorrectly can be judged from the first dynamic image. The first dynamic image may be obtained by screen recording, e.g. as a screen-recording video, or as a dynamic picture derived from a screen-recording video.
The target interface may be an information adding interface shown in fig. 4, and the first control may be an "intelligent recognition user operation track" control shown in fig. 4.
When the user, by checking the first dynamic image, determines that the operation process of acquiring the target content is correct, the recorded first operation information is correct; through the second input, the first operation information corresponding to the first dynamic image can then be taken as the target operation information and added to the playlist.
Optionally, for the case where the user determines from the dynamic image that the target operation information is incorrect, an embodiment of the present application provides the following implementation:
In this embodiment, the step of determining the target operation information with which the user acquires the target content may include:
when a first input by the user on a first control in a target interface is received, recording, in response to the first input, first operation information of the user acquiring the target content, and displaying a first dynamic image; when a third input by the user is received, re-recording, in response to the third input, second operation information of the user acquiring the target content, and displaying a second dynamic image; and, when a second input by the user is received, determining the second operation information as the target operation information in response to the second input.
The first dynamic image is an image of the operation process by which the user acquires the target content, and the first operation information is the operation information recorded during that process. The first dynamic image corresponds to the first operation information, and whether the first operation information was recorded incorrectly can be judged from the first dynamic image.
The second dynamic image may be an image, captured by screen recording, of the operation process when the user acquires the target content again, and the second operation information is the operation information recorded during that repeated acquisition. The second dynamic image corresponds to the second operation information, and whether the second operation information was recorded incorrectly can be judged from the second dynamic image. Both dynamic images may be obtained by screen recording, e.g. as a screen-recording video or as a dynamic picture derived from one.
The target interface may be an information adding interface shown in fig. 4, and the first control may be an "intelligent recognition user operation track" control shown in fig. 4.
When the user, by checking the first dynamic image, determines that the operation process of acquiring the target content contained an erroneous operation, the recorded first operation information is wrong. The user can then perform the acquisition operation again via the third input; during this repeated operation, second operation information is recorded and a corresponding second dynamic image is obtained.
When the user, by checking the second dynamic image, determines that the repeated operation process is correct, the second operation information is correct; through the second input, the second operation information corresponding to the second dynamic image can then be taken as the target operation information and added to the playlist.
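The record/review/confirm/re-record flow described in the last few paragraphs can be summarized as a small state machine (an illustrative sketch; the state names are not from the patent):

```python
class RecordingSession:
    """State machine for the record -> review -> confirm/re-record flow."""

    def __init__(self):
        self.state = "idle"
        self.operation_info = None

    def start_recording(self):
        # first input: start recording operation info and the screen
        self.state = "recording"

    def stop_recording(self, operation_info):
        # screen recording ends; user reviews the dynamic image
        self.operation_info = operation_info
        self.state = "reviewing"

    def confirm(self):
        # second input: accept the recorded info as the target operation information
        assert self.state == "reviewing"
        self.state = "confirmed"
        return self.operation_info

    def re_record(self):
        # third input: discard the erroneous recording and record again
        assert self.state == "reviewing"
        self.operation_info = None
        self.state = "recording"
```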
To better understand the above process of determining, based on the dynamic image, whether the operation information is erroneous, an example is described below.
For example, after the user triggers the add control 301 in fig. 3, an information adding interface is displayed, as shown in fig. 4. The information adding interface displays an "intelligently identify user operation track" control; after the user triggers it, the display switches from the information adding interface to the desktop interface of the electronic device (e.g., a mobile phone) shown in fig. 5, so that the user can search the desktop for the target content (assume the target content is an article b in an application A). At this time, a floating control 501 may be displayed in the display interface, through which the user can confirm that recognition is complete or request re-recognition. In addition, once the display has switched to the desktop interface, the screen recording function of the electronic device is started and screen recording begins.
The user finds and triggers the application icon corresponding to application A in the desktop interface. After application A is launched, the user searches for article b in it and opens it once found. During this process, the electronic device records the operation information of the user opening article b. After article b is opened, the user can touch the floating control 501, which may offer the options "confirm recognition is complete" and "re-recognize"; the user chooses as required. Assuming the user triggers "confirm recognition is complete", screen recording stops and the display automatically returns to the information adding interface.
At this point an image display window 801 may be displayed in the information adding interface, as shown in fig. 8, in which the screen-recording video can be played for the user to review. By watching the video, the user can judge whether the recorded process of opening article b contains any error. If the displayed process is correct, the operation information recorded by the electronic device is correct; the user can trigger a "confirm" control in the display interface, and after receiving the confirmation instruction, the electronic device adds the recorded operation information to the playlist, switches the display from the information adding interface to the playlist interface, and shows the result, as shown in fig. 9. If the user considers the process to be wrong, he or she can trigger a "correction" control in the display interface; after receiving the correction instruction, the electronic device returns the display to the desktop interface, and while the user opens article b again, it starts the screen recording function to capture the new operation process and records the new operation information. After the screen recording ends, the new video is displayed for the user to check. Once the user confirms that the process is correct and triggers the "confirm" control, the electronic device adds the newly recorded operation information to the playlist, switches the display from the information adding interface to the playlist interface, and shows the result, as shown in fig. 9.
Optionally, in the foregoing example, the operation of reopening article b may start from a virtual desktop interface instead of the real desktop interface of the electronic device. For example, after the user triggers the correction control, the electronic device displays a virtual desktop interface that looks identical to, but is distinct from, the real desktop interface. After the user triggers the application icon corresponding to application program A in the virtual desktop interface, the display interface of application program A is entered, and article b can then be found.
It should be noted that the foregoing is only intended to aid understanding of the technical solutions provided in the embodiments of the present application and does not limit them. For example, the URL information and the operation information may be acquired directly after the function control 201 of the reading assistance function is triggered. For example, after the function control 201 is triggered, the display interface switches to the desktop interface of the electronic device, the floating control 501 is displayed, and the screen recording function is started. The user then performs the operation of opening the target content, while the electronic device records the operation information and captures the operation process image through the screen recording function. If the target content has corresponding URL information, the user copies the URL information; if not, the user simply opens the target content. The user can confirm that the operation is completed through the floating control 501, and after receiving the corresponding instruction, the electronic device switches the display interface to the playlist interface. After the user triggers the adding control 301 in the playlist interface, the display interface switches from the playlist interface to the information adding interface. Two options, "paste URL link" and "intelligently identify user operation track", are displayed in the information adding interface, and the user can select one according to actual requirements.
If the user copied the URL information earlier, the user may select the "paste URL link" option to add the URL information; the previously recorded operation process image and operation information are then deleted. If the user did not copy the URL information, the user may select the "intelligently identify user operation track" option; the screen-recorded video is displayed in the image display window 801 for the user to view, and the user may add the recorded operation information to the playlist or update it as appropriate. After one item of URL information or operation information is added, the display can return to the desktop interface so that URL information or operation information of other content to be read can be added.
Optionally, after the target information is added, a page turning mode of the target content may be set in the playlist, so that after acquiring the target content through the target information, the electronic device can turn its pages according to the preset page turning mode. For example, if the page turning mode of the target content in the application program is left-right, the page turning mode set in the playlist is also left-right. When the electronic device turns pages according to the mode set by the user and detects that the pages can no longer be turned, it considers that the end of the target content has been reached. The page turning mode may include turning pages up and down, turning pages left and right, and the like.
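The end-of-content check described above can be sketched as a simple loop. The `PagedContent` class and its method names below are illustrative stand-ins for whatever reader interface an implementation exposes, not part of any real API:

```python
class PagedContent:
    """Minimal stand-in for an article rendered as discrete pages."""

    def __init__(self, pages):
        self.pages = pages
        self.index = 0

    def current_page_text(self):
        return self.pages[self.index]

    def turn_page(self):
        # Advance one page; past the last page, the view stays put,
        # which is how the loop below detects the end of the content.
        if self.index < len(self.pages) - 1:
            self.index += 1


def read_all_pages(content):
    """Collect text page by page; when a page turn no longer changes
    the page, treat that as reaching the end of the target content."""
    collected = [content.current_page_text()]
    while True:
        before = content.index
        content.turn_page()
        if content.index == before:  # could not turn further: end reached
            break
        collected.append(content.current_page_text())
    return collected
```

In a real device the "did the page change" test would compare screen snapshots rather than an index, but the loop structure is the same.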
Optionally, the target content further includes: at least one of audio content and video content.
When the target content is acquired according to the target information, whether the acquired target content includes audio content and/or video content may be detected. If corresponding audio and/or video content has been preset, it can be acquired and played.
For example, some published articles have not only a text version but also an audio version. As shown in fig. 10 and 11, 1001 and 1101 indicate the text content of the articles, and 1002 and 1102 indicate that corresponding audio content is also preset in the articles. The preset audio content may be audio recorded manually from the text, or audio converted from the text by speech synthesis. In the embodiment of the application, if it is detected that the target content includes audio content, the audio content may be played.
Optionally, when detecting whether the target content includes audio content and/or video content, the article page may be analyzed by image recognition. If a graphic representing audio and/or video is recognized, such as graphic 10021 representing audio in fig. 10 or graphic 11021 representing audio in fig. 11, the target content is considered to include audio and/or video content. If no such graphic is recognized, the target content is considered not to include audio or video content. Of course, the determination may also be performed in other realizable manners, which may be set according to actual requirements.
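The decision step after image recognition reduces to a set-membership test. The label names below are assumptions about what a recognition pass might output for a page, not a real detector's vocabulary:

```python
def contains_audio_or_video(detected_labels):
    """Decide from (assumed) image-recognition labels whether the
    article page embeds audio or video content."""
    media_graphics = {"audio_icon", "video_icon", "play_button"}
    return bool(media_graphics & set(detected_labels))
```

The actual recognition of the graphics (e.g. the play-button shapes 10021 and 11021) would be done by a template-matching or learned detector; only the final presence check is sketched here.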
Optionally, step 103: converting the text content in the target content into the voice content for playing, which may include: and converting the text content in the target content into voice content with preset tone for playing.
Wherein the preset timbre is selected by a user from at least two timbres.
Timbre refers to the distinctive character of a sound, reflected in its waveform; vibrating bodies of different materials and structures produce sounds with different timbres. For example, a piano, a violin, and the human voice each sound different, and every person's voice is distinct.
In the embodiment of the application, to meet users' personalized requirements, voices with different timbres can be provided for the user to select. Optionally, voices with different timbres may be modeled on different human voices. For example, a voice with a corresponding timbre may be created from the voice of a popular public figure (such as a celebrity) to make reading more enjoyable.
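A minimal sketch of timbre selection, assuming a registry of voice presets. The preset names and the pitch/rate parameters below are purely illustrative; a real implementation would pass equivalent parameters to its TTS engine:

```python
AVAILABLE_TIMBRES = {  # illustrative voice presets
    "standard": {"pitch": 1.0, "rate": 1.0},
    "deep_male": {"pitch": 0.8, "rate": 0.95},
    "celebrity_a": {"pitch": 1.1, "rate": 1.05},  # celebrity-styled voice
}


def synthesize(text, timbre="standard"):
    """Build a speech request for the chosen timbre; a real
    implementation would hand these parameters to a TTS engine."""
    if timbre not in AVAILABLE_TIMBRES:
        raise ValueError(f"unknown timbre: {timbre}")
    params = AVAILABLE_TIMBRES[timbre]
    return {"text": text, **params}
```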
Optionally, the playlist according to the embodiment of the present application may further include acquisition information (e.g., operation information or URL information for acquiring the content to be played) corresponding to at least one item of content to be played other than the target content. The target content and the other content to be played may belong to the same application program or to different application programs. Adding the acquisition information of the content to be played to the playlist in advance facilitates centralized management, makes switching between items of content quicker and simpler, saves switching time, and reduces the probability that the user forgets content that he or she wanted to play. The acquisition information of other content to be played may be added in the same manner as described above for the target content.
Each item of other content to be played may itself be of any type. For example, a content a to be played contained in the other content may be text content, audio content, video content, or content including at least two of text, audio, and video. This enriches the types of content whose acquisition information can be added to the playlist and satisfies more of the user's playing requirements.
Optionally, when the acquisition information of at least two items of content to be played has been added to the playlist, each item of content can be acquired automatically according to a preset sequence, so that the content to be played is switched automatically. The user does not need to switch manually, which is convenient for the user and helps maintain listening continuity.
The preset sequence may be the order in which the acquisition information items are arranged in the playlist, a random order, an order set by the user, or the like.
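The three ordering options just listed can be sketched as follows; the function and mode names are illustrative:

```python
import random


def playback_order(entries, mode="listed", user_order=None):
    """Return the order in which playlist entries will be acquired.
    'listed' follows the playlist arrangement, 'random' shuffles,
    and 'user' applies an explicit index order set by the user."""
    if mode == "listed":
        return list(entries)
    if mode == "random":
        shuffled = list(entries)
        random.shuffle(shuffled)
        return shuffled
    if mode == "user":
        return [entries[i] for i in user_order]
    raise ValueError(f"unknown mode: {mode}")
```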
Optionally, when playback follows the arrangement order of the acquisition information in the playlist, the arrangement order may also be made adjustable so that the user can adapt the playing order to his or her own needs. For example, a position adjustment control 302 may be provided for each item of acquisition information, as shown in fig. 7 and 9; dragging the position adjustment control 302 adjusts the arrangement order. In addition, to allow the user to delete acquisition information, a deletion control 303 may be provided for each item, as shown in fig. 7 and 9. Further, to let the user control the playing progress, a play control, a pause control, a playing progress bar, and the like may be added to the playlist.
Optionally, to help the user know which content to be played an item of acquisition information belongs to, information related to the content may be acquired and presented in the playlist. For example, a brief information display box may show the name of the application program where the content is located, the name of the content, the segment within the content, and other information.
Optionally, in addition to URL information and operation track information, the acquisition information added to the playlist may be keyword information related to the content to be played, such as an application name and an article name through which the target content can be found.
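The three kinds of acquisition information can be dispatched on as sketched below; the field names and the returned action strings are illustrative assumptions describing what a real implementation would do, not its actual interface:

```python
def acquire(info):
    """Dispatch on the kind of acquisition information stored in a
    playlist entry: a URL, a recorded operation track, or keywords."""
    kind = info["type"]
    if kind == "url":
        return "fetch " + info["url"]
    if kind == "operation_track":
        return "replay " + " -> ".join(info["steps"])
    if kind == "keywords":
        # e.g. search by application name and article name
        return "search " + " ".join(info["keywords"])
    raise ValueError(f"unknown acquisition type: {kind}")
```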
Finally, in order to better understand the reading aid function provided in the embodiments of the present application, the system framework of the reading aid function is explained.
Fig. 12 illustrates the system framework of the reading aid function, which may include: a user operation recording module 1201, a configuration module 1202, an acquisition module 1203, a text recognition module 1204, a speech synthesis module 1205, and a play module 1206.
The user operation recording module 1201 is used for recording operation information of a user on the electronic device. For example, operation information of the user acquiring the target content, selection operation information of the user for the page turning manner, and the like are recorded. The user operation recording module 1201 may send the recorded user operation information to the configuration module 1202, and the configuration module 1202 generates the configuration information.
The configuration module 1202 may receive the user operation information sent by the user operation recording module 1201, and generate configuration information in the playlist according to the user operation information, such as acquisition information and page turning mode information. Typically, such configuration information may be stored in the form of a configuration table.
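A configuration table of this kind might be modeled as follows; all field names and default values are illustrative assumptions, not the patent's actual schema:

```python
from dataclasses import dataclass


@dataclass
class ConfigEntry:
    """One row of the assumed configuration table generated by the
    configuration module 1202."""
    acquisition: dict                  # URL, operation-track, or keyword info
    page_turn_mode: str = "left_right"
    timbre: str = "standard"


# A two-row example table the acquisition module could read.
table = [
    ConfigEntry({"type": "url", "url": "https://example.com/article-b"}),
    ConfigEntry({"type": "operation_track",
                 "steps": ["open app A", "open article b"]},
                page_turn_mode="up_down"),
]
```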
The obtaining module 1203 may read the configuration table generated by the configuration module 1202 and acquire the content to be played according to the acquisition information recorded in the table. In addition, the obtaining module can control the target content to turn pages according to the page turning mode defined in the configuration table.
In the case that the target content includes text content, the text recognition module 1204 may scan the page provided by the acquisition module 1203 each time a page is turned and recognize the body text. In addition, the text recognition module 1204 can also recognize illustrations in the text content and decide, according to the system setting, whether to filter out text inside illustrations. Natural Language Processing (NLP) technology may be applied in the text recognition module 1204 to help improve recognition accuracy, correct text when recognition errors occur, and the like.
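The illustration-filtering choice can be sketched as follows, assuming the recognizer tags each page block with a kind; this block structure is an illustrative representation, not a real OCR output format:

```python
def recognize_page(blocks, filter_illustration_text=True):
    """Assemble the body text from recognized page blocks, optionally
    dropping text found inside illustrations (per the system setting)."""
    body = []
    for block in blocks:
        if block["kind"] == "illustration" and filter_illustration_text:
            continue  # skip text that belongs to an illustration
        body.append(block["text"])
    return " ".join(body)
```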
The speech synthesis module 1205 may receive the text recognition result from the text recognition module 1204 and generate speech from it. Further artificial intelligence technologies can be applied in the speech synthesis module 1205 to make the synthesized speech more pleasant and as close as possible to the effect of a real human voice reading aloud.
For the playing module 1206, in case that the target content is a text content, the speech generated by the speech synthesis module 1205 may be received and played; in the case where the target content includes audio content, the audio content acquired by the acquisition module 1203 may be played.
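The module chain of fig. 12 can be sketched as a single dispatch function; all callables below are illustrative stubs standing in for the real modules:

```python
def play_target(entry, acquire, recognize_text, synthesize, play):
    """Chain the framework modules: acquisition -> text recognition ->
    speech synthesis -> playback. Preset audio, when present, is
    played directly without synthesis."""
    content = acquire(entry)
    if content.get("audio") is not None:
        return play(content["audio"])          # play preset audio as-is
    text = recognize_text(content["page"])     # scan the rendered page
    return play(synthesize(text))              # TTS, then play
```

The branch mirrors the playing module 1206 described above: synthesized speech for text content, the acquired audio for audio content.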
It should be noted that the modules described above may be physical modules or virtual modules, and the specific situations may be set according to actual needs.
In summary, with the content playing method provided in the embodiment of the present application, a user may add, according to his or her own needs and preferences, target information (e.g., target operation information or uniform resource locator information corresponding to target content) of text content to be read in the same application program or in different application programs to a playlist in advance. When the user wants to obtain the information in the text content by listening, the electronic device can obtain the corresponding text content through the target information in the playlist and then convert the characters in it into voice content online for playing, thereby providing instant audio resources and assisting the user in reading. In this way, the audio books available to the user are enriched and can cover, as far as possible, the books the user wants to read.
It should be noted that, in the content playing method provided in the embodiment of the present application, the execution main body may be a content playing device, or a control module in the content playing device for executing the content playing method. In the embodiment of the present application, a content playing device is taken as an example to execute a content playing method, and a device of the content playing method provided in the embodiment of the present application is described.
Fig. 13 is a schematic structural diagram of a content playing apparatus according to an embodiment of the present application.
As shown in fig. 13, the content playback apparatus includes:
a first obtaining module 1301, configured to obtain target information in the playlist.
Wherein the target information includes: target operation information or uniform resource locator information corresponding to the target content. The target operation information is operation information for acquiring the target content.
A second obtaining module 1302, configured to obtain, according to the target information obtained by the first obtaining module, target content corresponding to the target information.
Wherein the target content comprises textual content.
And the playing module 1303 is configured to convert the text content in the target content into voice content and play the voice content.
Optionally, the target content further includes: at least one of audio content and video content.
Optionally, the content playing apparatus further includes:
and the adding module is used for adding the target information to the playlist.
Optionally, the content playing apparatus further includes:
the third acquisition module is used for determining the target operation information of the target content acquired by the user; or obtaining the uniform resource locator information of the target content.
Wherein the target operation information includes: target operation path information.
The adding module comprises:
a unit configured to add, to the playlist, the target operation information of the target content determined by the third acquisition module;
or, a unit configured to add, to the playlist, the uniform resource locator information of the target content acquired by the third acquisition module.
Optionally, the third obtaining module includes:
the first processing unit is used for responding to a first input when the first input of a user to a first control in a target interface is received, recording first operation information of the target content acquired by the user, and displaying a first dynamic image; the first dynamic image is an operation process image when a user acquires the target content, and the first dynamic image corresponds to the first operation information;
a second processing unit, configured to determine, in response to a second input by a user, the first operation information as the target operation information, in a case where the second input is received.
Optionally, the third obtaining module includes:
the third processing unit is used for responding to a first input under the condition that the first input of a user to a first control in a target interface is received, recording first operation information of the target content acquired by the user, and displaying a first dynamic image; the first dynamic image is an operation process image when a user acquires the target content, and the first dynamic image corresponds to the first operation information;
the fourth processing unit is used for responding to a third input when the third input of the user is received, re-recording second operation information of the user for acquiring the target content and displaying a second dynamic image; the second dynamic image is an operation process image when the user reacquires the target content, and the second dynamic image corresponds to the second operation information;
a fifth processing unit, configured to, in a case where a second input by a user is received, determine the second operation information as the target operation information in response to the second input.
Optionally, the playing module 1303 includes:
and the playing unit is used for converting the text content in the target content into voice content with preset tone for playing.
Wherein the preset timbre is selected by a user from at least two timbres.
In the embodiment of the application, a user may add, in advance and according to his or her own needs and preferences, target information (e.g., target operation information or uniform resource locator information corresponding to target content) of text content to be read in the same application program or in different application programs to a playlist. When the user wants to obtain the information in the text content by listening, the electronic device can obtain the corresponding text content through the target information in the playlist and then convert the characters in it into voice content online for playing, thereby providing instant audio resources and assisting the user in reading. In this way, the audio books available to the user are enriched and can cover, as far as possible, the books the user wants to read.
The content playing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The content playing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The content playing device provided in the embodiment of the present application can implement each process implemented by the content playing method embodiment shown in fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 14, an electronic device 1400 is further provided in this embodiment of the present application, and includes a processor 1401, a memory 1402, and a program or an instruction stored in the memory 1402 and executable on the processor 1401, where the program or the instruction is executed by the processor 1401 to implement each process of the foregoing content playing method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 15 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1500 includes, but is not limited to: a radio frequency unit 1501, a network module 1502, an audio output unit 1503, an input unit 1504, a sensor 1505, a display unit 1506, a user input unit 1507, an interface unit 1508, a memory 1509, and a processor 1510.
Those skilled in the art will appreciate that the electronic device 1500 may also include a power supply (e.g., a battery) for powering the various components, which may be logically coupled to the processor 1510 via a power management system so as to manage charging, discharging, power consumption, and other functions. The electronic device structure shown in fig. 15 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange components differently, which is not repeated here.
The processor 1510 is configured to obtain target information in a playlist, and obtain target content corresponding to the target information according to the target information; an audio output unit 1503, configured to convert the text content in the target content into a voice content for playing.
Wherein the target information includes: target operation information or uniform resource locator information corresponding to the target content; the target operation information is operation information for acquiring the target content. Wherein the target content comprises textual content.
In the embodiment of the application, a user may add, in advance and according to his or her own needs and preferences, target information (e.g., target operation information or uniform resource locator information corresponding to target content) of text content to be read in the same application program or in different application programs to a playlist. When the user wants to obtain the information in the text content by listening, the electronic device can obtain the corresponding text content through the target information in the playlist and then convert the characters in it into voice content online for playing, thereby providing instant audio resources and assisting the user in reading. In this way, the audio books available to the user are enriched and can cover, as far as possible, the books the user wants to read.
Optionally, the processor 1510 is further configured to add the target information to the playlist.
Optionally, the processor 1510 is further configured to determine that a user acquires target operation information of the target content, or acquires the uniform resource locator information of the target content; and adding the target operation information of the target content acquired by the user to the playlist, or adding the uniform resource locator information of the target content to the playlist.
Wherein the target operation information includes: target operation path information.
Alternatively, in a case where the user input unit 1507 receives a first input of a first control in the target interface from the user, the processor 1510 is further configured to record first operation information of acquiring the target content from the user in response to the first input, and control the display unit 1506 to display a first dynamic image.
In a case where the user input unit 1507 receives a second input by the user, the processor 1510 is further configured to determine the first operation information as the target operation information in response to the second input.
The first dynamic image is an operation process image when the user acquires the target content, and the first dynamic image corresponds to the first operation information.
Alternatively, in a case where the user input unit 1507 receives a first input of a first control in the target interface from the user, the processor 1510 is further configured to record first operation information of acquiring the target content from the user in response to the first input, and control the display unit 1506 to display a first dynamic image.
In a case where the user input unit 1507 receives a third input from the user, the processor 1510 is further configured to re-record second operation information for the user to acquire the target content in response to the third input, and control the display unit 1506 to display a second dynamic image.
In a case where the user input unit 1507 receives a second input by the user, the processor 1510 is further configured to determine the second operation information as the target operation information in response to the second input.
The first dynamic image is an operation process image when the user acquires the target content, and the first dynamic image corresponds to the first operation information.
And the second dynamic image is an operation process image when the user acquires the target content again, and the second dynamic image corresponds to the second operation information.
It should be understood that in the embodiment of the present application, the input unit 1504 may include a graphics processing unit (GPU) 15041 and a microphone 15042; the graphics processor 15041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 1506 may include a display panel 15061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1507 includes a touch panel 15071, also referred to as a touch screen, and other input devices 15072. The touch panel 15071 may include a touch detection device and a touch controller. Other input devices 15072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1509 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 1510 may integrate an application processor, which primarily handles the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1510.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above content playing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the foregoing content playing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A method for playing content, comprising:
determining target operation information of a user for acquiring target content; wherein the target operation information comprises: target operation path information;
acquiring target information in a playlist;
acquiring, according to the target information, the target content corresponding to the target information; wherein the target content comprises text content;
converting the text content in the target content into voice content for playing;
wherein the target information includes: target operation information corresponding to the target content; the target operation information is operation information for acquiring the target content;
wherein the determining target operation information of the user for acquiring the target content comprises:
in a case where a first input of the user on a first control in a target interface is received, recording, in response to the first input, first operation information of the user for acquiring the target content, and displaying a first dynamic image; wherein the first dynamic image is an image of an operation process of the user acquiring the target content, and the first dynamic image corresponds to the first operation information;
in a case where a second input of the user is received, determining, in response to the second input, the first operation information as the target operation information.
2. The content playing method according to claim 1, wherein the target content further comprises: at least one of audio content and video content.
3. The content playing method according to claim 1, wherein before the acquiring target information in a playlist, the content playing method further comprises:
adding the target information to the playlist.
4. The content playing method according to claim 3, wherein the adding the target information to the playlist includes:
adding the target operation information of the user for acquiring the target content to the playlist.
5. The content playing method according to claim 1, wherein after the step of, in a case where a first input of the user on a first control in a target interface is received, recording, in response to the first input, first operation information of the user for acquiring the target content, and displaying a first dynamic image, the first dynamic image being an image of an operation process of the user acquiring the target content and corresponding to the first operation information, the content playing method further comprises:
in a case where a third input of the user is received, re-recording, in response to the third input, second operation information of the user for acquiring the target content, and displaying a second dynamic image; wherein the second dynamic image is an image of an operation process of the user reacquiring the target content, and the second dynamic image corresponds to the second operation information;
in a case where the second input of the user is received, determining, in response to the second input, the second operation information as the target operation information.
6. The content playing method according to claim 1, wherein the converting the text content in the target content into a speech content for playing comprises:
converting the text content in the target content into voice content with a preset timbre for playing;
wherein the preset timbre is selected by a user from at least two timbres.
7. A content playback apparatus, comprising:
a first acquisition module, configured to acquire target information in a playlist;
a second acquisition module, configured to acquire, according to the target information acquired by the first acquisition module, target content corresponding to the target information; wherein the target content comprises text content;
a playing module, configured to convert the text content in the target content into voice content for playing;
wherein the target information includes: target operation information corresponding to the target content; the target operation information is operation information for acquiring the target content;
the content playback apparatus further includes:
a third acquisition module, configured to determine target operation information of a user for acquiring the target content; wherein the target operation information comprises: target operation path information;
wherein the third acquisition module comprises:
a first processing unit, configured to, in a case where a first input of the user on a first control in a target interface is received, record, in response to the first input, first operation information of the user for acquiring the target content, and display a first dynamic image; wherein the first dynamic image is an image of an operation process of the user acquiring the target content, and the first dynamic image corresponds to the first operation information;
a second processing unit, configured to, in a case where a second input of the user is received, determine, in response to the second input, the first operation information as the target operation information.
8. An electronic device, comprising: a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the content playing method according to any one of claims 1 to 6.
9. A readable storage medium, wherein the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the content playing method according to any one of claims 1 to 6.
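As a purely illustrative sketch (not part of the claimed disclosure, and not the patentee's implementation), the flow recited in claims 1, 4, and 6 — recording a user's operation path, adding that operation information to a playlist, replaying the path to re-acquire the target content, and converting its text content to speech with a selected timbre — could be modeled as follows. All names here (`ContentPlayer`, `fetch_by_path`, `text_to_speech`, the toy `CONTENT_STORE`) are hypothetical, and the TTS engine is stubbed out:

```python
# Hypothetical model of the claimed flow: the playlist stores target operation
# (path) information rather than the content itself; playback replays each
# recorded path to re-acquire the target content, then speaks its text.

# Toy content store keyed by an operation path (e.g. app -> section -> article).
CONTENT_STORE = {
    ("news_app", "tech", "article_42"): {"text": "Hello world article."},
}

def fetch_by_path(path):
    """Re-acquire target content by replaying the recorded operation path."""
    return CONTENT_STORE[tuple(path)]

def text_to_speech(text, timbre="default"):
    """Stand-in for a real TTS engine; returns a descriptive string instead of audio."""
    return f"[{timbre} voice] {text}"

class ContentPlayer:
    def __init__(self):
        self.playlist = []  # each entry holds target operation (path) information

    def record_operation(self, path):
        """Claims 1 and 4: add the recorded operation information to the playlist."""
        self.playlist.append({"path": list(path)})

    def play_all(self, timbre="default"):
        """Claims 1 and 6: fetch each target content via its path and speak its text."""
        spoken = []
        for entry in self.playlist:
            content = fetch_by_path(entry["path"])
            spoken.append(text_to_speech(content["text"], timbre))
        return spoken

player = ContentPlayer()
player.record_operation(["news_app", "tech", "article_42"])
print(player.play_all(timbre="warm"))
```

The point of the sketch is the indirection in the target information: because the playlist stores how the content was obtained rather than a snapshot of it, playback re-acquires the (possibly updated) content each time, which is what distinguishes the claimed playlist from an ordinary media playlist.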
CN202010497027.1A 2020-06-03 2020-06-03 Content playing method and device, electronic equipment and readable storage medium Active CN111813301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010497027.1A CN111813301B (en) 2020-06-03 2020-06-03 Content playing method and device, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN111813301A CN111813301A (en) 2020-10-23
CN111813301B true CN111813301B (en) 2022-04-15

Family

ID=72847902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010497027.1A Active CN111813301B (en) 2020-06-03 2020-06-03 Content playing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111813301B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113010138B (en) * 2021-03-04 2023-04-07 腾讯科技(深圳)有限公司 Article voice playing method, device and equipment and computer readable storage medium
CN113364665B (en) * 2021-05-24 2023-10-24 维沃移动通信有限公司 Information broadcasting method and electronic equipment
CN113641839A (en) * 2021-07-13 2021-11-12 维沃移动通信(杭州)有限公司 Multimedia file searching method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011028636A2 (en) * 2009-09-01 2011-03-10 Seaseer Research And Development Llc Systems and methods for visual messaging
CN103377238A (en) * 2012-04-26 2013-10-30 腾讯科技(深圳)有限公司 Method and browser for processing webpage information
CN105302424A (en) * 2014-05-26 2016-02-03 周莹 Multi-dimensional dynamic mark recording and replaying method and system
CN106339160A (en) * 2016-08-26 2017-01-18 北京小米移动软件有限公司 Browsing interactive processing method and device
CN106341549A (en) * 2016-10-14 2017-01-18 努比亚技术有限公司 Mobile terminal audio reading apparatus and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107885826B (en) * 2017-11-07 2020-04-10 Oppo广东移动通信有限公司 Multimedia file playing method and device, storage medium and electronic equipment
CN108563760A (en) * 2018-04-17 2018-09-21 青岛海信电器股份有限公司 media playing method and device



Similar Documents

Publication Publication Date Title
CN111813301B (en) Content playing method and device, electronic equipment and readable storage medium
US20240107127A1 (en) Video display method and apparatus, video processing method, apparatus, and system, device, and medium
CN110634483A (en) Man-machine interaction method and device, electronic equipment and storage medium
CN106021496A (en) Video search method and video search device
US20120196260A1 (en) Electronic Comic (E-Comic) Metadata Processing
CN105139848B (en) Data transfer device and device
CN111524501A (en) Voice playing method and device, computer equipment and computer readable storage medium
JP2002082684A (en) Presentation system, presentation data generating method and recording medium
CN107643923B (en) Processing method of copy information and mobile terminal
WO2022228377A1 (en) Sound recording method and apparatus, and electronic device and readable storage medium
WO2022177509A1 (en) Lyrics file generation method and device
CN108256071A (en) Generation method, device, terminal and the storage medium of record screen file
CN111156441A (en) Desk lamp, system and method for assisting learning
CN110706679A (en) Audio processing method and electronic equipment
JPH10326176A (en) Voice conversation control method
CN116343771A (en) Music on-demand voice instruction recognition method and device based on knowledge graph
KR101124798B1 (en) Apparatus and method for editing electronic picture book
JP2006189799A (en) Voice inputting method and device for selectable voice pattern
JP6962849B2 (en) Conference support device, conference support control method and program
CN112653919A (en) Subtitle adding method and device
CN111739528A (en) Interaction method and device and earphone
JP2006195900A (en) Multimedia content generation device and method
CN114690992B (en) Prompting method, prompting device and computer storage medium
JP7288491B2 (en) Information processing device and control method
JP7183316B2 (en) Voice recording retrieval method, computer device and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant