CN107230397B - Parent-child audio generation and processing method and device for preschool education - Google Patents


Info

Publication number
CN107230397B
CN107230397B (application CN201710620037.8A)
Authority
CN
China
Prior art keywords
story text
user
audio
story
clip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710620037.8A
Other languages
Chinese (zh)
Other versions
CN107230397A (en)
Inventor
田城宇
廖思琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qiyu Beijing Culture Media Co ltd
Original Assignee
Qiyu Beijing Culture Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qiyu Beijing Culture Media Co ltd filed Critical Qiyu Beijing Culture Media Co ltd
Priority to CN201710620037.8A
Publication of CN107230397A
Application granted
Publication of CN107230397B

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/04 - Electrically-operated educational appliances with audible presentation of the material to be studied
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 - Clustering; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings

Abstract

The invention discloses a parent-child preschool education audio generation and processing method comprising the following steps: recommending story text for a user, or letting the user provide the story text; guiding parents to read the story text aloud in segments and recording it as audio clips; and editing the audio clips segment by segment to generate an audible audio file. The invention makes it convenient for parents to select stories, record and modify audio, and perform post-editing, so that even parents who are busy at work can maintain parent-child interaction with their children.

Description

Parent-child audio generation and processing method and device for preschool education
Technical Field
The present invention relates to the field of preschool education, and more particularly to a parent-child preschool education audio generation and processing method and apparatus.
Background
Because parents are busy at work and lack time to accompany their children, they choose non-contact ways to interact with their children at home, such as recording themselves telling a story for the child or providing preschool education. However, preschool education software in the prior art does not provide selectable stories, so parents must spend time choosing them; it does not provide a function for quickly locating and matching the recorded text content to the actual position on the audio track, which makes recording inconvenient and hard to modify; and it lacks a one-key beautifying function for recordings of specific text, which makes post-editing inconvenient.
No effective solution has yet been proposed for these problems of prior-art preschool education software: it provides no selectable stories, it is inconvenient for recording and modification, and it is inconvenient for post-editing.
Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide a parent-child preschool education audio generation and processing method and apparatus that can generate and process parent-child preschool education audio for different users or different types of users, make it convenient for parents to select stories, record and modify audio, and perform post-editing, and allow parents who are busy at work to maintain parent-child interaction.
Based on the above object, an aspect of the embodiments of the present invention provides a method for generating and processing parent-child preschool education audio, which is applied to a terminal, and includes the following steps:
recommending story text for a user or providing the story text by the user;
guiding parents to read the story text aloud in segments and recording it as audio clips;
and editing the audio clips segment by segment to generate an audible audio file.
In some implementations, recommending the story text for the user includes:
collecting user information from a user;
analyzing the user information to generate a label classification of the user;
and searching for story text with that label classification and recommending it to the user.
In some embodiments, collecting user information from the user means testing the user at the terminal to collect the information; searching for the story text with the label classification means obtaining a story text set in advance, generating a label classification for each story text in the set, and determining which story texts to retrieve according to the correspondence between the user's label classification and the label classifications of the story texts.
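As a purely illustrative sketch of this tag-based recommendation step (the data model, tag names, and function names below are assumptions, not part of the embodiments), the matching could look like the following:

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    text: str
    tags: set = field(default_factory=set)  # label classification of the story text

def classify_user(answers: dict) -> set:
    """Derive the user's label classification from a simple question-and-answer test."""
    tags = set()
    age = answers.get("child_age", 0)
    tags.add("age_0_3" if age <= 3 else "age_4_6")
    if answers.get("child_gender"):
        tags.add("gender_" + answers["child_gender"])
    return tags

def recommend_stories(user_tags: set, story_set: list) -> list:
    """Rank pre-labelled story texts by tag overlap with the user's label classification."""
    scored = [(len(user_tags & story.tags), story) for story in story_set]
    return [story for score, story in sorted(scored, key=lambda item: -item[0]) if score > 0]

stories = [
    Story("The Little Bear", "...", {"age_0_3", "animals"}),
    Story("Space Adventure", "...", {"age_4_6", "science", "gender_boy"}),
]
for story in recommend_stories(classify_user({"child_age": 5, "child_gender": "boy"}), stories):
    print("recommended:", story.title)
```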
In some implementations, directing the parent to recite the story text in segments and record as audio clips includes:
segmenting the story text;
guiding parents to read the story text aloud, manually adding a clip identifier after each segment is read;
and recording the time point at which each clip identifier is added into the recorded audio clip.
In some embodiments, segmenting the story text means segmenting it in a predetermined manner; guiding parents to manually add a clip identifier after each segment is read means adding the clip identifier in a specified manner.
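As a rough illustration of how the time point of each manually added clip identifier could be kept with the recording, consider the sketch below. The class and method names are assumptions made for this example only; the embodiments do not prescribe a particular API.

```python
import time

class SegmentedRecording:
    """Keeps the time offsets of clip identifiers added while a parent records."""

    def __init__(self):
        self.start = time.monotonic()
        self.clip_marks = []  # time points (seconds from start) of clip identifiers

    def add_clip_identifier(self):
        # Called when the parent finishes reading one segment, e.g. taps "next paragraph".
        offset = time.monotonic() - self.start
        self.clip_marks.append(offset)
        return offset

    def track_segments(self, total_duration):
        # Each pair of consecutive clip identifiers bounds one track segment.
        bounds = [0.0] + self.clip_marks + [total_duration]
        return list(zip(bounds[:-1], bounds[1:]))
```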
In some embodiments, each segment of the story text corresponds to a track segment between two of the clip identifiers; when a parent makes a recording error, the parent is guided to delete the track segment corresponding to the erroneous story text and record it again; when the parent's recording is interrupted, the parent is guided to resume recording from the story text of the segment following the last clip identifier.
In some embodiments, when a track segment is adjusted, the story text of the corresponding segment is selected; when a segment of story text is selected, the corresponding track segment is reverse-marked; and when a track segment and its corresponding story text do not match, the parent is guided to manually fine-tune the clip identifier, and the time point of the adjusted clip identifier is recorded.
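A minimal sketch of this two-way correspondence and of the manual fine-tuning is given below; the SegmentMap class and its methods are illustrative assumptions, not the claimed implementation.

```python
class SegmentMap:
    """Maps story-text segments to track segments via the recorded clip identifiers."""

    def __init__(self, text_segments, clip_marks, total_duration):
        self.text_segments = text_segments
        self.clip_marks = list(clip_marks)   # recorded time points of the clip identifiers
        self.total_duration = total_duration

    def track_segments(self):
        bounds = [0.0] + self.clip_marks + [self.total_duration]
        return list(zip(bounds[:-1], bounds[1:]))

    def text_for_track(self, index):
        # Adjusting a track segment selects the story text of the corresponding segment.
        return self.text_segments[index]

    def track_for_text(self, index):
        # Selecting a story-text segment reverse-marks the corresponding track segment.
        return self.track_segments()[index]

    def fine_tune_mark(self, mark_index, delta_seconds):
        # Manually nudge one clip identifier and record its adjusted time point.
        self.clip_marks[mark_index] += delta_seconds
        return self.clip_marks[mark_index]
```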
In some implementations, the piecewise editing the audio clip includes:
adjusting the volume of each track segment separately;
adding background music to one or more of the track segments;
adding sound effects to one or more of the track segments;
reducing the noise of each of the track segments;
wherein the editing is manual editing by the user or automatic editing by the terminal.
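The editing operations listed above could look roughly like the following sketch, which works on plain lists of audio samples. In practice a terminal would use an audio/DSP library; the function names, gains, and the simple noise gate are assumptions made only for illustration.

```python
def adjust_volume(segment, gain):
    """Scale every sample of one track segment separately."""
    return [sample * gain for sample in segment]

def mix_background_music(segment, music, music_gain=0.2):
    """Lay attenuated background music under a track segment."""
    return [sample + music_gain * m for sample, m in zip(segment, music)]

def reduce_noise(segment, threshold=0.02):
    """Very crude noise gate standing in for real denoising."""
    return [sample if abs(sample) > threshold else 0.0 for sample in segment]

def auto_edit(segments, music):
    """'Automatic editing by the terminal': apply one fixed chain to every segment."""
    return [mix_background_music(reduce_noise(adjust_volume(seg, 1.2)), music)
            for seg in segments]
```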
According to another aspect of the embodiments of the invention, a parent-child preschool education audio generation and processing apparatus is also provided, which uses the above method.
In another aspect of the embodiments of the present invention, there is also provided a computer device, including a memory, at least one processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to perform the method described above.
The invention has the following beneficial technical effects: the parent-child preschool education audio generation and processing method and apparatus provided by the embodiments of the invention recommend story text, guide parents to read the story text in segments and record it as audio clips, and edit the clips segment by segment to generate an audible audio file. This makes it convenient for parents to select stories, record and modify audio, and perform post-editing, and allows parents who are busy at work to maintain parent-child interaction.
Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below obviously show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a first embodiment of a parent-child preschool education audio generation and processing method provided by the present invention;
FIG. 2 is a schematic diagram of the hardware configuration of an embodiment of a computer device for executing the parent-child preschool education audio generation and processing method provided by the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share the same name but are not identical. "First" and "second" are used only for convenience of description and should not be construed as limiting the embodiments of the present invention; this is not repeated in the following embodiments.
In view of the above-mentioned objects, a first aspect of the embodiments of the present invention proposes a first embodiment of a parent-child preschool education audio generation and processing method capable of performing parent-child preschool education audio generation and processing for different users or different types of users. Fig. 1 is a flowchart illustrating a first embodiment of a parent-child preschool education audio generating and processing method according to the present invention.
The parent-child preschool education audio generation and processing method is optionally applied to a terminal and comprises the following steps:
step S101, recommending story text for a user or providing story text by the user;
step S103, guiding parents to read story texts in a segmented mode and recording the story texts as audio clips;
step S105, editing the audio clips segment by segment to generate an audible audio file.
Optionally, the terminal recommends story text to the user, and also allows the user to provide story text that meets the user's own needs. In embodiments of the invention, accepting or providing the story text and editing the audio clips may be done by the parents or by other users, such as a caregiver. The story text, however, must be read aloud and recorded by the parent personally. If someone else reads and records it instead, the child will not hear the parent's voice and the effect of parent-child interaction is lost; moreover, if others were allowed to read in the parent's place, they could simply read to the child directly without needing the recording scheme of the embodiments of the invention.
In some alternative embodiments, recommending story text for the user includes:
collecting user information from a user;
analyzing the user information to generate a label classification of the user;
and finding the story text with that label classification and recommending it to the user.
Optionally, the end user (e.g., the parent or another user) provides the user information used to recommend stories. A label classification of the user is generated through a simple question-and-answer test, and stories related to that label classification are recommended. This makes the recommended stories more appropriate and better targeted.
In some alternative embodiments, collecting user information from the user means testing the user at the terminal to collect the information; finding story text with the label classification means obtaining a story text set in advance, generating a label classification for each story text in the set, and determining which story texts to retrieve according to the correspondence between the user's label classification and the label classifications of the story texts.
Optionally, the user information may include the child's age and gender, the parents' educational level, and so on; based on this information, the label classifications, and hence the corresponding story texts, can be located more accurately.
In some alternative embodiments, directing the parent to read the story text in segments and record as audio clips includes:
segmenting the story text;
guiding parents to read the story text aloud, manually adding a clip identifier after each segment is read;
and recording the time point at which each clip identifier is added into the recorded audio clip.
Optionally, segmentation makes it easier for parents to read and record the audio. Since parents may be busy at work and lack the time to record a large amount of content at once, segmentation lets them read and record during fragments of free time; it also makes playback and editing easier, because each recorded segment matches its text content.
In some alternative embodiments, segmenting the story text means segmenting it in a predetermined manner; guiding parents to manually add a clip identifier after each segment is read means adding the clip identifier in a specified manner.
Optionally, the story text may be segmented by natural paragraphs or sentences. For especially long sentences or for short plays, segmentation within a paragraph or sentence, or merging across paragraphs or sentences, may also be performed. After reading each paragraph, the parent may tap a "next paragraph" button on the screen, or add the clip identifier in any other way the terminal can detect.
In some alternative embodiments, each segment of story text corresponds to a track segment between two clip identifiers; when a parent makes a recording error, the parent is guided to delete the track segment corresponding to the erroneous story text and record it again; when the parent's recording is interrupted, the parent is guided to resume recording from the story text of the segment following the last clip identifier.
Optionally, setting the clip identifiers in this way allows parents to read and record during fragments of free time.
In some alternative embodiments, when a track segment is adjusted, the story text of the corresponding segment is selected; when a segment of story text is selected, the corresponding track segment is reverse-marked; and when a track segment and its corresponding story text do not match, the parent is guided to manually fine-tune the clip identifier, and the adjusted time point of the clip identifier is recorded.
Optionally, the user can switch between and select track segments and story texts according to their correspondence, which makes the method more convenient to use.
In some alternative embodiments, the piecewise editing of the audio clip comprises:
adjusting the volume of each track segment separately;
adding background music to one or more track segments;
adding sound effects to one or more track segments;
reducing the noise of each track segment;
wherein the editing is manual editing by the user or automatic editing by the terminal.
Optionally, an end user (for example, the parent or another user) who wants full control over the editing result may choose the manual editing mode and edit each track segment separately; an end user who lacks time or editing skill may choose the automatic editing mode, in which the terminal applies a fixed procedure to edit the audio clips.
It can be seen from the above embodiment that the parent-child preschool education audio generation and processing method provided by the embodiments of the present invention recommends story text, guides parents to read it in segments and record it as audio clips, and edits the clips segment by segment to generate an audible audio file. This makes it convenient for parents to select stories, record and modify audio, and perform post-editing, and allows parents who are busy at work to maintain parent-child interaction.
The embodiments of the invention also provide a second embodiment of a parent-child preschool education audio generation and processing method that can generate and process parent-child preschool education audio for different users or different types of users.
The parent-child preschool education audio generation and processing method is optionally applied to a terminal and comprises the following steps:
step S101, recommending story text for the user or providing the story text by the user.
After the user logs in to the terminal, the platform immediately creates a user data record in a background database, analyzes the user's characteristics by asking the user a series of questions (i.e., testing the user), and assigns the user a preliminary label classification so that subsequent services can be provided conveniently. The platform recommends suitable content according to the user's characteristics, for example recommending stories or messages matched to the age and gender of the user's child. The platform classifies, grades, and labels its content according to a defined standard, so that each story carries multi-dimensional attributes.
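For illustration only, creating the background user record and storing the preliminary label classification might look like the sketch below (SQLite is assumed here; the table layout, column names, and tag names are not part of the embodiment).

```python
import sqlite3

conn = sqlite3.connect("platform.db")
conn.execute("""CREATE TABLE IF NOT EXISTS users (
                    user_id TEXT PRIMARY KEY,
                    child_age INTEGER,
                    child_gender TEXT,
                    labels TEXT)""")

def register_user(user_id, answers):
    """Create the user's data record and store the preliminary label classification."""
    labels = ",".join(sorted({
        "age_0_3" if answers["child_age"] <= 3 else "age_4_6",
        "gender_" + answers["child_gender"],
    }))
    conn.execute("INSERT OR REPLACE INTO users VALUES (?, ?, ?, ?)",
                 (user_id, answers["child_age"], answers["child_gender"], labels))
    conn.commit()
    return labels

print(register_user("parent_01", {"child_age": 5, "child_gender": "boy"}))
```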
The user may also choose to skip this step and provide the story text directly.
Step S103, guiding parents to read the story text in segments and recording it as audio clips.
The user may choose to separate the document automatically, semi-automatically, or purely manually. In automatic separation, the program separates the text content according to built-in rules. In semi-automatic separation, the program assists the user in separating the content according to the user's own rules. In manual separation, the user separates the document content by hand.
One automatic partitioning method is to set a separation interval, which can range from one sentence up to the maximum number of sentences the document can hold (300 sentences in this embodiment); after partitioning, the document is presented in a chosen manner. Separation methods include, but are not limited to: separating at carriage returns or special characters, displaying by line (after the text is separated according to the separation interval, the separated units are shown on different lines), displaying by page (the separated units are shown on different pages), and scrolling lists.
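A simple sketch of this interval-based automatic partitioning is shown below; the sentence-boundary pattern and function name are assumptions, and a real implementation would follow the built-in rules mentioned above.

```python
import re

def partition_text(text, interval=1, max_interval=300):
    """Split story text into sentences, then group every `interval` sentences into one unit."""
    interval = max(1, min(interval, max_interval))
    sentences = [s.strip() for s in re.findall(r"[^.!?。！？]+[.!?。！？]*", text) if s.strip()]
    return [" ".join(sentences[i:i + interval])
            for i in range(0, len(sentences), interval)]

units = partition_text(
    "Once upon a time there was a bear. He lived in the woods. One day he met a rabbit.",
    interval=2)
for number, unit in enumerate(units, 1):
    print(number, unit)  # separated units could be shown on separate lines or pages
```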
When the user starts recording, the user is assisted in marking the file automatically, semi-automatically, or manually. Marking the content of a multimedia file typically means marking its time sequence: when the file is recorded and generated in chronological order, time points in the file can be marked in this way, which makes backtracking easy. When the file is not recorded or generated in chronological order, it can still be marked in a way that corresponds to its content.
After these steps are completed, a mapping of the file data is obtained. The mapping is generated according to a mapping rule and reflects the correspondence between the file separation marks and the file data, so the user can locate the marked file data and use the marks formed by file separation to backtrack to the corresponding positions in the file.
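As a purely illustrative sketch of such a mapping (the mapping rule itself is not fixed by the embodiment), each separated text unit can be associated with the time point of its mark so that the marked file data can be located again:

```python
def build_mark_mapping(text_units, mark_times):
    """Associate each separated text unit with the time point of its mark."""
    return {index: {"text": unit, "time": mark}
            for index, (unit, mark) in enumerate(zip(text_units, mark_times))}

def backtrack(mapping, index):
    """Return the playback position recorded for one marked text unit."""
    return mapping[index]["time"]

mapping = build_mark_mapping(["Paragraph 1", "Paragraph 2"], [0.0, 12.4])
print(backtrack(mapping, 1))  # e.g. seek the player to 12.4 s to replay paragraph 2
```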
Step S105, editing the audio clips segment by segment to generate an audible audio file.
The platform can analyze which sound effects suit the text. With one tap the user can "beautify" the recording: the program automatically configures recommended sound effects and background music, so the user quickly obtains a preliminarily edited audio file; the user can also use the editing function to adjust the position of any sound effect. Finally, the audio file is output and played for the child.
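The "one-key beautify" step could be sketched as below; the keyword table, default preset, and function name are invented here purely to illustrate automatic configuration of recommended sound effects and background music.

```python
RECOMMENDED = {
    "forest": {"effect": "birdsong", "music": "gentle_flute"},
    "night":  {"effect": "crickets", "music": "lullaby"},
}
DEFAULT = {"effect": None, "music": "soft_piano"}

def one_key_beautify(text_segments):
    """Automatically configure a recommended sound effect and background music per segment."""
    plan = []
    for segment in text_segments:
        choice = next((cfg for keyword, cfg in RECOMMENDED.items() if keyword in segment.lower()),
                      DEFAULT)
        plan.append({"text": segment, **choice})
    return plan  # the user can still open the editor and adjust each entry manually

for entry in one_key_beautify(["They walked into the forest.", "At night the stars came out."]):
    print(entry)
```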
It can be seen from the above embodiment that the parent-child preschool education audio generation and processing method provided by the embodiments of the present invention recommends story text, guides parents to read it in segments and record it as audio clips, and edits the clips segment by segment to generate an audible audio file. This makes it convenient for parents to select stories, record and modify audio, and perform post-editing, and allows parents who are busy at work to maintain parent-child interaction.
The parent-child preschool education audio generation and processing method provided by the embodiments of the invention helps the user mark time positions in a multimedia file while it is being produced and links them to the corresponding text or other prompting information. During playback, the user can quickly find the time position of a required audio or video segment and can conveniently browse, compare, replace, delete, modify, synthesize, and one-key beautify multimedia segments. This greatly reduces the time the user spends editing multimedia files and improves the user experience.
It should be noted in particular that the steps of the parent-child preschool education audio generation and processing method can be interleaved, replaced, added, and deleted with respect to one another; methods obtained by such reasonable permutations and combinations therefore also fall within the protection scope of the present invention, and the protection scope should not be limited to the described embodiments.
In view of the above object, a second aspect of the embodiments of the present invention proposes a first embodiment of a parent-child preschool education audio generation and processing apparatus capable of generating and processing parent-child preschool education audio for different users or different types of users. The parent-child preschool education audio generation and processing apparatus uses the parent-child preschool education audio generation and processing method described above.
The parent-child preschool education audio generation and processing apparatus provided by the embodiments of the invention recommends story text, guides parents to read the story text in segments and record it as audio clips, and edits the clips segment by segment to generate an audible audio file. This makes it convenient for parents to select stories, record and modify audio, and perform post-editing, and allows parents who are busy at work to maintain parent-child interaction.
It should be noted in particular that the above embodiment of the parent-child preschool education audio generation and processing apparatus uses the embodiment of the parent-child preschool education audio generation and processing method to describe the working process of each module, and those skilled in the art can readily apply these modules to other embodiments of the method. Since the steps in the method embodiment can be interleaved, replaced, added, and deleted, such reasonable permutations and combinations of the apparatus also fall within the protection scope of the present invention, and the protection scope should not be limited to the described embodiments.
In view of the above object, a third aspect of the embodiments of the present invention provides an embodiment of a computer device for executing the parent-child preschool education audio generation and processing method.
The computer device for executing the parent-child preschool education audio generation and processing method comprises a memory, at least one processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the program to execute any one of the methods.
Fig. 2 is a schematic diagram of a hardware structure of an embodiment of a computer device for executing the parent-child preschool education audio generating and processing method according to the present invention.
Taking the computer device shown in fig. 2 as an example, the computer device includes a processor 201 and a memory 202, and may further include: an input device 203 and an output device 204.
The processor 201, the memory 202, the input device 203 and the output device 204 may be connected by a bus or other means, and fig. 2 illustrates the connection by a bus as an example.
The memory 202, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the parent-child preschool education audio generation and processing method in the embodiments of the present application. The processor 201 executes various functional applications and data processing of the server by running the nonvolatile software programs, instructions and modules stored in the memory 202, that is, the parent-child preschool education audio generation and processing method of the above method embodiment is realized.
The memory 202 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of the parent-child preschool education audio generating and processing apparatus, and the like. Further, the memory 202 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 202 may optionally include memory located remotely from processor 201, which may be connected to local modules via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 203 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the parent-child preschool education audio generation and processing device. The output device 204 may include a display device such as a display screen.
The program instructions/modules corresponding to the one or more parent-child preschool education audio generation and processing methods are stored in the memory 202, and when executed by the processor 201, the parent-child preschool education audio generation and processing methods in any of the above-described method embodiments are performed.
Any embodiment of the computer device executing the parent-child preschool education audio generation and processing method can achieve the same or similar effects as any corresponding method embodiment.
Finally, it should be noted that, as will be understood by those skilled in the art, all or part of the processes of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like. Embodiments of the computer program may achieve the same or similar effects as any of the preceding method embodiments to which it corresponds.
In addition, the apparatuses, devices and the like disclosed in the embodiments of the present invention may be various electronic terminal devices, such as a mobile phone, a Personal Digital Assistant (PDA), a tablet computer (PAD), a smart television and the like, or may be a large terminal device, such as a server and the like, and therefore the scope of protection disclosed in the embodiments of the present invention should not be limited to a specific type of apparatus, device. The client disclosed in the embodiment of the present invention may be applied to any one of the above electronic terminal devices in the form of electronic hardware, computer software, or a combination of both.
Furthermore, the method disclosed according to an embodiment of the present invention may also be implemented as a computer program executed by a CPU, and the computer program may be stored in a computer-readable storage medium. The computer program, when executed by the CPU, performs the above-described functions defined in the method disclosed in the embodiments of the present invention.
Further, the above method steps and system elements may also be implemented using a controller and a computer readable storage medium for storing a computer program for causing the controller to implement the functions of the above steps or elements.
Further, it should be appreciated that the computer-readable storage media (e.g., memory) described herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example, and not limitation, nonvolatile memory can include Read Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which can act as external cache memory. By way of example and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to comprise, without being limited to, these and other suitable types of memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions described herein: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a," "an," "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, of embodiments of the invention is limited to these examples; within the idea of an embodiment of the invention, also technical features in the above embodiment or in different embodiments may be combined and there are many other variations of the different aspects of an embodiment of the invention as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (6)

1. A parent-child preschool education audio generation and processing method is applied to a terminal and comprises the following steps:
recommending story text for a user, or providing the story text by a user, the recommending story text for a user comprising: collecting user information from a user, analyzing the user information to generate a label classification of the user, searching the story text with the label classification and recommending the story text to the user, wherein the collecting of the user information from the user refers to testing the user at a terminal to collect the user information;
guiding parents to read the story text in segments and record it as audio clips, which comprises: segmenting the story text, guiding parents to read the story text aloud and manually add a clip identifier after each segment is read, and recording the time point at which the clip identifier is added into the recorded audio clip;
editing the audio clips in segments to generate an audible audio file, wherein:
each segment of the story text corresponds to a track segment between two clip identifiers; when a parent makes a recording error, guiding the parent to delete the track segment corresponding to the erroneous story text and record it again; when the parent's recording is interrupted, guiding the parent to resume recording from the story text of the segment following the last clip identifier;
when a track segment is adjusted, selecting the story text of the corresponding segment; when a segment of the story text is selected, reverse-marking the corresponding track segment; and when a track segment and its corresponding story text do not match, guiding the parent to manually fine-tune the clip identifier and recording the time point of the adjusted clip identifier.
2. The method of claim 1, wherein searching for the story text having the label classification means obtaining a set of story texts in advance, generating a label classification for each story text in the set, and determining which story texts to retrieve according to the correspondence between the label classification of the user and the label classifications of the story texts.
3. The method of claim 1, wherein segmenting the story text refers to segmenting the story text in a predetermined manner; directing parents to manually add clip identifications after each reading of a segment refers to manually adding clip identifications in a specified manner.
4. The method of claim 1, wherein the editing the audio clip in segments comprises:
adjusting the volume of each track segment separately;
adding background music to one or more of the track segments;
adding sound effects to one or more of the track segments;
reducing the noise of each of the track segments;
and the editing is manual editing by a user or automatic editing by a terminal.
5. Parent-child preschool education audio generation and processing device, characterized in that the method of any one of claims 1-4 is used.
6. A computer device comprising a memory, at least one processor and a computer program stored on the memory and executable on the processor, characterized in that the processor performs the method according to any of claims 1-4 when executing the program.
CN201710620037.8A 2017-07-26 2017-07-26 Parent-child audio generation and processing method and device for preschool education Expired - Fee Related CN107230397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710620037.8A CN107230397B (en) 2017-07-26 2017-07-26 Parent-child audio generation and processing method and device for preschool education

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710620037.8A CN107230397B (en) 2017-07-26 2017-07-26 Parent-child audio generation and processing method and device for preschool education

Publications (2)

Publication Number Publication Date
CN107230397A CN107230397A (en) 2017-10-03
CN107230397B true CN107230397B (en) 2020-12-01

Family

ID=59957276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710620037.8A Expired - Fee Related CN107230397B (en) 2017-07-26 2017-07-26 Parent-child audio generation and processing method and device for preschool education

Country Status (1)

Country Link
CN (1) CN107230397B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106815217A (en) * 2015-11-30 2017-06-09 北京云莱坞文化传媒有限公司 Story recommends method and story recommendation apparatus
CN109448455A (en) * 2018-12-20 2019-03-08 广东小天才科技有限公司 A kind of real-time error recites method and private tutor's equipment
CN109819314B (en) * 2019-03-05 2022-07-12 广州酷狗计算机科技有限公司 Audio and video processing method and device, terminal and storage medium
CN112786026A (en) * 2019-12-31 2021-05-11 深圳市木愚科技有限公司 Parent-child story personalized audio generation system and method based on voice migration learning
CN111429880A (en) * 2020-03-04 2020-07-17 苏州驰声信息科技有限公司 Method, system, device and medium for cutting paragraph audio
CN112000308B (en) * 2020-09-10 2023-04-18 成都拟合未来科技有限公司 Double-track audio playing control method, system, terminal and medium
CN112541323A (en) * 2020-12-21 2021-03-23 广州优谷信息技术有限公司 Method and device for processing reading materials
CN113096463A (en) * 2021-03-31 2021-07-09 读书郎教育科技有限公司 Parent accompanying reading device and method
CN116049452A (en) * 2021-10-28 2023-05-02 北京字跳网络技术有限公司 Method, device, electronic equipment, medium and program product for generating multimedia data
CN114915836A (en) * 2022-05-06 2022-08-16 北京字节跳动网络技术有限公司 Method, apparatus, device and storage medium for editing audio

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2187819Y (en) * 1993-07-23 1995-01-18 殷惕生 Recording and playing machine for family children teaching
CN2331043Y (en) * 1998-05-21 1999-07-28 王其勇 Children's book with whole section being able to generate sound
WO2006009343A1 (en) * 2004-07-16 2006-01-26 Dae Geon Technology Co., Ltd. Auto sound book apparatus using cds cell
CN102930866A (en) * 2012-11-05 2013-02-13 广州市神骥营销策划有限公司 Evaluation method for student reading assignment for oral practice
CN105575411A (en) * 2014-11-07 2016-05-11 孤山电子科技(上海)有限公司 Audio processing system and method aiming at children
CN105656887A (en) * 2015-12-30 2016-06-08 百度在线网络技术(北京)有限公司 Artificial intelligence-based voiceprint authentication method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104347096A (en) * 2013-08-09 2015-02-11 上海证大喜马拉雅网络科技有限公司 Recording system and method integrating audio cutting, continuous recording and combination
CN106128467A (en) * 2016-06-06 2016-11-16 北京云知声信息技术有限公司 Method of speech processing and device


Also Published As

Publication number Publication date
CN107230397A (en) 2017-10-03

Similar Documents

Publication Publication Date Title
CN107230397B (en) Parent-child audio generation and processing method and device for preschool education
CN107918653B (en) Intelligent playing method and device based on preference feedback
CN109710841B (en) Comment recommendation method and device
US8168876B2 (en) Method of displaying music information in multimedia playback and related electronic device
Atton Popular music fanzines: Genre, aesthetics, and the “democratic conversation”
CN107517323B (en) Information sharing method and device and storage medium
CN103491450A (en) Setting method of playback fragment of media stream and terminal
KR100905744B1 (en) Method and system for providing conversation dictionary service based on user created dialog data
CN104867494A (en) Naming and classification method and system of sound recording files
US20110113357A1 (en) Manipulating results of a media archive search
KR20100005177A (en) Customized learning system, customized learning method, and learning device
JP2019091416A5 (en)
CN102623034B (en) Method and device for realizing mutual positioning and character fast recording of video data and text data
US10007843B1 (en) Personalized segmentation of media content
CN110619673B (en) Method for generating and playing sound chart, method, system and equipment for processing data
US20210383781A1 (en) Systems and methods for score and screenplay based audio and video editing
US20200302933A1 (en) Generation of audio stories from text-based media
KR20200033605A (en) Method for recommending video lecture clip based on artificial intelligence
KR100882857B1 (en) Method for reproducing contents by using discriminating code
CN114999464A (en) Voice data processing method and device
CN109118156B (en) Book information collaboration system and method
CN112685534A (en) Method and apparatus for generating context information of authored content during authoring process
CN103186583A (en) Mobile terminal-based information recording and retrieval method and device
CN106294634A (en) Information processing method based on user interface and information processor
Navas Regenerative culture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201201

Termination date: 20210726