CN107451185B - Recording method, reading system, computer readable storage medium and computer device


Info

Publication number: CN107451185B (application CN201710482595.2A)
Authority: CN (China)
Prior art keywords: user, reading, text, reading device, music
Legal status: Active (the legal status is an assumption, not a legal conclusion)
Other versions: CN107451185A
Other languages: Chinese (zh)
Inventor: 徐杉
Current and original assignee: Chongqing Yuanxixing Culture Media Co., Ltd.
Application filed by Chongqing Yuanxixing Culture Media Co., Ltd.; priority to CN201710482595.2A
Publication of CN107451185A (application); application granted; publication of CN107451185B (grant)

Classifications

    • G06F16/635 — Information retrieval of audio data; querying; filtering based on additional data, e.g. user or group profiles
    • G06F16/335 — Information retrieval of unstructured textual data; querying; filtering based on additional data, e.g. user or group profiles
    • G06F21/31 — User authentication
    • G06F21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G11C7/16 — Storage of analogue signals in digital stores using an arrangement comprising A/D converters, digital memories and D/A converters
    • H04L67/55 — Push-based network services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The embodiment of the invention provides a recording method, a reading system, a computer-readable storage medium and a computer device, which address two problems of prior-art reading kiosks: the recording process is cumbersome, and the kiosks cannot be widely deployed. The method is applied to a reading system that includes a reading device installed in a cabin structure and a cloud server connected to the reading device over a network, and includes the following steps: the cloud server obtains a login request from the reading device or from the user's terminal, and verifies, based on the login request, whether the user is a legitimate user; if so, the reading device obtains and displays the reading text chosen by the user; the reading device then captures the sound of the user reading that text and generates a reading audio file.

Description

Recording method, reading system, computer readable storage medium and computer device
Technical Field
The invention relates to the field of interactive entertainment, in particular to a recording method, a reading system, a computer readable storage medium and a computer device.
Background
As public attention returns to traditional culture and viewers tire of current variety shows, cultural programs such as CCTV's "Readers" have won a wide following and rekindled people's interest in reading aloud. People want to read passages from famous works, to express their innermost feelings in a simple way, to voice their life's dreams aloud, and to feel the power of the written word. Outside of such television programs, however, they lack convenient, professional equipment and venues for reading aloud.
At present, CCTV has set up reading kiosks in a small number of cities, each managed by dedicated staff. When someone who wants to read aloud enters a kiosk, a staff member operates the recording equipment to record the user's reading. The recorded audio is then screened by CCTV, and selected recordings are broadcast on the "Readers" program.
However, although CCTV's reading kiosks give people a way to read aloud, recording in them requires staff participation, making the whole process cumbersome. Moreover, constrained by cost and manpower, these kiosks cannot be deployed in large numbers.
Disclosure of Invention
The embodiment of the invention provides a recording method, a reading system, a computer-readable storage medium and a computer device, which address two problems of prior-art reading kiosks: the recording process is cumbersome, and the kiosks cannot be widely deployed.
In a first aspect, a recording method is provided. The method is applied to a reading system that includes a reading device disposed in a cabin structure and a cloud server connected to the reading device through a network, and the method includes:
the cloud server obtains a login request from the reading device or from the user's terminal, and verifies, based on the login request, whether the user is a legitimate user;
if so, the reading device obtains and displays the reading text chosen by the user;
the reading device captures the sound of the user reading the text and generates a reading audio file.
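The three method steps above can be sketched as a minimal control flow. All class, function and field names here are invented for illustration; the patent does not prescribe an implementation:

```python
class CloudServer:
    """Holds registered credentials and verifies login requests."""
    def __init__(self, credentials):
        self.credentials = credentials  # e.g. {account: password}

    def verify(self, login_request):
        """Return True iff the login request matches a registered account."""
        account = login_request.get("account")
        password = login_request.get("password")
        return self.credentials.get(account) == password


class ReadingDevice:
    """Displays the chosen text and records the user's reading."""
    def display_text(self, text):
        return f"[display] {text}"

    def record(self, reading_sound):
        # A real device would capture microphone input; here we just
        # wrap the sound into an "audio file" dict.
        return {"format": "wav", "content": reading_sound}


def recording_flow(server, device, login_request, chosen_text, reading_sound):
    """Login/verify -> display text -> record, mirroring the three steps."""
    if not server.verify(login_request):
        return None  # not a legitimate user: stop before recording
    device.display_text(chosen_text)
    return device.record(reading_sound)
```

An illegitimate login short-circuits before any text is displayed or sound recorded, matching the "if so" gate in the method.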
In one possible implementation, after the cloud server obtains the login request from the reading device or from the user's terminal and verifies whether the user is a legitimate user, the method further includes:
when the user is a legitimate user, the cloud server performs a first payment operation that deducts the fee corresponding to the login request from the user-designated account.
In one possible implementation, before the reading device obtains and displays the reading text chosen by the user, the method further includes:
the reading device obtains at least one attribute parameter of the user, namely biometric information of the user and/or an environmental parameter of the user's surroundings;
the reading device determines an attribute feature of the user based on the at least one attribute parameter, namely a physiological feature of the user and/or an environmental feature of the user's surroundings;
and the reading device pushes, according to a preset condition, a text set corresponding to the attribute feature, where the text set includes at least one text from which the user can choose the reading text.
In one possible implementation, when the attribute feature is a physiological feature of the user, the reading device pushes the text set corresponding to the attribute feature according to the preset condition as follows:
the reading device determines the crowd category to which the user belongs based on the physiological feature;
the reading device pushes to the user a text set corresponding to that crowd category, based on a preset correspondence between crowd categories and texts.
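The crowd-category push above amounts to two lookups. The categories, titles and age thresholds below are invented for illustration (the patent only requires some preset correspondence):

```python
# Hypothetical preset correspondence between crowd categories and text sets.
CATEGORY_TEXTS = {
    "child": ["The Little Prince (excerpt)", "Nursery rhymes"],
    "adult": ["Selected modern prose", "Classic poetry"],
    "senior": ["Tang poems", "Classical essays"],
}

def crowd_category(age):
    """Map an estimated age (one possible physiological feature) to a category."""
    if age < 14:
        return "child"
    if age < 60:
        return "adult"
    return "senior"

def push_text_set(age):
    """Return the text set corresponding to the user's crowd category."""
    return CATEGORY_TEXTS[crowd_category(age)]
```

In practice the physiological feature could come from a camera-based age estimate; here it is simply taken as an input number.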
In one possible implementation, when the attribute feature is a physiological feature of the user, the reading device pushes the text set corresponding to the attribute feature according to the preset condition as follows:
the reading device determines whether the user is a returning user based on the user's biometric features;
if so, the reading device determines the user's historical text set based on the record of texts the user has previously selected;
and the reading device pushes to the user a text set corresponding to that historical text set.
In one possible implementation, when the attribute feature is an environmental feature of the user's surroundings, the reading device pushes the text set corresponding to the attribute feature according to the preset condition as follows:
the reading device determines the environment category of the user's surroundings based on the environmental feature;
the reading device pushes to the user a text set corresponding to that environment category, based on a preset correspondence between environment categories and texts.
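The environment-based push can be sketched the same way, with a rough classifier in front. The sensor parameters, thresholds and categories are all assumptions; the patent does not specify which environmental parameters are used:

```python
# Hypothetical preset correspondence between environment categories and texts.
ENVIRONMENT_TEXTS = {
    "night": ["Quiet-night poetry"],
    "rainy": ["Rain-themed prose"],
    "sunny": ["Spring excursion essays"],
}

def environment_category(light_lux, humidity_pct):
    """A toy classifier from two environmental parameters to a category."""
    if light_lux < 50:
        return "night"       # dark surroundings
    if humidity_pct > 80:
        return "rainy"       # humid, likely raining
    return "sunny"

def push_environment_texts(light_lux, humidity_pct):
    """Return the text set for the environment the user is currently in."""
    return ENVIRONMENT_TEXTS[environment_category(light_lux, humidity_pct)]
```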
In one possible implementation, the reading device captures the reading sound emitted by the user and generates the reading audio file as follows:
the reading device obtains and plays the configuration music corresponding to the reading text;
the reading device captures both the reading sound emitted by the user and the music sound of the configuration music;
the reading device generates a reading audio file containing the reading sound and the configuration music sound.
In one possible implementation, before the reading device obtains and plays the configuration music corresponding to the reading text, the method further includes:
the reading device obtains at least one user parameter related to the user and/or at least one text parameter related to the reading text;
the reading device determines the configuration music corresponding to the reading text based on the at least one user parameter and/or the at least one text parameter.
In one possible implementation, the reading device determines the configuration music corresponding to the reading text based on the at least one user parameter and/or the at least one text parameter as follows:
the reading device determines a first music type based on the at least one user parameter and/or the at least one text parameter;
the reading device determines the configuration music corresponding to the reading text from at least one music object of the first music type.
In one possible implementation, the reading device determines the first music type based on the at least one user parameter and/or the at least one text parameter in one of three ways:
the reading device determines that the user is of a first user type based on the at least one user parameter, and determines the first music type corresponding to the first user type based on a first correspondence between user types and music types; or
the reading device determines that the reading text is of a first text type based on the at least one text parameter, and determines the first music type corresponding to the first text type based on a second correspondence between text types and music types; or
the reading device determines that the user is of a first user type based on the at least one user parameter and that the reading text is of a first text type based on the at least one text parameter, and determines the first music type corresponding to both based on a third correspondence among user types, text types and music types.
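The three correspondences can be modeled as three lookup tables, dispatched on whichever parameters are available. The table contents are invented examples, not taken from the patent:

```python
# Three illustrative correspondence tables, one per branch above.
USER_TO_MUSIC = {"child": "light", "adult": "classical"}
TEXT_TO_MUSIC = {"poetry": "guzheng", "prose": "piano"}
PAIR_TO_MUSIC = {("child", "poetry"): "light-guzheng", ("adult", "prose"): "piano"}

def first_music_type(user_type=None, text_type=None):
    """Pick the first music type from whichever correspondence applies."""
    if user_type and text_type:
        return PAIR_TO_MUSIC.get((user_type, text_type))  # third correspondence
    if user_type:
        return USER_TO_MUSIC.get(user_type)               # first correspondence
    if text_type:
        return TEXT_TO_MUSIC.get(text_type)               # second correspondence
    return None
```

Checking the pair table first mirrors the third branch, which uses both parameters when both are known.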
In one possible implementation, after the reading device captures the reading sound emitted by the user based on the reading text to generate the reading audio file, the method further includes:
the reading device obtains an acquisition request from the user for the reading audio file;
the cloud server performs, based on the acquisition request, a second payment operation deducting the fee corresponding to the acquisition request from the user-designated account;
and after the payment succeeds, the reading device transmits the reading audio file to a user-specified device.
In one possible implementation, after the reading device collects the reading sound emitted by the user based on the reading text to generate a reading audio file, the method further includes:
the reading device saves the reading audio file in a storage unit in the reading device; and/or
And the reading device transmits the reading audio file to the cloud server for storage.
In one possible implementation, after the reading device captures the reading sound emitted by the user based on the reading text to generate the reading audio file, the method further includes:
the reading device or the cloud server obtains at least one other reading audio file, recorded for the same reading text by at least one other user;
and the reading device or the cloud server evaluates the reading audio file against the at least one other reading audio file to obtain an evaluation result characterizing the user's reading quality.
In a second aspect, a reading system is provided, comprising a reading device disposed in a cabin structure and a cloud server connected to the reading device through a network, wherein:
the cloud server is configured to obtain a login request from the reading device or from the user's terminal, and to verify, based on the login request, whether the user is a legitimate user;
the reading device is configured to obtain and display the reading text chosen by the user when the cloud server verifies that the user is legitimate, and to capture the sound of the user reading the text to generate a reading audio file.
In one possible implementation, the cloud server is further configured to:
and when the user is a legal user, executing a first obtaining operation of obtaining money corresponding to the login request from the user designated account.
In one possible implementation, the reading apparatus is further configured to:
before obtaining and displaying the reading text determined by the user, obtaining at least one attribute parameter of the user; the at least one attribute parameter is specifically the biological characteristic information of the user and/or the environmental parameter of the user;
determining attribute characteristics of the user based on the at least one attribute parameter; the attribute features are specifically physiological features of the user and/or environmental features of the environment where the user is located;
and pushing a text set corresponding to the attribute characteristics according to a preset condition, wherein the text set comprises at least one text, and the text set is used for the user to determine the reading text.
In one possible implementation, the reading apparatus is configured to:
when the attribute feature is a physiological feature of the user, determining a crowd category to which the user belongs based on the physiological feature;
based on the preset corresponding relation between the crowd categories and the texts, pushing text sets corresponding to the crowd categories to which the users belong to the users.
In one possible implementation, the reading apparatus is configured to:
when the attribute feature is a physiological feature of the user, determining whether the user is a historical user based on the biological feature;
if so, determining a historical text set of the user based on the historical record of the text selected by the user;
and pushing a text set corresponding to the historical text set to the user.
In one possible implementation, the reading apparatus is configured to:
when the attribute feature is an environmental feature of the environment where the user is located, determining an environmental category of the environment where the user is located based on the environmental feature;
based on the preset corresponding relation between the environment category and the text, pushing a text set corresponding to the environment category where the user is located to the user.
In one possible implementation, the reading apparatus is configured to:
obtaining and playing configuration music corresponding to the reading text;
collecting reading sound emitted by a user based on the reading text and configuration music sound corresponding to the configuration music;
generating a speakable audio file including the speakable sound and the configuration music sound.
In one possible implementation, the reading apparatus is further configured to:
before configuration music corresponding to the reading text is obtained and played, obtaining at least one user parameter related to the user and/or at least one text parameter related to the reading text;
and determining the configuration music corresponding to the reading text based on the at least one user parameter and/or the at least one text parameter.
In one possible implementation, the reading apparatus is configured to:
determining a first music type based on the at least one user parameter and/or the at least one text parameter;
and determining the configuration music corresponding to the speakable text from at least one music object in the first music type.
In one possible implementation, the reading apparatus is configured to:
determining the type of the user as a first user type based on the at least one user parameter; determining a first music type corresponding to the first user type based on a first corresponding relation between the user type and the music type; or
Determining the speakable text to be of a first text type based on the at least one text parameter; determining a first music type corresponding to the first text type based on a second corresponding relation between the text type and the music type; or
Determining the type of the user as a first user type based on the at least one user parameter and the speakable text as a first text type based on the at least one text parameter; and determining a first music type corresponding to the first text type and the first user type based on a third corresponding relation among the user types, the text types and the music types.
In one possible implementation, the reading apparatus is further configured to:
after collecting the reading sound emitted by the user based on the reading text to generate a reading audio file, obtaining a obtaining request of the user for obtaining the reading audio file;
the cloud server is further configured to perform, based on the acquisition request, a second payment operation deducting the fee corresponding to the acquisition request from the user-designated account;
the reading device is further configured to transmit the reading audio file to a user-specified device after the payment succeeds.
In one possible implementation, the reading apparatus is further configured to:
after collecting the reading sound emitted by the user based on the reading text to generate a reading audio file, saving the reading audio file in a storage unit of the reading device; and/or
And transmitting the reading audio file to the cloud server for storage.
In one possible implementation manner, the reading device or the cloud server is further configured to:
after collecting the reading sound emitted by the user based on the reading text to generate a reading audio file, obtaining at least one other reading audio file of at least one other user corresponding to the reading text;
and evaluating the reading audio file based on the at least one other reading audio file to obtain an evaluation result for representing the reading quality of the user.
In a third aspect, a computer-readable storage medium is provided that stores instructions which, when loaded and executed by a processor, implement the recording method of the first aspect.
In a fourth aspect, a computer device is provided, comprising:
a processor; and
a storage device connected to the processor, storing instructions that are loaded and executed by the processor to implement the recording method of the first aspect.
In the embodiment of the invention, the reading system comprises the reading device and the cloud server: the reading device installed in the cabin structure interacts with the user, while the cloud server connected to it over the network provides service support. Through this interaction, the reading system automatically completes login, verification, sound capture and other operations, records the user reading the text, and generates a reading audio file. The entire recording process requires no staff participation and is convenient and easy to operate.
Furthermore, the reading system automatically verifies the user and proceeds with the subsequent steps only when the user is legitimate, preventing misuse by malicious users and making the whole system safer and more reliable.
Furthermore, the reading device obtains and displays the reading text chosen by the user, so the user can see the text in real time without carrying books or other reading material, which makes reading aloud more convenient. And because the displayed text is chosen by the user, each user can select what they want to read according to their own preferences.
Furthermore, the reading device is installed in a cabin structure that is easy to produce, disassemble and move, and is low in cost, so the reading system is easy to roll out widely.
Drawings
To illustrate the embodiments of the present invention or prior-art solutions more clearly, the drawings needed in their description are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram illustrating a position relationship of a reading apparatus according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a connection relationship of the reading system according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a recording method according to an embodiment of the present invention;
FIG. 4 is a block diagram of a reading system according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions are described below completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments derived by those skilled in the art without creative effort fall within the protection scope of the invention. The embodiments and their features may be combined with one another in the absence of conflict. Also, although a logical order is shown in the flow diagrams, in some cases the steps may be performed in an order different from that shown.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in this document generally indicates that the preceding and following related objects are in an "or" relationship unless otherwise specified.
For a better understanding, the technical solution is described in detail below with reference to the drawings and specific embodiments.
In the embodiment of the invention, the reading system comprises a reading device (shown in fig. 1) installed in a cabin structure and a cloud server connected to the reading device over a network. The cloud server may be a personal computer, a blade server or another device, or a virtual host provided by a cloud-computing service provider. The reading device may include an interconnected processor, microphone, display and so on; for example, it may consist of a separately installed host computer, professional microphone and display connected to one another, for a better user experience, or, to save space and cost, it may be an integrated device such as a tablet computer.
Referring to fig. 2, fig. 2 is a schematic connection diagram of the reading system according to an embodiment of the present invention. In a specific implementation, the cloud server and the reading device may be connected through a wired or wireless network; for example, the reading device may include a wireless communication module and connect to the internet via Wi-Fi (Wireless Fidelity), and thence to the cloud server, enabling data interaction between the two.
In the embodiment of the invention, one cloud server may correspond to a single reading device, in which case it serves only that device; alternatively, one cloud server may correspond to multiple reading devices and provide services to all of them simultaneously.
In the embodiment of the present invention, the cloud server and the reading device each have corresponding functions and actions; in a specific implementation, however, those functions may be reallocated as needed. That is, the reading device may execute steps and instructions originally assigned to the cloud server, and the cloud server may execute steps and instructions originally assigned to the reading device.
In a specific implementation, the cabin structure may be a closed or semi-closed structure made of glass, metal or similar materials, and may look, for example, like an enclosed telephone booth.
Referring to fig. 3, a recording method according to an embodiment of the invention is described as follows.
Step 101: the cloud server obtains a login request from the reading device or from the user's terminal, and verifies whether the user is a legitimate user based on the login request.
The login request may include login information provided by the user for verifying whether the user is legitimate. It may further include identification information identifying the reading device, for example the device's number, from which the cloud server can determine which reading device the user intends to use.
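The two kinds of information a login request may carry, credentials plus a device number, can be modeled as a small record. The field names and the `route_request` helper are assumptions for illustration:

```python
def make_login_request(account, password, device_id):
    """A login request carrying both credentials and the kiosk's device number."""
    return {"account": account, "password": password, "device_id": device_id}

def route_request(request, devices):
    """Resolve which reading device the user wants from the identification info.

    `devices` maps device numbers to device handles; unknown ids yield None.
    """
    return devices.get(request["device_id"])
```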
In the embodiment of the present invention, the system may be configured, according to actual requirements, so that a user must log in to a reading account before using the reading system to read aloud. In that case the login information in the login request is verification information for the reading account, such as account-and-password information, voiceprint information, fingerprint information, retina information, or facial-feature information usable for logging in.
Alternatively, the system may be configured so that the user can read aloud without logging in to a reading account; in that case the user can use the reading system without registering a dedicated account.
In a specific implementation, when a user needs to use the reading system, a login request may be sent to the cloud server in several ways, two of which are described below:
in the first mode, the user may send a login request through the reading device, and correspondingly, the cloud server may obtain the login request of the user from the reading device.
In a specific implementation, the reading device may include an input device: a keyboard, mouse or touch screen through which the user enters login information, such as an account and password; a camera that captures an image of the user to obtain retina or facial-feature information; a microphone that picks up the user's voice to obtain voiceprint information; or a fingerprint reader that obtains the user's fingerprint information. The reading device may also include a QR-code/barcode scanner that scans a code presented by the user to obtain the user's login information.
In the second mode, the user may send a login request through a user terminal, and correspondingly, the cloud server may obtain the login request of the user from the user terminal.
The user terminal of the user may be a mobile phone, a tablet computer, a wearable device, or the like, and may be in network connection with the cloud server to transmit data.
In a specific implementation process, the user may send a login request to the cloud server by scanning, with the user terminal, a two-dimensional code corresponding to the reading device; specifically, the two-dimensional code may be one displayed on the display of the reading device, or one posted on the reading booth, or the like. The user may also open a web page associated with the cloud server through the user terminal and send a login request to the cloud server from that page, and so on. After the user sends a login request to the cloud server through the user terminal, the cloud server can obtain the login request of the user.
After the cloud server obtains the login request from the reading device or the user terminal of the user, whether the user is a legal user can be verified based on the obtained login request.
In a specific implementation process, take the case where a user must log in a reading account to perform reading with the reading system. In this case, a login verification information base may be stored in the cloud server, which contains the login verification information entered when users registered their reading accounts. After the cloud server obtains the login request and extracts the user's login information from it, the login information can be checked against the login verification information base: when the base contains login verification information matching the login information, the user is determined to be a legal user; when it does not, the user is determined to be an illegal user.
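The verification step described above amounts to a lookup against stored credentials. The following sketch illustrates it; the account names, credential fields, and exact-match comparison are all assumptions for illustration and not part of the patented method:

```python
# Hypothetical sketch of the login check: the cloud server keeps a
# verification base of credentials entered at registration, and declares
# the user legal only when every submitted credential matches.
login_verification_base = {
    "alice": {"password": "s3cret", "voiceprint": "vp-001"},
    "bob": {"password": "hunter2", "voiceprint": "vp-002"},
}

def is_legal_user(account, login_info):
    """Return True when every submitted credential matches the stored one."""
    stored = login_verification_base.get(account)
    if stored is None:
        return False  # no matching verification info: illegal user
    return all(stored.get(k) == v for k, v in login_info.items())
```

A real system would compare hashed secrets or biometric templates with fuzzy matching rather than raw strings.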
In addition, when verifying whether the user is a legal user based on the login request, the identification information in the login request may also be taken into account. Specifically, after the cloud server obtains the login request and extracts the identification information identifying the reading device, it may determine whether the reading device identified by that information belongs to the reading devices served by the cloud server. When it does not, the user can be determined to be an illegal user; for example, an illegal user may forge a reading device to obtain the cloud server's service, or forge identification information to log in.
In one possible implementation, after the cloud server obtains the login request from the reading device or the user terminal and verifies whether the user is a legal user based on the login request, the cloud server may further perform a first obtaining operation when the user is determined to be a legal user, that is, obtaining the money corresponding to the login request from a user-specified account.
The user-specified account may be a payment account associated with the reading account of the user, or a payment account specified in a login request sent by the user, or the like. When the user-specified account is a payment account specified in the login request sent by the user, the login request may further include an account number and a password of the user-specified payment account.
In a specific implementation process, when obtaining the money corresponding to the login request from the user-specified account, the cloud server may obtain a fixed amount, or an amount corresponding to the information content of the login request. Specifically, the login request may include the duration for which the user expects to use the reading system, and the cloud server may obtain an amount corresponding to that duration from the specified account; the login request may also include the length of the text the user expects to read with the reading system, and the cloud server may obtain an amount corresponding to that length; the cloud server may also determine an amount corresponding to the grade of the reading device according to the identification information in the login request, and so on.
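The alternative charging rules above can be sketched as a small decision function. The rates, field names, and fallback amount are invented for the sketch; the patent does not specify any pricing:

```python
# Illustrative fee rules for the "first obtaining operation": charge a fixed
# amount, or an amount derived from the expected duration, the text length,
# or the grade of the reading device named in the login request.
RATE_PER_MINUTE = 0.5        # currency units per minute of expected use
RATE_PER_1000_CHARS = 2.0    # per thousand characters of reading text
GRADE_FEES = {"standard": 5.0, "deluxe": 10.0}

def fee_for_request(request):
    if "duration_min" in request:
        return request["duration_min"] * RATE_PER_MINUTE
    if "text_chars" in request:
        return request["text_chars"] / 1000 * RATE_PER_1000_CHARS
    if "device_grade" in request:
        return GRADE_FEES.get(request["device_grade"], GRADE_FEES["standard"])
    return 3.0  # fixed fallback amount when the request carries no details
```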
In the embodiment of the present invention, after the cloud server performs the first obtaining operation, it may further determine whether the money was successfully obtained, so as to decide whether to continue performing subsequent steps for the login request. For example, the reading system may continue with the subsequent recording steps upon determining that the money was successfully obtained, and stop performing them upon determining that it was not.
Step 102: when the verification result is yes, the reading device obtains and displays the reading text determined by the user.
In the embodiment of the invention, when the cloud server verifies that the user is a legal user based on the login request, the reading device can obtain the reading text and display the reading text through the display.
In a specific implementation process, the reading text obtained by the reading device may be stored in a local storage device of the reading device, from which the reading device can read it directly when it needs to be displayed; the reading text may also be stored in the cloud server or other cloud storage equipment, from which the reading device can fetch it over the network when it needs to be displayed; the reading text may also come from the user terminal; or the reading text may be entered into the reading device by the user via the input device of the reading device.
The method of storing the reading text in the local storage device can avoid the situation where the reading text cannot be obtained due to network congestion, reduce the burden on the cloud server, and improve the user's reading experience; the method of storing the reading text in the cloud server or other cloud storage devices can save the storage capacity of the reading device, and the reading device can also obtain richer reading texts through the network; the method of obtaining the reading text through the user terminal, or having the user input it, can meet the user's demand for more personalized reading texts, for example, the user can import self-created texts into the reading device through storage media such as a USB flash drive.
Step 103: the reading device collects reading sound emitted by the user based on the reading text to generate a reading audio file.
After the speakable device obtains and displays the speakable text at step 102, the user may begin speaking. In the process of reading aloud by the user, the aloud reading device can collect aloud reading sound emitted by the user based on the aloud reading text to generate an aloud reading audio file, and the aloud reading sound is recorded.
In a specific implementation process, the reading device may start collecting reading sound at the moment the reading text starts to be displayed; it may start collecting after a predetermined time (e.g., 5 seconds) has elapsed from that moment; it may start collecting as soon as the user's voice is detected; or it may start collecting after obtaining a start-collection instruction from the user, for example, the reading device may display through the display a button for starting the collection of reading sound, and begin collecting after the user clicks the button.
In addition, the reading device may suspend collecting reading sound after obtaining the user's suspend-collection instruction; for example, the reading device may display through the display a button for suspending the collection of reading sound, and suspend collecting after the user clicks the button.
In the embodiment of the invention, when the reading device generates the reading audio file, it can automatically preprocess the reading sound, for example applying volume balancing, noise reduction, and similar processing, so that the preprocessed reading sound is clearer and more pleasant.
In a possible embodiment, before the reading device obtains and displays the user-determined reading text, the reading device may further obtain at least one attribute parameter of the user, where the at least one attribute parameter may specifically be biometric information of the user and/or an environmental parameter of the environment where the user is located. The reading device may determine an attribute feature of the user based on the at least one attribute parameter, where the attribute feature is specifically a physiological feature of the user and/or an environmental feature of the environment where the user is located. Further, the reading device may push a text set corresponding to the attribute feature according to a preset condition, where the text set includes at least one text and is used for the user to determine the reading text to be read.
In a specific implementation process, the reading device may obtain the attribute information of the user recorded in the user's reading account from the cloud server, or may obtain the attribute information by instant acquisition. Several possible cases are illustrated below:
in one case, the reading device may obtain from the cloud server the physiological characteristic information registered by the user, such as age, sex, personality, hobbies, and the like.
In another scenario, the speakable device may obtain weather information from the network where the speakable device is located.
In another case, the reading device may include an image acquisition unit, and the reading device may obtain at least one attribute parameter of the user by capturing images. For example, the reading device may capture an image or video of the user that contains physiological characteristic information such as the user's face, stature, clothing, and hairstyle, and may also capture an environmental image of the user's surroundings that contains environmental parameters such as the sky, people, and buildings around the reading booth.
In another case, the reading device may also acquire physiological characteristic information such as the voice of the user speaking through the microphone.
In another case, the reading device may further include a wearable device, such as a smart bracelet, communicatively connected to the processor of the reading device, through which the reading device may acquire the user's physiological characteristic information such as heartbeat, blood pressure, and body temperature.
In another case, the reading device may further include an environment monitoring device, such as a wireless intelligent thermometer, communicatively connected to the processor of the reading device, through which the reading device may acquire environmental parameters of the user's environment such as temperature, humidity, wind, PM2.5 concentration, and noise volume.
After the reading device obtains the at least one attribute parameter of the user, the attribute feature of the user can be determined based on the at least one attribute parameter, and according to the type of the attribute parameter, the determined attribute feature can be a physiological feature of the user or an environmental feature of an environment where the user is located.
For example, physiological features such as the user's gender, age group, and mood can be determined from the captured facial image of the user, and environmental features such as the weather conditions at the location of the reading booth can be determined from environmental parameters such as temperature and wind acquired by the environment monitoring equipment, and so on.
After the attribute characteristics of the user are determined by the reading device, the reading device may push a text set corresponding to the attribute characteristics according to a preset condition, where the text set includes at least one text, and the text set is used for the user to determine the reading text for reading.
The preset condition may include a matching correspondence between attribute features and text sets: after the attribute feature is determined, the reading device can look up the text set corresponding to the attribute feature according to the matching correspondence and push the matched text set to the user. The preset condition may also include a processing step for determining the corresponding text set from the attribute feature. Moreover, the preset condition may be stored in the reading device when the reading device is installed; it may be stored in the cloud server; it may be obtained by the reading device from the cloud server when the text set needs to be determined; or it may be generated automatically after the reading device or the cloud server collects statistics on the attribute features and reading texts of previous users and analyzes the data.
In a specific implementation process, the reading device may push the text set corresponding to the attribute feature according to a preset condition in a number of ways, two of which are exemplified as follows:
in the first mode, after the reading device determines the attribute characteristics, the reading device itself determines the text set corresponding to the attribute characteristics according to a preset condition, such as a matching correspondence between the attribute characteristics and the text set, and then pushes the text set to the user. In this way, the reading device can complete the pushing of the text collection without connecting to the network.
In the second mode, after the attribute characteristics are determined by the reading device, the reading device sends the attribute characteristics to the cloud server, the cloud server determines a corresponding text set according to the attribute characteristics, and then sends the determined text set (such as sending text set numbers, text contents, text numbers, text introductions and the like in the text set) to the reading device, and after the reading device receives the text set, the text set determined by the cloud server can be pushed to the user. In this way, because the data of the cloud server is comprehensive, a corresponding accurate text set can be determined according to the attribute characteristics.
In the embodiment of the present invention, when the reading device pushes the text set corresponding to the attribute feature according to a preset condition, the two ways above may be used alone or in combination. For example, the reading device may preferentially adopt the second way, and fall back to the first way when the network is not smooth or the text set fed back by the cloud server is not received within a predetermined time.
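The cloud-first-with-local-fallback strategy can be sketched as below. The local matching table, the feature keys, and the `query_cloud` callable are stand-ins invented for the sketch:

```python
# Sketch of combining the two push modes: try the cloud server first, and
# fall back to the device's local attribute-to-text-set table on timeout
# or connection failure.
LOCAL_MATCH_TABLE = {
    "teenage male": ["Classic Prose 50"],
    "rainy day": ["Rain Lane"],
}

def push_text_set(attribute_feature, query_cloud):
    """query_cloud: callable returning a text set, or raising on failure."""
    try:
        result = query_cloud(attribute_feature)
        if result:
            return result  # second way: cloud-determined text set
    except (TimeoutError, ConnectionError):
        pass  # network not smooth: fall back to the first way
    return LOCAL_MATCH_TABLE.get(attribute_feature, [])
```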
In a specific implementation process, the manner of pushing the text collection by the reading device may be to present information of the titles, authors, brief descriptions, etc. of the texts in the text collection to the user through the display, or to play information of the titles, authors, brief descriptions, etc. of the texts in the text collection to the user through the speaker. In the embodiment of the present invention, there is no limitation on how the reading device pushes the text set, and what information of the text in the text set is pushed by the reading device.
In a possible implementation manner, when the attribute feature is a physiological feature of the user, the reading device pushes a text set corresponding to the attribute feature according to a preset condition, and the method may be implemented by the following steps:
the method comprises the following steps: the reading device determines the crowd category to which the user belongs based on the physiological characteristics.
The reading device can determine, based on the determined physiological features and the preset condition, that the user belongs to one or more crowd categories. For example, if the physiological features of the user are "age: 15 years old, sex: male", the reading device can perform matching according to preset matching conditions and determine that the user belongs to the "teenage male" category; if the physiological feature of the user is "facial expression: sad", the reading device can perform matching according to preset matching conditions and determine that the user belongs to the "low mood" category.
Step two: and the reading device pushes a text set corresponding to the crowd category to which the user belongs to the user based on the preset corresponding relation between the crowd category and the text.
After the crowd category to which the user belongs is determined, the reading device can determine texts corresponding to the crowd category to which the user belongs based on the preset corresponding relation between the crowd category and the texts, combine the corresponding texts into a text set, and further push the combined text set to the user; of course, the reading device may also directly determine the text set corresponding to the crowd category to which the user belongs based on a preset correspondence between the crowd category and the text, and then push the determined text set to the user.
For example, when the crowd category to which the user belongs is "teenage male", the reading device may determine, according to the correspondence between crowd categories and texts, that the corresponding text set is "50 Classic Prose Pieces Teenagers Must Read"; when the crowd category is "low mood", the corresponding text set may be determined as "Inspirational Article Selection"; when the crowd category is "pupils", the corresponding text set may be determined as "Three Hundred Poems", and so on.
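Steps one and two above amount to two table lookups in sequence. The rules and set titles below mirror the examples in the text but are otherwise illustrative assumptions:

```python
# A minimal rule-based version of the two steps: map physiological
# features to a crowd category, then look the category up in a preset
# category-to-text-set correspondence.
CATEGORY_TEXT_SETS = {
    "teenage male": "50 Classic Prose Pieces Teenagers Must Read",
    "low mood": "Inspirational Article Selection",
    "pupil": "Three Hundred Poems",
}

def crowd_category(features):
    if features.get("expression") == "sad":
        return "low mood"
    age, sex = features.get("age", 0), features.get("sex")
    if 13 <= age <= 19 and sex == "male":
        return "teenage male"
    if age < 13:
        return "pupil"
    return "general"

def text_set_for(features):
    return CATEGORY_TEXT_SETS.get(crowd_category(features))
```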
In a possible implementation manner, when the attribute feature is a physiological feature of the user, the step in which the reading device pushes a text set corresponding to the attribute feature according to a preset condition may further include the following steps:
the method comprises the following steps: the reading device determines whether the user is a historical user based on the biometric feature.
The reading device can judge, based on the determined physiological features and the preset condition, whether the user has read with the reading device before. For example, taking the case where the determined physiological features include a fingerprint, the reading device may collect the fingerprint of each historical user who performs reading to form a fingerprint library; when the reading device collects the current user's fingerprint information, it can query whether that fingerprint is in the fingerprint library, and if a match exists, determine that the user is a historical user. In the embodiment of the present invention, the reading device may also determine whether the user is a historical user according to physiological features such as a voiceprint, an iris, or a facial image; which physiological features the reading device uses for this determination is not limited in the embodiment of the present invention.
Of course, if the user has already logged in his/her reading account after sending the login request, the reading device may determine whether the user is a historical user directly from the data in the reading account of the user.
Step two: if the user is determined to be a historical user, the reading device may determine the historical text set of the user based on the historical record of the user selected text.
In a specific implementation process, a record of the texts selected by the user each time the user reads with the reading system may be stored in the reading device or the cloud server. Therefore, when the user is determined to be a historical user, the reading device can acquire the user's text-selection history from the local storage device or the cloud server, and then determine the historical text set from that history, where the texts in the historical text set are texts the user has read before. For example, the historical text set may include all of the texts the user has previously read, or only those the user has read a greater number of times.
Step three: and pushing the text set corresponding to the historical text set to the user by the reading device.
After determining the historical text set from the user's text-selection history, the reading device may push a text set corresponding to the historical text set to the user. In a specific implementation process, the text set corresponding to the historical text set may be the historical text set itself; a set consisting of the texts read more frequently in the historical text set; or a set of other texts similar or related to the texts in the historical text set. That is, the reading device can analyze the historical text set and push a text set intelligently; for example, if the texts in the user's historical text set are all mystery prose, the reading system determines that the user prefers mystery prose, and the reading device can then push a text set formed of mystery prose the user has not yet read.
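The "analyze the history, then push unread texts of the preferred kind" idea can be sketched with a small catalog. The catalog, genre labels, and titles are invented for illustration:

```python
# Sketch of intelligent pushing from a reading history: infer the user's
# preferred genre from past selections, then push unread texts of that
# genre from the catalog.
from collections import Counter

CATALOG = {
    "Moonlight over the Lotus Pond": "prose",
    "Rain Lane": "poetry",
    "Listening to Rain": "prose",
    "Ode to the Yangtze": "prose",
}

def recommend(history):
    genres = Counter(CATALOG[t] for t in history if t in CATALOG)
    if not genres:
        return []
    favorite = genres.most_common(1)[0][0]  # most frequently read genre
    return [t for t, g in CATALOG.items()
            if g == favorite and t not in history]
```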
In a possible implementation manner, when the attribute feature is an environmental feature of the environment where the user is located, the step in which the reading device pushes a text set corresponding to the attribute feature according to a preset condition may be implemented by the following steps:
the method comprises the following steps: the reading device determines the environment type of the environment where the user is located based on the environment characteristics.
The reading device can determine the environment category of the environment where the user is located based on the determined environmental features and the preset condition, and the environment where the user is located may belong to one or more environment categories. For example, the environment may belong to both the "rainy day" category and the "spring" category, in which case the environment belongs to two categories.
For example, if the environmental feature of the user's environment is "weather: rain", the reading device can perform matching according to preset matching conditions and determine the environment category as "rainy day"; if the environmental feature is "month: March", the reading device can perform matching according to preset matching conditions and determine the environment category as "spring".
Step two: and the reading device pushes a text set corresponding to the environment category where the user is located to the user based on the preset corresponding relation between the environment category and the text.
After the environment type of the environment where the user is located is determined, the reading device can determine texts corresponding to the environment type of the environment where the user is located based on a preset corresponding relationship between the environment type and the texts, combine the corresponding texts into a text set, and further push the combined text set to the user; of course, the reading device may also directly determine the text set corresponding to the environment category of the environment where the user is located based on the preset correspondence between the environment category and the text, and then push the determined text set to the user.
For example, when the environment category of the user's environment is "rainy day", the reading device may determine, according to the correspondence between environment categories and texts, that the four texts corresponding to "rainy day" are "Listen to That Cold Rain" by Yu Guangzhong, "Under the Scorching Sun and the Rainstorm" by Lao She, "Listening to the Rain" by Ji Xianlin, and "Rain Lane" by Dai Wangshu; the reading device may combine these four texts into one text set and push it to the user.
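The same two-step pattern as the crowd-category case applies here, with the added detail that an environment may fall into several categories at once. The category rules and titles are illustrative assumptions:

```python
# Environmental features -> environment categories -> pushed text set.
# A user's environment may match more than one category (e.g. both
# "rainy day" and "spring"), so texts from every matched category are
# collected.
CATEGORY_TEXTS = {
    "rainy day": ["Listen to That Cold Rain", "Rain Lane"],
    "spring": ["Spring"],
}

def environment_categories(env):
    cats = []
    if env.get("weather") == "rain":
        cats.append("rainy day")
    if env.get("month") in (3, 4, 5):  # March-May treated as spring
        cats.append("spring")
    return cats

def texts_for_environment(env):
    out = []
    for cat in environment_categories(env):
        out.extend(CATEGORY_TEXTS.get(cat, []))
    return out
```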
In one possible embodiment, the reading device collects reading sound emitted by the user based on the reading text to generate the reading audio file, which may be implemented as follows:
the reading device obtains and plays the configuration music corresponding to the reading text, collects reading sound emitted by the user based on the reading text and the configuration music sound corresponding to the configuration music, and further generates reading audio files comprising the reading sound and the configuration music sound.
In a specific implementation process, the reading-aloud device may read the configuration music corresponding to the reading-aloud text from the local storage device, may also download the configuration music corresponding to the reading-aloud text from the cloud server, or search and download the configuration music corresponding to the reading-aloud text from the internet. For example, when the configuration music corresponding to the reading text is the light music "city of sky", the reading device may read the "city of sky" audio file in the local storage device, may download the "city of sky" audio file from the cloud server, may search and download the "city of sky" audio file from each large music website, and the like.
In addition, in the embodiment of the present invention, the reading device may obtain the configuration music in a predetermined order, for example, the reading device may first search whether the "city of sky" audio file is stored in the local storage device, search the "city of sky" audio file from the cloud server if the "city of sky" audio file is not locally stored, and search the "city of sky" audio file from each of the large music websites if the "city of sky" audio file is not locally stored in the cloud server. Of course, if the reading device cannot obtain the configuration music corresponding to the reading text in any way, the reading device may prompt the user to determine that the other music is the configuration music corresponding to the reading text, or the reading device may automatically determine that the other music is the configuration music corresponding to the reading text.
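The predetermined search order (local storage, then cloud server, then music websites, then a fallback) is a simple source chain. The source callables and fallback value below are assumptions for the sketch:

```python
# Sketch of the predetermined order for obtaining configuration music:
# each source is a callable that returns the audio (or None when the
# title is not found there); the first hit wins, and a fallback is
# returned when every source fails.
def obtain_config_music(title, sources, fallback="substitute music"):
    """sources: ordered callables, e.g. [check_local, check_cloud, check_web]."""
    for source in sources:
        audio = source(title)
        if audio is not None:
            return audio
    return fallback  # prompt the user, or auto-pick other music
```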
At the same time or after the reading device obtains the configuration music corresponding to the reading text, the reading device can play the configuration music through the loudspeaker. Specifically, the reading-alowing device may start playing the configuration music after obtaining the complete configuration music, or may play the configuration music while downloading the configuration music in the form of streaming media.
Furthermore, when the user starts reading, the reading device can collect the reading sound emitted by the user based on the reading text and the configuration music sound corresponding to the configuration music. In a specific implementation process, there may be multiple ways for the reading device to collect the reading sound and the configuration music sound, two of which are exemplified below:
in the first way, the reading device collects the user's reading sound through the microphone and, at the same time, collects the configuration music sound being played. For example, the reading device may play the configuration music through the speaker, so that the user's reading sound and the configuration music sound blend together, and the reading device collects the configuration music sound played by the speaker while collecting the reading sound. The reading audio generated from the reading sound and music sound collected in this way has a stronger live atmosphere.
In the second way, the reading device collects only the user's reading sound through the microphone, and in this way the reading device can directly use the configuration music audio as the configuration music sound. For example, the reading device may play the configuration music to the user through earphones, so that the microphone does not capture the played configuration music sound when capturing the user's voice. The reading audio generated from the reading sound collected this way and the music sound better suppresses noise interference, yielding clearer and more pleasant reading audio.
After the reading sound and the configuration music sound are collected, the reading device can generate a reading audio file comprising the reading sound and the configuration music sound.
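In the second collection way, the music is mixed in digitally rather than recorded through the microphone. A toy illustration, with plain numbers standing in for PCM samples and an invented gain parameter:

```python
# Toy sketch of generating the reading audio in the second way: the
# microphone captures only the speech, and the configuration-music audio
# is mixed in afterwards, attenuated so the voice stays in front.
def mix(speech, music, music_gain=0.5):
    """Mix speech with attenuated background music, sample by sample."""
    n = max(len(speech), len(music))
    speech = speech + [0.0] * (n - len(speech))  # pad shorter track
    music = music + [0.0] * (n - len(music))
    return [s + music_gain * m for s, m in zip(speech, music)]
```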
In one possible embodiment, before the reading device obtains and plays the configuration music corresponding to the reading text, the reading device may further obtain at least one user parameter related to the user and/or at least one text parameter related to the reading text, and determine the configuration music corresponding to the reading text based on the at least one user parameter and/or the at least one text parameter.
The user parameter may include the attribute parameter of the user, for example, the user parameter may include physiological characteristic information of the user, and may also include an environmental parameter where the user is located, and the like. For the description of the user parameter, reference may be made to the foregoing description of the attribute parameter of the user, and details are not repeated here.
The text parameters may include the title, word count, author, language, genre, subject matter, style, era of composition, synopsis, expected reading time, and other parameters of the reading text. In a specific implementation process, the reading device may obtain the text parameters from its local storage device, from the cloud server, or by searching the internet, for example obtaining the text parameters of the reading text from Douban. In addition, the text parameters of the reading text may also be included in the text file of the reading text. The embodiment of the present invention does not limit how the reading device obtains the text parameters related to the reading text.
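The three parameter sources named above (local storage, cloud server, internet search) naturally form a fallback chain. A minimal sketch, with hypothetical callables standing in for the real local store, cloud API, and web search:

```python
def get_text_params(title, local_store, cloud_fetch, web_search):
    """Return text parameters for `title`, trying local storage first,
    then the cloud server, then an internet search.

    `local_store` is a dict-like cache; `cloud_fetch` and `web_search`
    are callables returning a parameter dict or None. All of these names
    are illustrative assumptions, not APIs from the patent.
    """
    for source in (local_store.get, cloud_fetch, web_search):
        params = source(title)
        if params:
            return params
    return {}  # nothing found in any source
```

A caller would pass real lookup functions; the order encodes the preference for cheap local lookups over network requests.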
After obtaining the at least one user parameter and/or the at least one text parameter, the reading device may determine the configuration music corresponding to the reading text based on those parameters. For example, the reading device may determine the configuration music matching the user parameters and/or the text parameters according to a correspondence between those parameters and configuration music, and the like.
In one possible embodiment, the reading device determines the configuration music corresponding to the reading text based on the at least one user parameter and/or the at least one text parameter by:
First, the reading device determines a first music type based on the at least one user parameter and/or the at least one text parameter; then, the reading device determines the configuration music corresponding to the reading text from at least one music object in the first music type.
The first music type may comprise at least one music object, each of which belongs to the first music type. That is, in the embodiment of the present invention, the first music type can be understood as a collection of at least one music object belonging to that type. For example, the first music type may be "cheerful piano music", "antique-style light music", "selected cartoon soundtrack music", or the like.
In a specific implementation process, determining the first music type based on the at least one user parameter and/or the at least one text parameter may be done by matching the parameters against a preset correspondence to determine the first music type.
After determining the first music type, the reading device may determine the configuration music corresponding to the reading text from the first music type. In specific implementations, the reading device may determine one or more pieces of configuration music from the first music type. For example, the reading device may determine the number of pieces of configuration music required according to the length of the reading text: only one piece may be selected when the reading text is short, multiple pieces when the reading text is long, and so on.
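Choosing the number of configuration-music pieces from the length of the reading text can be as simple as a ceiling division. The 400-words-per-piece threshold below is an invented placeholder, not a figure from the patent:

```python
def pieces_needed(word_count, words_per_piece=400):
    """Number of configuration-music pieces for a text of `word_count` words.

    The `words_per_piece` threshold is an assumed tuning constant.
    Always returns at least 1, so even a very short text gets one piece.
    """
    return max(1, -(-word_count // words_per_piece))  # ceiling division
```
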
In a specific implementation process, the configuration music may be determined automatically by the reading device from the first music type, or determined from the first music type according to the user's selection. For example, after the reading device determines the first music type, information such as the names and authors of the music objects in the first music type may be displayed on the touch display screen, and the user may select the configuration music from the displayed music objects through the touch display screen.
In a possible embodiment, the reading device determines the first music type based on the at least one user parameter and/or the at least one text parameter in one of several modes, and the reading device may choose among them according to its settings or according to which user parameters and text parameters are available:
In the first mode, the reading device determines the type of the user as a first user type based on the at least one user parameter, and determines the first music type corresponding to the first user type based on a first correspondence between user types and music types.
In a specific implementation, the reading device may determine the type of the user based on the user parameters, that is, classify the user according to those parameters. For example, the user parameters may include the user's age, and the reading device may classify the user as "infant", "teenager", "middle-aged", "elderly", and so on; the user parameters may also include the user's gender, and the reading device may classify the user as "male", "female", and the like; the user parameters may also include a facial image of the user, from which the reading device can analyze the user's mood and classify the user as "happy", "sad", "excited", "indifferent", and so on. In the embodiment of the present invention, when there are multiple user parameters, the reading device may also determine the type of the user from their combination; for example, for the user parameters "weather: rainy; gender: female", the reading device may determine the type of the user as "women in rainy weather", and so on. Of course, if the user's parameters do not match any type exactly, the type closest to the user's parameters may be chosen.
After determining the first user type of the user, the reading device may determine the first music type corresponding to the first user type based on the first correspondence between user types and music types. For example, if the first user type is "young women", matching against the first correspondence may determine the first music type corresponding to "young women" to be "selected cartoon soundtrack music"; if the first user type is "women in rainy weather", the corresponding first music type may be determined to be "songs about rainy days", and so on.
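The first mode — classify the user, then look the class up in a first correspondence, falling back to the nearest entry when nothing matches exactly — might look like the sketch below. The table entries and the word-overlap notion of "closest" are illustrative assumptions:

```python
FIRST_CORRESPONDENCE = {  # user type -> music type (example entries only)
    "young women": "selected cartoon soundtrack music",
    "women in rainy weather": "songs about rainy days",
}

def music_type_for_user(user_type, table=FIRST_CORRESPONDENCE):
    """Exact lookup first; otherwise pick the entry whose key shares the
    most words with the given user type (a crude 'closest type' fallback,
    assumed here for illustration)."""
    if user_type in table:
        return table[user_type]
    def overlap(key):
        return len(set(key.split()) & set(user_type.split()))
    return table[max(table, key=overlap)]
```
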
In the second mode, the reading device determines the reading text to be of a first text type based on the at least one text parameter, and determines the first music type corresponding to the first text type based on a second correspondence between text types and music types.
In a specific implementation, the reading device may determine the type of the text based on the text parameters, that is, classify the reading text according to those parameters. For example, the text parameters may include the style of the text, and the reading device may classify the text as "commemorative", "inspirational", "humorous", and so on; the text parameters may also include the author of the text, such as the text parameter "author: Zhuyijing", in which case the reading device may classify the text into the "Zhuyijing article" type, and so on. In the embodiment of the present invention, when there are multiple text parameters, the reading device may also determine the type of the text from their combination; for example, for the text parameters "style: recollection; genre: ancient poem", the reading device may determine the type of the text as "recollective poetry", and so on. Of course, if the text parameters of the reading text do not match any type exactly, the type closest to the text parameters may be chosen.
After determining the first text type of the reading text, the reading device may determine the first music type corresponding to the first text type based on the second correspondence between text types and music types. For example, if the first text type is "recollective poetry", matching against the second correspondence may determine the corresponding first music type to be "antique-style light music"; if the first text type is "humorous", the corresponding first music type may be determined to be "cheerful piano music", and so on.
In the third mode, the reading device determines the type of the user as a first user type based on the at least one user parameter and determines the reading text to be of a first text type based on the at least one text parameter; it then determines the first music type corresponding to the first user type and the first text type based on a third correspondence among user types, text types, and music types.
In the embodiment of the invention, the reading device can determine the type of the user based on the user parameter and determine the type of the text based on the text parameter. Moreover, the specific implementation of determining the type of the user may refer to the manner of determining the type of the user in the first manner, and the specific implementation of determining the type of the text may refer to the manner of determining the type of the text in the second manner.
After determining the first user type of the user and the first text type of the reading text, the reading device may determine, based on the third correspondence among user types, text types, and music types, the first music type corresponding to the first user type and the first text type. That is, in this mode both the type of the user and the type of the text are consulted when determining the first music type.
For example, when the first user type is "mood: happy" and the first text type is "ancient poem", matching against the third correspondence may determine that the first music type corresponding to "mood: happy" and "ancient poem" is "cheerful guzheng music"; when the first user type is "season: spring" and the first text type is "inspirational", matching against the third correspondence may determine that the corresponding first music type is "uplifting symphonic music about spring", and so on.
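The third mode keys the correspondence on the pair (user type, text type). A minimal sketch, with made-up table entries loosely modeled on the examples above:

```python
THIRD_CORRESPONDENCE = {  # (user type, text type) -> music type; example entries
    ("mood: happy", "ancient poem"): "cheerful guzheng music",
    ("season: spring", "inspirational"): "uplifting symphonic music about spring",
}

def music_type_for(user_type, text_type, table=THIRD_CORRESPONDENCE):
    """Return the music type for the (user type, text type) pair, or None
    when the pair has no entry — a caller could then fall back to the
    single-parameter modes 1 or 2."""
    return table.get((user_type, text_type))
```
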
In one possible implementation, after the reading device collects the reading sound emitted by the user based on the reading text to generate the reading audio file, the reading device may further obtain from the user an obtaining request for obtaining the reading audio file. Based on this request, the cloud server performs a second obtaining operation of obtaining the corresponding payment from the user-specified account, and after the payment succeeds, the reading device transmits the reading audio file to the user-specified device. That is, after the reading device generates the reading audio file, the user may obtain it by paying a certain fee.
The obtaining request may include a user-specified delivery manner. For example, the user may specify delivery by e-mail, file transfer, network-disk sharing, social-network sharing, optical-disc burning, and the like. In a specific implementation process, the user may specify one or more delivery manners, and the reading device may charge different amounts to the user-specified account depending on the delivery manners specified.
The obtaining request may also include the quality of the reading audio file the user wishes to obtain. The user may specify the quality of the reading audio file; for example, the reading device may encode the reading audio file at different bit rates, and generally the higher the bit rate, the better the audio quality. In a specific implementation process, the reading device may charge different amounts to the user-specified account according to the quality the user selects. For example, when the user chooses a low-quality reading audio file (e.g., 56 Kbps), the reading device may charge 0 from the user-specified account, i.e., the user may obtain the low-quality reading audio file for free; and when the user chooses a high-quality reading audio file (e.g., lossless audio), the reading device may charge a predetermined amount from the user-specified account, and so on. The embodiment of the present invention does not limit which quality of reading audio file corresponds to which amount.
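The quality-dependent fee can be modeled as a simple lookup table. The tiers and prices below are invented placeholders; only "low quality is free" is suggested by the text itself:

```python
PRICE_BY_QUALITY = {  # quality tier -> fee in currency units (example values)
    "56kbps": 0.0,    # low quality: free, as in the example above
    "128kbps": 2.0,   # assumed intermediate tier
    "lossless": 5.0,  # assumed premium tier
}

def fee_for(quality, prices=PRICE_BY_QUALITY):
    """Look up the fee to charge for the requested quality tier."""
    if quality not in prices:
        raise ValueError(f"unknown quality tier: {quality!r}")
    return prices[quality]
```
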
In a specific implementation process, after obtaining the obtaining request of the user, the reading device may send the obtaining request to the cloud server, and the cloud server may perform a second obtaining operation based on the obtaining request. In the embodiment of the present invention, the specific implementation manner that the cloud server performs the second obtaining operation may refer to the implementation manner that the cloud server performs the first obtaining operation to obtain the money corresponding to the login request, which is not described herein again.
In a specific implementation process, when transmitting the reading audio file to the user-specified device, the reading device may transmit the reading audio file itself or a download link to it. The reading device may further include an optical-disc burner: the reading device may burn the reading audio file onto an optical disc, and the user may then take the disc storing the reading audio file directly out of the reading device, and so on. The embodiment of the invention does not limit how the reading device transmits the reading audio file to the user-specified device.
In one possible implementation, after the reading device collects the reading sound emitted by the user based on the reading text to generate the reading audio file, the reading device may further save the reading audio file in a storage unit in the reading device, and/or transmit the reading audio file to the cloud server for saving.
In the embodiment of the invention, after the reading device generates the reading audio file, the reading audio file can be stored, so that the user can listen to his or her own reading afterwards. For example, the user can listen immediately after finishing reading and, if the reading is judged not good enough, read and record again. As another example, the user can check his or her reading history when using the reading system and, based on the reading audio files the reading system has stored, listen to audio recorded with the reading system before.
In the embodiment of the invention, the reading device can also generate a sharing link when storing the reading audio file, through which the user's reading audio can be heard. According to the user's selection, the reading device can send the sharing link to the user by e-mail, short message, social-network sharing, and the like.
In one possible implementation manner, after the reading device collects the reading sound emitted by the user based on the reading text to generate the reading audio file, the reading device or the cloud server may further obtain at least one other reading audio file of at least one other user corresponding to the reading text, and evaluate the reading audio file based on the at least one other reading audio file to obtain an evaluation result for characterizing the reading quality of the user.
The other reading audio files can be reading audio files of other users specified by the user; for example, the user can select a friend's reading audio file to compare with his or her own, comparing reading levels among friends. They can also be the highest-rated reading audio files determined by the reading system, and the like. Moreover, the other reading audio files may be recordings of the same text as the user's reading audio file.
In the embodiment of the present invention, the evaluation of the reading audio file by the reading device or the cloud server may include evaluating whether the Mandarin in the user's reading is standard, whether the sentence breaks in the user's reading are accurate, whether the emotion in the user's reading is full and accurate, and the like.
After evaluating the reading audio file, an evaluation result characterizing the user's reading quality can be obtained. For example, the evaluation result may be a score: taking the score of the other reading audio file as 100, a relative score for the user's reading audio file is determined. The evaluation result may also be a comment, which may, for example, point out where the user's reading is good, and what problems exist in the user's reading and how to improve them, and so on.
In the embodiment of the present invention, the obtained evaluation result may be displayed on the display of the reading device, played through the speaker of the reading device, sent to a device designated by the user, published to the internet, and the like. In addition, in the embodiment of the present invention, when the cloud server or the reading device obtains the user's evaluation result, the method may further include comparing the user's evaluation result with the evaluation results of all other users to generate a ranking.
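Scoring against a reference recording fixed at 100, and then ranking users by their scores, can be sketched as below. The single numeric quality metric is an assumption; in practice the evaluation would combine pronunciation, phrasing, and emotion measures:

```python
def relative_score(user_metric, reference_metric):
    """Score the user's reading with the reference recording pinned at 100.

    Both arguments are assumed scalar quality measures produced by some
    upstream evaluation step (not specified by the patent)."""
    return round(100.0 * user_metric / reference_metric, 1)

def ranking(scores):
    """Return user ids ordered by evaluation score, best first."""
    return [user for user, _ in sorted(scores.items(), key=lambda kv: -kv[1])]
```
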
In the embodiment of the invention, the reading system comprises the reading device and the cloud server: the reading device arranged in the carriage building can interact with the user, and the cloud server connected with the reading device through the network can provide service support for the reading device. Through interaction between the user and the reading system, the reading system can automatically complete operations such as login, verification, and voice collection, collect the sound of the user reading the text, and generate the reading audio file. The whole recording process requires no staff participation and is very convenient and easy to operate.
Furthermore, the reading system can automatically verify the user and execute the subsequent steps only when the user is a legitimate user, which prevents improper use by malicious users and makes the whole reading system safer and more reliable.
Furthermore, the reading device can acquire and display the reading text determined by the user, the user can see the reading text in real time without reading materials such as books and the like, and convenience is brought to the user for reading. Meanwhile, the displayed reading text is determined by the user, so that the user can select the text which is desired to be read according to the preference of the user.
Furthermore, the reading device is arranged in the carriage building; the carriage building is very convenient to produce, disassemble, and move, and its cost is low, so the reading system is easy to popularize.
Based on the same inventive concept, and referring to fig. 4, an embodiment of the present invention provides a reading system, which may be the reading system in the foregoing method embodiments. The reading system comprises a reading device 202 arranged in a carriage building and a cloud server 201 connected with the reading device 202 through a network, wherein:
the cloud server 201 is configured to obtain a login request from the reading device 202 or a user terminal of the user, and verify whether the user is a valid user based on the login request;
the reading device 202 is used for obtaining and displaying the reading text determined by the user when the cloud server 201 verifies that the user is a legal user; and collecting the reading sound emitted by the user based on the reading text to generate the reading audio file.
In a possible implementation, the cloud server 201 is further configured to:
when the user is a legal user, a first obtaining operation of obtaining money corresponding to the login request from the user-specified account is executed.
In one possible implementation, the reading device 202 is further configured to:
before obtaining and displaying the reading text determined by the user, obtaining at least one attribute parameter of the user; the at least one attribute parameter is specifically biological characteristic information of the user and/or an environment parameter where the user is located;
determining attribute characteristics of the user based on the at least one attribute parameter; the attribute features are the physiological features of the user and/or the environmental features of the environment where the user is located;
and pushing a text set corresponding to the attribute characteristics according to a preset condition, wherein the text set comprises at least one text, and the text set is used for determining the reading text by the user.
In one possible implementation, the reading device 202 is configured to:
when the attribute features are physiological features of the user, determining the crowd category to which the user belongs based on the physiological features;
and pushing a text set corresponding to the crowd category to which the user belongs to the user based on the preset corresponding relation between the crowd category and the text.
In one possible implementation, the reading device 202 is configured to:
when the attribute features are physiological features of the user, determining whether the user is a historical user or not based on the biological features;
if so, determining a historical text set of the user based on the historical record of the text selected by the user;
and pushing a text set corresponding to the historical text set to the user.
In one possible implementation, the reading device 202 is configured to:
when the attribute characteristics are the environmental characteristics of the environment where the user is located, determining the environmental category of the environment where the user is located based on the environmental characteristics;
and pushing a text set corresponding to the environment category where the user is located to the user based on the preset corresponding relation between the environment category and the text.
In one possible implementation, the reading device 202 is configured to:
obtaining and playing configuration music corresponding to the reading text;
collecting reading sound emitted by a user based on the reading text and configuration music sound corresponding to the configuration music;
and generating a reading audio file comprising the reading sound and the configuration music sound.
In one possible implementation, the reading device 202 is further configured to:
before configuration music corresponding to the reading text is obtained and played, at least one user parameter related to a user and/or at least one text parameter related to the reading text are obtained;
and determining, based on the at least one user parameter and/or the at least one text parameter, the configuration music corresponding to the reading text.
In one possible implementation, the reading device 202 is configured to:
determining a first music type based on at least one user parameter and/or at least one text parameter;
and determining the configuration music corresponding to the reading text from at least one music object in the first music type.
In one possible implementation, the reading device 202 is configured to:
determining a type of a user as a first user type based on at least one user parameter; determining a first music type corresponding to the first user type based on the first corresponding relation between the user type and the music type; or
Determining the reading text as a first text type based on at least one text parameter; determining a first music type corresponding to the first text type based on a second correspondence between the text type and the music type; or
Determining the type of the user as a first user type based on the at least one user parameter and determining the reading text as a first text type based on the at least one text parameter; and determining a first music type corresponding to the first text type and the first user type based on a third correspondence among the user type, the text type, and the music type.
In one possible implementation, the reading device 202 is further configured to:
after the reading sound sent by the user based on the reading text is collected to generate a reading audio file, obtaining an obtaining request for obtaining the reading audio file from the user;
the cloud server 201 is further configured to perform a second obtaining operation of obtaining money corresponding to the obtaining request from the user-specified account based on the obtaining request;
the reading apparatus 202 is further configured to transmit the reading audio file to the user-specified device after successful payment is obtained.
In one possible implementation, the reading device 202 is further configured to:
after collecting the reading sound emitted by the user based on the reading text to generate a reading audio file, saving the reading audio file in a storage unit in the reading device 202; and/or
And transmitting the reading audio file to the cloud server 201 for storage.
In one possible implementation, reading device 202 or cloud server 201 is further configured to:
after the reading sound emitted by the user based on the reading text is collected to generate a reading audio file, at least one other reading audio file of at least one other user corresponding to the reading text is obtained;
and evaluating the reading audio file based on at least one other reading audio file to obtain an evaluation result for representing the reading quality of the user.
Based on the same inventive concept, an embodiment of the present invention further provides a computer-readable storage medium, where instructions are stored, and when the instructions are loaded and executed by a processor, the recording method in the foregoing method embodiment may be implemented.
The technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
Based on the same inventive concept, and referring to fig. 5, an embodiment of the present invention further provides a computer apparatus, which may include a processor 301 and a storage device 302 connected to the processor 301, wherein the processor 301 and the storage device 302 may be connected to the same bus 300. The storage device 302 stores instructions, and when the instructions are loaded and executed by the processor 301, the recording method described in the foregoing method embodiments can be implemented.
The processor 301 may be a CPU (central processing unit) or an ASIC (Application Specific Integrated Circuit), may be one or more Integrated circuits for controlling program execution, may be a baseband chip, and the like.
The number of storage devices 302 may be one or more, and is illustrated in FIG. 5 as one storage device 302. The storage device 302 may be a ROM (Read Only Memory), a RAM (Random Access Memory), a disk Memory, a cloud storage device, or the like, and in addition, the storage device 302 may also be used for storing data and the like.
By programming the processor 301, the code corresponding to any of the aforementioned recording methods is solidified into the chip, so that the chip can execute any of the aforementioned recording methods when running. How to program the processor 301 is a technique known to those skilled in the art and is not described here again.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (24)

1. A recording method is applied to a reading system, and is characterized in that the reading system comprises a reading device arranged in a carriage building and a cloud server connected with the reading device through a network, and the method comprises the following steps:
the cloud server obtains a login request from the reading device or a user terminal of the user, and verifies whether the user is a legal user based on the login request;
if yes, the reading device obtains and displays the reading text determined by the user;
the cloud server executes a first obtaining operation of obtaining money corresponding to the login request from the user-specified account; the user-specified account comprises a payment account associated with the reading account of the user or a payment account specified in the login request; the money corresponding to the login request comprises money of an amount corresponding to the information content in the login request; the information content comprises at least one of duration of time that the user expects to use the reading system, text space that the user expects to use the reading system for reading and identification information; the identification information is used for identifying the grade of the reading device;
the reading device collects reading sound emitted by the user based on the reading text to generate a reading audio file;
the reading device or the cloud server obtains at least one other reading audio file corresponding to the reading text, and evaluates the reading audio file based on the at least one other reading audio file to obtain an evaluation result; the evaluation result is used for representing the reading quality of the user, and comprises at least one of whether the Mandarin is standard, whether the sentence breaks are accurate, and whether the emotion is full; wherein the at least one other reading audio file is a reading audio file of another user specified by the user or the highest-rated reading audio file determined by the reading system.
2. The method as recited in claim 1, wherein prior to the reading device obtaining and displaying the reading text determined by the user, the method further comprises:
the reading device obtains at least one attribute parameter of the user; the at least one attribute parameter is biometric information of the user and/or an environmental parameter of the user;
the reading device determines an attribute feature of the user based on the at least one attribute parameter; the attribute feature is a physiological feature of the user and/or an environmental feature of the environment where the user is located;
and the reading device pushes a text set corresponding to the attribute feature according to a preset condition, wherein the text set comprises at least one text, and the text set is used by the user to determine the reading text.
3. The method as claimed in claim 2, wherein when the attribute feature is a physiological feature of the user, the reading device pushes a text set corresponding to the attribute feature according to a preset condition, including:
the reading device determines the crowd category to which the user belongs based on the physiological characteristics;
the reading device pushes a text set corresponding to the crowd category to which the user belongs to the user based on the preset corresponding relation between the crowd category and the text.
4. The method as claimed in claim 2, wherein when the attribute feature is a physiological feature of the user, the reading device pushes a text set corresponding to the attribute feature according to a preset condition, including:
the reading device determines whether the user is a historical user based on the biological characteristics;
if so, the reading device determines the historical text set of the user based on the historical record of the text selected by the user;
and the reading device pushes a text set corresponding to the historical text set to the user.
5. The method as claimed in claim 2, wherein when the attribute feature is an environmental feature of an environment where the user is located, the reading device pushes a text set corresponding to the attribute feature according to a preset condition, including:
the reading device determines the environment category of the environment where the user is located based on the environment characteristics;
the reading device pushes a text set corresponding to the environment category where the user is located to the user based on a preset corresponding relation between the environment category and the text.
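Claims 2-5 describe pushing a candidate text set from preset correspondences (crowd category, reading history, environment category). A minimal sketch under stated assumptions — the category names, tables, and the function `push_text_set` are all hypothetical examples, not from the patent:

```python
# Preset correspondence tables (claim 3: crowd -> texts; claim 5: env -> texts).
CROWD_TEXTS = {
    "child":  ["The Little Prince (excerpt)"],
    "adult":  ["Ode to the Yellow River"],
    "senior": ["Tranquil Night Thoughts"],
}
ENV_TEXTS = {
    "quiet": ["Long lyric poems"],
    "noisy": ["Short rhythmic verses"],
}

def push_text_set(crowd=None, env=None, history=None):
    """Return a text set from which the user determines the reading text."""
    if history:                       # claim 4: returning (historical) user
        return list(history)
    if crowd in CROWD_TEXTS:          # claim 3: crowd category lookup
        return CROWD_TEXTS[crowd]
    if env in ENV_TEXTS:              # claim 5: environment category lookup
        return ENV_TEXTS[env]
    return []
```

The priority order among the three branches is a design choice of this sketch; the claims present them as alternatives rather than a fixed precedence.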
6. The method as recited in claim 1, wherein the reading device collecting the reading sound made by the user based on the reading text to generate a reading audio file comprises:
the reading device obtains and plays the configuration music corresponding to the reading text;
the reading device collects reading sound emitted by a user based on the reading text and configuration music sound corresponding to the configuration music;
the reading device generates a reading audio file comprising the reading sound and the configuration music sound.
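The combining step of claim 6 (one audio file containing both the reading voice and the configuration music) amounts to mixing two streams. A toy sketch on plain sample lists, assuming equal sample rates; the function name `mix` and the 0.3 gain are illustrative, and a real device would mix PCM streams with proper gain control and clipping protection:

```python
def mix(voice, music, music_gain=0.3):
    """Mix two equal-rate sample sequences; the music track is attenuated
    so the reading voice stays in the foreground."""
    n = max(len(voice), len(music))
    # Zero-pad the shorter track so both cover the full recording.
    voice = voice + [0.0] * (n - len(voice))
    music = music + [0.0] * (n - len(music))
    return [v + music_gain * m for v, m in zip(voice, music)]
```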
7. The method as recited in claim 6, wherein, before the reading device obtains and plays the configuration music corresponding to the reading text, the method further comprises:
the reading device obtains at least one user parameter related to the user and/or at least one text parameter related to the reading text;
the reading device determines the configuration music corresponding to the reading text based on the at least one user parameter and/or the at least one text parameter.
8. The method as recited in claim 7, wherein the reading device determining the configuration music corresponding to the reading text based on the at least one user parameter and/or the at least one text parameter comprises:
the reading device determines a first music type based on the at least one user parameter and/or the at least one text parameter;
the reading device determines configuration music corresponding to the reading text from at least one music object in the first music type.
9. The method of claim 8, wherein the reading device determining a first music type based on the at least one user parameter and/or the at least one text parameter comprises:
the reading device determines the type of the user to be a first user type based on the at least one user parameter; determining a first music type corresponding to the first user type based on a first corresponding relation between the user type and the music type; or
The reading device determines the reading text to be a first text type based on the at least one text parameter; determining a first music type corresponding to the first text type based on a second corresponding relation between the text type and the music type; or
The reading device determines the type of the user to be a first user type based on the at least one user parameter, and determines the reading text to be a first text type based on the at least one text parameter; and determines a first music type corresponding to the first text type and the first user type based on a third corresponding relation among user types, text types and music types.
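Claim 9's three correspondence relations (user type to music type, text type to music type, and the joint relation) can be sketched as lookup tables. All keys, music-type names, and the function `pick_music_type` are hypothetical illustrations:

```python
USER_TO_MUSIC = {"child": "playful"}                    # first correspondence
TEXT_TO_MUSIC = {"poetry": "guqin"}                     # second correspondence
BOTH_TO_MUSIC = {("child", "poetry"): "light guzheng"}  # third correspondence

def pick_music_type(user_type=None, text_type=None):
    """Select the first music type per claim 9's three alternatives."""
    if user_type and text_type:
        return BOTH_TO_MUSIC.get((user_type, text_type))
    if user_type:
        return USER_TO_MUSIC.get(user_type)
    if text_type:
        return TEXT_TO_MUSIC.get(text_type)
    return None
```

The configuration music of claim 8 would then be chosen from the music objects filed under the returned type.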
10. The method of any of claims 1-9, wherein, after the reading device collects the reading sound made by the user based on the reading text to generate a reading audio file, the method further comprises:
the reading device obtains an acquisition request from the user for obtaining the reading audio file;
the cloud server performs, based on the acquisition request, a second obtaining operation of obtaining the payment corresponding to the acquisition request from the user-specified account;
and after the payment is successfully obtained, the reading device transmits the reading audio file to a user-specified device.
11. The method of any of claims 1-9, wherein, after the reading device collects the reading sound made by the user based on the reading text to generate a reading audio file, the method further comprises:
the reading device saves the reading audio file in a storage unit in the reading device; and/or
And the reading device transmits the reading audio file to the cloud server for storage.
12. A reading system, characterized by comprising a reading device arranged in a booth-type structure and a cloud server connected with the reading device through a network, wherein:
the cloud server is used for obtaining a login request from the reading device or a user terminal of the user and verifying whether the user is a legal user or not based on the login request;
the cloud server is further used for performing, when the user is a legitimate user, a first obtaining operation of obtaining the payment corresponding to the login request from the user-specified account; the user-specified account comprises a payment account associated with the reading account of the user, or a payment account specified in the login request; the payment corresponding to the login request is an amount corresponding to the information content in the login request; the information content comprises at least one of: the duration for which the user expects to use the reading system, the length of text the user expects to read using the reading system, and identification information; the identification information identifies the grade of the reading device;
the reading device is used for obtaining and displaying the reading text determined by the user when the cloud server verifies that the user is a legal user; collecting reading sound emitted by the user based on the reading text to generate a reading audio file;
the reading device or the cloud server obtains at least one other reading audio file corresponding to the reading text, and evaluates the reading audio file based on the at least one other reading audio file to obtain an evaluation result; the evaluation result characterizes the reading quality of the user, and comprises at least one of: whether the Mandarin pronunciation is standard, whether the sentence breaks are accurate, and whether the delivery is emotionally expressive; wherein the at least one other reading audio file is a reading audio file of another user specified by the user, or the highest-rated reading audio file determined by the reading system.
13. The reading system of claim 12, wherein the reading device is further configured to:
before obtaining and displaying the reading text determined by the user, obtaining at least one attribute parameter of the user; the at least one attribute parameter is specifically the biological characteristic information of the user and/or the environmental parameter of the user;
determining attribute characteristics of the user based on the at least one attribute parameter; the attribute features are specifically physiological features of the user and/or environmental features of the environment where the user is located;
and pushing a text set corresponding to the attribute characteristics according to a preset condition, wherein the text set comprises at least one text, and the text set is used for the user to determine the reading text.
14. The reading system of claim 13, wherein the reading device is configured to:
when the attribute feature is a physiological feature of the user, determining a crowd category to which the user belongs based on the physiological feature;
based on the preset corresponding relation between the crowd categories and the texts, pushing text sets corresponding to the crowd categories to which the users belong to the users.
15. The reading system of claim 13, wherein the reading device is configured to:
when the attribute feature is a physiological feature of the user, determining whether the user is a historical user based on the biological feature;
if so, determining a historical text set of the user based on the historical record of the text selected by the user;
and pushing a text set corresponding to the historical text set to the user.
16. The reading system of claim 13, wherein the reading device is configured to:
when the attribute feature is an environmental feature of the environment where the user is located, determining an environmental category of the environment where the user is located based on the environmental feature;
based on the preset corresponding relation between the environment category and the text, pushing a text set corresponding to the environment category where the user is located to the user.
17. The reading system of claim 12, wherein the reading device is configured to:
obtaining and playing configuration music corresponding to the reading text;
collecting reading sound emitted by a user based on the reading text and configuration music sound corresponding to the configuration music;
generating a speakable audio file including the speakable sound and the configuration music sound.
18. The reading system of claim 17, wherein the reading device is further configured to:
before configuration music corresponding to the reading text is obtained and played, obtaining at least one user parameter related to the user and/or at least one text parameter related to the reading text;
and determining the configuration music corresponding to the reading text based on the at least one user parameter and/or the at least one text parameter.
19. The reading system of claim 18, wherein the reading device is configured to:
determining a first music type based on the at least one user parameter and/or the at least one text parameter;
and determining the configuration music corresponding to the reading text from at least one music object in the first music type.
20. The reading system of claim 19, wherein the reading device is configured to:
determining the type of the user as a first user type based on the at least one user parameter; determining a first music type corresponding to the first user type based on a first corresponding relation between the user type and the music type; or
Determining the reading text to be of a first text type based on the at least one text parameter; determining a first music type corresponding to the first text type based on a second corresponding relation between the text type and the music type; or
Determining the type of the user as a first user type based on the at least one user parameter, and the reading text as a first text type based on the at least one text parameter; and determining a first music type corresponding to the first text type and the first user type based on a third corresponding relation among user types, text types and music types.
21. The reading system of any of claims 12-20, wherein the reading device is further configured to:
after collecting the reading sound made by the user based on the reading text to generate a reading audio file, obtaining an acquisition request from the user for obtaining the reading audio file;
the cloud server is further used for performing, based on the acquisition request, a second obtaining operation of obtaining the payment corresponding to the acquisition request from the user-specified account;
and the reading device is further used for transmitting the reading audio file to a user-specified device after the payment is successfully obtained.
22. The reading system of any of claims 12-20, wherein the reading device is further configured to:
after collecting the reading sound emitted by the user based on the reading text to generate a reading audio file, saving the reading audio file in a storage unit of the reading device; and/or
And transmitting the reading audio file to the cloud server for storage.
23. A computer-readable storage medium storing instructions which, when loaded and executed by a processor, implement the recording method of any one of claims 1-11.
24. A computer apparatus, comprising:
a processor; and
a storage device storing instructions, coupled to the processor, which when loaded and executed by the processor implement the recording method of any of claims 1-11.
CN201710482595.2A 2017-06-22 2017-06-22 Recording method, reading system, computer readable storage medium and computer device Active CN107451185B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710482595.2A CN107451185B (en) 2017-06-22 2017-06-22 Recording method, reading system, computer readable storage medium and computer device


Publications (2)

Publication Number Publication Date
CN107451185A CN107451185A (en) 2017-12-08
CN107451185B true CN107451185B (en) 2022-03-04

Family

ID=60486842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710482595.2A Active CN107451185B (en) 2017-06-22 2017-06-22 Recording method, reading system, computer readable storage medium and computer device

Country Status (1)

Country Link
CN (1) CN107451185B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109299290A (en) * 2018-12-07 2019-02-01 广东小天才科技有限公司 Knowledge graph-based score recommendation method and electronic equipment
CN109710735B (en) * 2018-12-20 2021-01-26 广东小天才科技有限公司 Reading content recommendation method based on multiple social channels and electronic equipment
CN111613252B (en) * 2020-04-29 2021-07-27 广州三人行壹佰教育科技有限公司 Audio recording method, device, system, equipment and storage medium
CN114996512A (en) * 2022-04-28 2022-09-02 北京达佳互联信息技术有限公司 Method, device, electronic equipment, medium and program product for generating reading video

Citations (8)

Publication number Priority date Publication date Assignee Title
CN1567472A (en) * 2003-06-17 2005-01-19 张立志 Method for making book capable of speaking
CN101202795A (en) * 2007-11-28 2008-06-18 中国电信股份有限公司 Method and system for audio frequency content user recording
CN102136199A (en) * 2011-03-10 2011-07-27 刘超 On-line electronic book reader and on-line electronic book editor
CN102289956A (en) * 2011-09-16 2011-12-21 张庆 Device for reading electronic book
CN102510384A (en) * 2011-11-23 2012-06-20 深圳市无线开锋科技有限公司 Personal data sharing interactive processing method and server
US8926417B1 (en) * 2012-06-20 2015-01-06 Gabriel E. Pulido System and method for an interactive audio-visual puzzle
CN105304080A (en) * 2015-09-22 2016-02-03 科大讯飞股份有限公司 Speech synthesis device and speech synthesis method
CN106446266A (en) * 2016-10-18 2017-02-22 微鲸科技有限公司 Method for recommending favorite content to user and content recommending system

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
WO2006123575A1 (en) * 2005-05-19 2006-11-23 Kenji Yoshida Audio information recording device
CN101467184B (en) * 2006-04-13 2013-03-13 Igt公司 Method and apparatus for integrating remote host and local display content on a gaming device
JP5595946B2 (en) * 2011-02-04 2014-09-24 日立コンシューマエレクトロニクス株式会社 Digital content receiving apparatus, digital content receiving method, and digital content transmitting / receiving method
CN106713370B (en) * 2016-05-11 2019-09-27 北京得意音通技术有限责任公司 A kind of identity identifying method, server and mobile terminal


Non-Patent Citations (1)

Title
Building the Reading Booth of One's Own (搭建自己心中的朗读亭); Hu Yuqi (胡宇齐); Beijing Daily (《北京日报》); 2017-03-17; 1 *


Similar Documents

Publication Publication Date Title
US12050574B2 (en) Artificial intelligence platform with improved conversational ability and personality development
CN107451185B (en) Recording method, reading system, computer readable storage medium and computer device
US20180167660A1 (en) Providing content responsive to multimedia signals
CN106790054A (en) Interactive authentication system and method based on recognition of face and Application on Voiceprint Recognition
US11646026B2 (en) Information processing system, and information processing method
CN112148922A (en) Conference recording method, conference recording device, data processing device and readable storage medium
CN109240786B (en) Theme changing method and electronic equipment
CN109462603A (en) Voiceprint authentication method, equipment, storage medium and device based on blind Detecting
KR102318642B1 (en) Online platform using voice analysis results
CN108322770A (en) Video frequency program recognition methods, relevant apparatus, equipment and system
Vestman et al. Who do I sound like? showcasing speaker recognition technology by YouTube voice search
CN105551504A (en) Method and device for triggering function application of intelligent mobile terminal based on crying sound
CN112492400A (en) Interaction method, device, equipment, communication method and shooting method
CN109905381A (en) Self-service interview method, relevant apparatus and storage medium
CN111785280B (en) Identity authentication method and device, storage medium and electronic equipment
US10454914B2 (en) System and method for verifying user supplied items asserted about the user for searching
CN111464519A (en) Account registration method and system based on voice interaction
JP2023034563A (en) Matching system and matching method
KR20150110853A (en) An apparatus and a method of providing an advertisement
CN113139122A (en) Information recommendation method, system and equipment
CN115579017A (en) Audio data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant