CN114168045A - Dictation learning method, electronic equipment and storage medium - Google Patents

Dictation learning method, electronic equipment and storage medium

Info

Publication number
CN114168045A
Authority
CN
China
Prior art keywords
dictation
new word
audio
user
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110704871.1A
Other languages
Chinese (zh)
Inventor
刘永坚
白立华
施其明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Ligong Digital Communications Engineering Co ltd
Original Assignee
Wuhan Ligong Digital Communications Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Ligong Digital Communications Engineering Co ltd filed Critical Wuhan Ligong Digital Communications Engineering Co ltd
Priority to CN202110704871.1A priority Critical patent/CN114168045A/en
Publication of CN114168045A publication Critical patent/CN114168045A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/955Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F16/9554Retrieval from the web using information identifiers, e.g. uniform resource locators [URL] by using bar codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Tourism & Hospitality (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The embodiment of the invention discloses a dictation learning method, electronic equipment and a storage medium, which enable a user to take dictation from played new word dictation audio and to view a new word reference answer after the dictation is finished, thereby effectively improving learning efficiency. The method provided by the embodiment of the invention comprises the following steps: responding to the operation of entering the dictation page by the first user, and displaying a dictation scene picture; responding to the operation of starting dictation of the first user, and playing new word dictation audio on the dictation scene picture; and after the new word dictation audio is played, displaying a new word reference answer, wherein the new word reference answer is used for the first user to check the new word dictation content.

Description

Dictation learning method, electronic equipment and storage medium
Technical Field
The present invention relates to the field of electronic devices, and in particular, to a dictation learning method, an electronic device, and a storage medium.
Background
In the prior art, the learning functions operated and managed on existing content platforms are not well developed, and the experience for readers is not particularly good.
Disclosure of Invention
The embodiment of the invention provides a dictation learning method, electronic equipment and a storage medium, which enable a user to take dictation from played new word dictation audio and to view a new word reference answer after the dictation is finished, thereby effectively improving learning efficiency.
A first aspect of the present application provides a dictation learning method, which may include:
responding to the operation of entering the dictation page by the first user, and displaying a dictation scene picture;
responding to the operation of starting dictation of the first user, and playing new word dictation audio on the dictation scene picture;
and after the new word dictation audio is played, displaying a new word reference answer, wherein the new word reference answer is used for the first user to check the new word dictation content.
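By way of illustration only, the three steps above can be sketched on the client side as follows; this is a minimal TypeScript sketch under assumed names (DictationPage, NewWordItem, playAudio and so on are illustrative and are not defined by this application).

interface NewWordItem {
  audioUrl: string;        // new word dictation audio
  referenceAnswer: string; // new word reference answer (original text)
}

class DictationPage {
  constructor(private items: NewWordItem[]) {}

  // Step 1: respond to the operation of the first user entering the dictation page.
  onEnterDictationPage(): void {
    this.render("dictation scene picture");
  }

  // Step 2: respond to the operation of the first user starting dictation.
  async onStartDictation(index = 0): Promise<void> {
    const item = this.items[index];
    await this.playAudio(item.audioUrl); // play the new word dictation audio
    // Step 3: after the audio is played, display the new word reference answer
    // so the first user can check the new word dictation content.
    this.render(`new word reference answer: ${item.referenceAnswer}`);
  }

  private render(view: string): void {
    console.log(`[display] ${view}`);
  }

  private playAudio(url: string): Promise<void> {
    console.log(`[audio] playing ${url}`);
    return new Promise<void>((resolve) => setTimeout(resolve, 100)); // stub playback
  }
}

// Usage:
const page = new DictationPage([{ audioUrl: "word1.mp3", referenceAnswer: "example" }]);
page.onEnterDictationPage();
void page.onStartDictation();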
Optionally, the playing a new word dictation audio on the dictation scene screen in response to the operation of starting dictation by the first user includes:
displaying a payment component on the dictation screen;
responding to the payment operation of the first user on the payment component, and displaying a new word dictation content component;
responding to the operation of the first user for starting dictation of the new word dictation content component, and playing new word dictation audio on the dictation scene picture.
Optionally, the method further includes:
acquiring related information of the new word dictation;
the related information of the new word dictation content comprises: basic information of new word dictation, new word dictation audio and new word dictation original text, wherein the new word dictation original text comprises the new word reference answer; wherein the basic information includes: new words dictation title, sale price, operation platform, label, first big picture and second small picture.
Optionally, the audio of the first preset duration of each new word dictation audio is trial-listening audio, and the audio of the remaining duration is paid audio; or, alternatively,
the first preset number of audios among the new word dictation audios are trial-listening audios, and the remaining audios are paid audios.
Optionally, the displaying a dictation scene picture in response to an operation of the first user entering the dictation page includes:
responding to an operation of the first user scanning a two-dimensional code to enter the dictation page, and displaying a dictation scene picture; or, alternatively,
responding to an operation of the first user on the dictation component, and displaying a dictation scene picture.
A second aspect of the present application provides an electronic device, which may include:
the display module is used for responding to the operation of the first user entering the dictation page and displaying a dictation scene picture;
the processing module is used for responding to the operation of starting dictation of the first user and playing new word dictation audio on the dictation scene picture;
the display module is further configured to display a new word reference answer after the new word dictation audio is played, where the new word reference answer is used for the first user to check new word dictation content.
Optionally, the display module is further configured to display a payment component on the dictation screen; responding to the payment operation of the first user on the payment component, and displaying a new word dictation content component;
the processing module is specifically configured to respond to an operation of the first user to start dictation of the new word dictation content component, and play a new word dictation audio on the dictation scene picture.
Optionally, the processing module is further configured to obtain information related to the new word dictation;
the related information of the new word dictation content comprises: basic information of new word dictation, new word dictation audio and new word dictation original text, wherein the new word dictation original text comprises the new word reference answer; wherein the basic information includes: new words dictation title, sale price, operation platform, label, first big picture and second small picture.
Optionally, the audio of the first preset duration of each new word dictation audio is trial-listening audio, and the audio of the remaining duration is paid audio; or, alternatively,
the first preset number of audios among the new word dictation audios are trial-listening audios, and the remaining audios are paid audios.
Optionally, the display module is specifically configured to respond to an operation of the first user scanning a two-dimensional code to enter the dictation page, and display a dictation scene picture; or, alternatively,
the display module is specifically configured to respond to an operation of the first user on the dictation component and display a dictation scene picture.
A third aspect of the present application provides an electronic device, which may include:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory for performing the method according to the first aspect of the application.
Yet another aspect of the embodiments of the present application provides a computer-readable storage medium, comprising instructions, which when executed on a processor, cause the processor to perform the method of the first aspect of the present application.
In another aspect, an embodiment of the present invention discloses a computer program product, which, when running on a computer, causes the computer to execute the method of the first aspect of the present application.
In another aspect, an embodiment of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where when the computer program product runs on a computer, the computer is caused to execute the method according to the first aspect of the present application.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the application, a dictation scene picture is displayed in response to the operation of a first user entering a dictation page; responding to the operation of starting dictation of the first user, and playing new word dictation audio on the dictation scene picture; and after the new word dictation audio is played, displaying a new word reference answer, wherein the new word reference answer is used for the first user to check the new word dictation content. The user can listen and write according to the played new word dictation audio, and can display the new word reference answer after the dictation is finished, so that the learning efficiency is effectively improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required in the description of the embodiments and the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can still be obtained from these drawings.
FIG. 1 is a schematic diagram of an embodiment of a method for dictation learning in an embodiment of the present application;
FIG. 2A is a schematic diagram of basic information of new words dictation in the embodiment of the present application;
FIG. 2B is a schematic diagram of a dictation original text with addition of new words in the embodiment of the present application;
FIG. 2C is a diagram of a new word dictation table of an embodiment of the present application;
FIG. 3A is a diagram illustrating a dictation component in an embodiment of the present application;
FIG. 3B is a diagram illustrating switching directories according to an embodiment of the present application;
fig. 3C is a schematic diagram of dictation playing performed in the embodiment of the present application;
FIG. 3D is a schematic diagram of an embodiment of the present application showing a new word reference answer;
FIG. 3E is a schematic diagram of a display payment component in an embodiment of the subject application;
FIG. 3F is a schematic illustration of payment made in an embodiment of the present application;
FIG. 4 is a schematic diagram of an embodiment of an electronic device provided in an embodiment of the present application;
fig. 5 is a schematic diagram of another embodiment of the electronic device in the embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a dictation learning method, electronic equipment and a storage medium, which enable a user to take dictation from played new word dictation audio and to view a new word reference answer after the dictation is finished, thereby effectively improving learning efficiency.
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained on the basis of the embodiments of the present invention shall fall within the protection scope of the present invention.
The following further describes the technical solution of the present invention by way of an embodiment. Fig. 1 is a schematic diagram of an embodiment of the dictation learning method in the embodiment of the present application; the method may include:
101. and responding to the operation of the first user entering the dictation page, and displaying a dictation scene picture.
Optionally, before the displaying the dictation scene picture in response to the first user entering the dictation page, the method further includes: acquiring related information of the new word dictation; the related information of the new word dictation content comprises: basic information of new word dictation, new word dictation audio and new word dictation original text, wherein the new word dictation original text comprises the new word reference answer; wherein the basic information includes: new words dictation title, sale price, operation platform, label, first big picture and second small picture.
For example, basic information of new word dictation may be input at the editing end (which may also be understood as a server). Fig. 2A is a schematic diagram of basic information of new word dictation in the embodiment of the present application. New word dictation audio is then acquired; optionally, batch acquisition is supported. After the audio is first acquired, the user is guided to add the new word dictation original text, so that parents can conveniently check the dictation. Fig. 2B is a schematic diagram of an added new word dictation original text in the embodiment of the present application.
Optionally, after the information related to the new word dictation is obtained, operations such as setting a new word dictation catalog, modifying the catalog in batches, and setting a single audio for trial listening may be performed. Fig. 2C is a schematic diagram of a new word dictation catalog in the embodiment of the present application.
It should be noted that, after the editing end obtains the information related to the new word dictation and receives an access request sent by a client (i.e., the electronic device in the present application), the editing end may send the information related to the new word dictation to the client. Alternatively, the editing end may actively push the information related to the new word dictation to the client.
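For illustration, the related information described above might be modeled and fetched by the client as follows; the field names and the endpoint are assumptions made for this sketch, not data formats defined by this application.

// Assumed shape of the new word dictation information kept by the editing end (server).
interface DictationBasicInfo {
  title: string;        // new word dictation title
  salePrice: number;    // sale price
  platform: string;     // operation platform
  label: string;        // label
  largePicture: string; // first (large) picture
  smallPicture: string; // second (small) picture
}

interface DictationInfo {
  basicInfo: DictationBasicInfo;
  audioUrls: string[];    // new word dictation audio, acquired singly or in batch
  originalText: string[]; // new word dictation original text, containing the reference answers
}

// The editing end may return this on request from the client, or push it actively.
async function fetchDictationInfo(dictationId: string): Promise<DictationInfo> {
  const resp = await fetch(`/api/dictations/${dictationId}`); // assumed endpoint
  return (await resp.json()) as DictationInfo;
}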
Optionally, the displaying a dictation scene picture in response to the first user entering the dictation page may include:
responding to an operation of the first user scanning a two-dimensional code to enter the dictation page, and displaying a dictation scene picture; or, alternatively,
responding to an operation of the first user on the dictation component, and displaying a dictation scene picture.
102. Responding to the operation that the first user starts dictation, and playing new word dictation audio on the dictation scene picture.
Optionally, the playing a new word dictation audio on the dictation scene screen in response to the operation of starting dictation by the first user may include: displaying a payment component on the dictation screen; responding to the payment operation of the first user on the payment component, and displaying a new word dictation content component; responding to the operation of the first user for starting dictation of the new word dictation content component, and playing new word dictation audio on the dictation scene picture.
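A minimal TypeScript sketch of this payment-gated start is given below, assuming hypothetical UI helpers passed in by the caller; it is not the actual interface of the claimed device.

// Illustrative payment-gated flow: payment component -> content component -> audio playback.
async function startDictationWithPayment(ui: {
  showPaymentComponent(): Promise<boolean>; // resolves true once the first user has paid
  showDictationContentComponent(): void;    // displays the new word dictation content component
  onStartDictation(cb: () => void): void;   // "start dictation" operation on the content component
  playDictationAudio(): Promise<void>;      // plays new word dictation audio on the scene picture
}): Promise<void> {
  const paid = await ui.showPaymentComponent(); // display the payment component on the dictation screen
  if (!paid) return;                            // without payment, the content component is not shown
  ui.showDictationContentComponent();
  ui.onStartDictation(() => void ui.playDictationAudio());
}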
Optionally, the audio of the first preset duration of each new word dictation audio is trial-listening audio, and the audio of the remaining duration is paid audio; or, alternatively,
the first preset number of audios among the new word dictation audios are trial-listening audios, and the remaining audios are paid audios.
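The trial/paid split can be checked with a small helper like the one below; the policy fields (a preset duration per audio, or a preset number of audios) are assumed configuration values for this sketch.

interface TrialPolicy {
  trialSeconds?: number; // first preset duration of each audio that is free to listen to
  trialCount?: number;   // first preset number of audios that are free to listen to
}

function isTrialPlayback(policy: TrialPolicy, audioIndex: number, positionSeconds: number): boolean {
  if (policy.trialCount !== undefined) {
    return audioIndex < policy.trialCount;        // whole audios before the cutoff are trial audios
  }
  if (policy.trialSeconds !== undefined) {
    return positionSeconds < policy.trialSeconds; // only the first part of each audio is free
  }
  return false; // no trial configured: all audio is paid audio
}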
103. And after the new word dictation audio is played, displaying a new word reference answer, wherein the new word reference answer is used for the first user to check the new word dictation content.
Illustratively, the reader enters the dictation homepage of the client, namely the dictation scene picture. By default, dictation starts from the first audio; if there are multiple audios, a directory entry is displayed, and the reader can switch audios by clicking the directory. Fig. 3A is a schematic diagram of a component for displaying dictation content in an embodiment of the present application. Fig. 3B is a schematic diagram of switching directories in the embodiment of the present application.
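The directory behaviour just described (default to the first audio, show a directory only when there are multiple audios, switch on click) can be sketched as follows; the class and callback names are assumptions.

class DictationDirectory {
  private current = 0; // dictation defaults to the first audio
  constructor(private audioTitles: string[], private onSwitch: (index: number) => void) {}

  currentIndex(): number {
    return this.current;
  }

  hasDirectory(): boolean {
    return this.audioTitles.length > 1; // a directory entry is displayed only for multiple audios
  }

  switchTo(index: number): void {
    if (index >= 0 && index < this.audioTitles.length) {
      this.current = index;
      this.onSwitch(index); // reload the dictation scene picture with the selected audio
    }
  }
}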
The reader clicks to start dictation, the audio starts to play, and the reader writes down the new words read out in the audio. When playback reaches the last 10 seconds, a "finished writing, view answer" button appears, and the original text of the new words can be checked by clicking it. Fig. 3C is a schematic diagram of dictation playing in the embodiment of the present application. Fig. 3D is a schematic diagram of displaying the new word reference answer in the embodiment of the present application.
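The last-10-seconds answer button can be wired up with the standard HTMLAudioElement API roughly as below; the 10-second threshold mirrors the behaviour described above, while the function and variable names are assumptions.

function wireAnswerButton(audio: HTMLAudioElement, answerButton: HTMLElement, answerPanel: HTMLElement): void {
  answerButton.hidden = true;
  answerPanel.hidden = true;
  audio.addEventListener("timeupdate", () => {
    const remaining = audio.duration - audio.currentTime;
    if (Number.isFinite(remaining) && remaining <= 10) {
      answerButton.hidden = false; // show the button when 10 seconds or less remain
    }
  });
  answerButton.addEventListener("click", () => {
    answerPanel.hidden = false;    // clicking shows the new word original text (reference answer)
  });
}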
If points-based charging is preset, clicking to start dictation jumps to a charging interface; if a certain audio is set for trial listening, that audio supports trial listening, and the reader is guided to pay when moving on to the next audio. Fig. 3E is a schematic diagram of displaying the payment component in the embodiment of the present application. Fig. 3F is a schematic diagram of making payment in the embodiment of the present application.
Optionally, after returning from the payment interface, the page continues to display the title of the new word dictation audio.
Optionally, the guide text for the directory reads "dictation directory", since other wording would be too limiting.
In the embodiment of the application, a dictation scene picture is displayed in response to the operation of a first user entering a dictation page; responding to the operation of starting dictation of the first user, and playing new word dictation audio on the dictation scene picture; and after the new word dictation audio is played, displaying a new word reference answer, wherein the new word reference answer is used for the first user to check the new word dictation content. The user can listen and write according to the played new word dictation audio, and can display the new word reference answer after the dictation is finished, so that the learning efficiency is effectively improved.
As shown in fig. 4, a schematic diagram of an embodiment of an electronic device provided in an embodiment of the present application may include:
the display module 401 is configured to display a dictation scene picture in response to an operation of a first user entering a dictation page;
a processing module 402, configured to play a new word dictation audio on the dictation scene picture in response to an operation of starting dictation by the first user;
the display module 401 is further configured to display a new word reference answer after the new word dictation audio is played, where the new word reference answer is used for the first user to check new word dictation content.
Optionally, the display module 401 is further configured to display a payment component on the dictation screen; responding to the payment operation of the first user on the payment component, and displaying a new word dictation content component;
the processing module 402 is specifically configured to respond to an operation that the first user starts dictation on the new word dictation content component, and play a new word dictation audio on the dictation scene picture.
Optionally, the processing module 402 is further configured to obtain information related to the new word dictation;
the related information of the new word dictation content comprises: basic information of new word dictation, new word dictation audio and new word dictation original text, wherein the new word dictation original text comprises the new word reference answer; wherein the basic information includes: new words dictation title, sale price, operation platform, label, first big picture and second small picture.
Optionally, the audio of the first preset duration of each new word dictation audio is trial-listening audio, and the audio of the remaining duration is paid audio; or, alternatively,
the first preset number of audios among the new word dictation audios are trial-listening audios, and the remaining audios are paid audios.
Optionally, the display module 401 is specifically configured to respond to an operation of the first user scanning a two-dimensional code to enter the dictation page, and display a dictation scene picture; or, alternatively,
the display module 401 is specifically configured to display a dictation scene picture in response to an operation of the first user on the dictation component.
As shown in fig. 5, which is a schematic diagram of another embodiment of the electronic device in the embodiment of the present invention, the method may include:
fig. 5 is a block diagram illustrating a partial structure of a mobile phone related to an electronic device provided by an embodiment of the present invention. Referring to fig. 5, the handset includes: radio Frequency (RF) circuitry 510, memory 520, input unit 530, display unit 540, sensor 550, audio circuitry 560, wireless fidelity (Wi-Fi) module 570, processor 580, and power supply 590. Those skilled in the art will appreciate that the handset configuration shown in fig. 5 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 5:
RF circuit 510 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, after downlink information of a base station is received, it is delivered to processor 580 for processing, and uplink data is transmitted to the base station. In general, RF circuit 510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuit 510 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 520 may be used to store software programs and modules, and the processor 580 executes various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 520. The memory 520 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 520 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 530 may include a touch panel 531 and other input devices 532. The touch panel 531, also called a touch screen, can collect touch operations of a user on or near the touch panel 531 (for example, operations of the user on or near the touch panel 531 by using any suitable object or accessory such as a finger or a stylus pen), and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 531 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 580, and can receive and execute commands sent by the processor 580. In addition, the touch panel 531 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 530 may include other input devices 532 in addition to the touch panel 531. In particular, other input devices 532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 540 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 540 may include a display panel 541, and optionally, the display panel 541 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 531 may cover the display panel 541; when the touch panel 531 detects a touch operation on or near it, the touch operation is transmitted to the processor 580 to determine the type of the touch event, and then the processor 580 provides a corresponding visual output on the display panel 541 according to the type of the touch event. Although the touch panel 531 and the display panel 541 are shown as two separate components in fig. 5 to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 531 and the display panel 541 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 550, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 541 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 541 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
Audio circuitry 560, speaker 561, and microphone 562 may provide an audio interface between a user and the mobile phone. The audio circuit 560 may transmit the electrical signal converted from the received audio data to the speaker 561, and the speaker 561 converts the electrical signal into a sound signal for output; on the other hand, the microphone 562 converts the collected sound signals into electrical signals, which are received by the audio circuit 560 and converted into audio data; the audio data is then output to the processor 580 for processing and then sent through the RF circuit 510 to, for example, another mobile phone, or output to the memory 520 for further processing.
Wi-Fi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the Wi-Fi module 570, and provides wireless broadband internet access for the user. Although fig. 5 shows the Wi-Fi module 570, it is understood that it does not belong to the essential constitution of the handset and can be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 580 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 520 and calling data stored in the memory 520, thereby performing overall monitoring of the mobile phone. Alternatively, processor 580 may include one or more processing units; preferably, the processor 580 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 580.
The handset also includes a power supply 590 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 580 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In this embodiment of the present invention, the display unit 540 is configured to display a dictation scene picture in response to an operation of a first user entering a dictation page;
a processor 580, configured to play a new word dictation audio on the dictation scene screen in response to an operation of the first user starting dictation;
the display unit 540 is further configured to display a new word reference answer after the new word dictation audio is played, where the new word reference answer is used for the first user to check new word dictation content.
Optionally, the display unit 540 is further configured to display a payment component on the dictation screen; responding to the payment operation of the first user on the payment component, and displaying a new word dictation content component;
the processor 580 is specifically configured to play a new word dictation audio on the dictation scene screen in response to the first user starting dictation operation on the new word dictation content component.
Optionally, the processor 580 is further configured to obtain information related to the new word dictation;
the related information of the new word dictation content comprises: basic information of new word dictation, new word dictation audio and new word dictation original text, wherein the new word dictation original text comprises the new word reference answer; wherein the basic information includes: new words dictation title, sale price, operation platform, label, first big picture and second small picture.
Optionally, the audio of the first preset duration of each new word dictation audio is trial-listening audio, and the audio of the remaining duration is paid audio; or, alternatively,
the first preset number of audios among the new word dictation audios are trial-listening audios, and the remaining audios are paid audios.
Optionally, the display unit 540 is specifically configured to respond to an operation of the first user scanning a two-dimensional code to enter the dictation page, and display a dictation scene picture; or, alternatively,
the display unit 540 is specifically configured to display a dictation scene picture in response to an operation of the first user on the dictation component.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of dictation learning, comprising:
responding to the operation of entering the dictation page by the first user, and displaying a dictation scene picture;
responding to the operation of starting dictation of the first user, and playing new word dictation audio on the dictation scene picture;
and after the new word dictation audio is played, displaying a new word reference answer, wherein the new word reference answer is used for the first user to check the new word dictation content.
2. The method according to claim 1, wherein the playing the new word dictation audio on the dictation scene screen in response to the first user initiating the dictation operation comprises:
displaying a payment component on the dictation screen;
responding to the payment operation of the first user on the payment component, and displaying a new word dictation content component;
responding to the operation of the first user for starting dictation of the new word dictation content component, and playing new word dictation audio on the dictation scene picture.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring related information of the new word dictation;
the related information of the new word dictation content comprises: basic information of new word dictation, new word dictation audio and new word dictation original text, wherein the new word dictation original text comprises the new word reference answer; wherein the basic information includes: new words dictation title, sale price, operation platform, label, first big picture and second small picture.
4. The method according to claim 3, wherein the audio of the first preset duration of each new word dictation audio is trial-listening audio, and the audio of the remaining duration is paid audio; or, alternatively,
the first preset number of audios among the new word dictation audios are trial-listening audios, and the remaining audios are paid audios.
5. The method according to claim 1 or 2, wherein the displaying a dictation scene in response to the first user's operation entering a dictation page comprises:
responding to an operation of the first user scanning a two-dimensional code to enter the dictation page, and displaying a dictation scene picture; or, alternatively,
responding to an operation of the first user on the dictation component, and displaying a dictation scene picture.
6. An electronic device, comprising:
the display module is used for responding to the operation of the first user entering the dictation page and displaying a dictation scene picture;
the processing module is used for responding to the operation of starting dictation of the first user and playing new word dictation audio on the dictation scene picture;
the display module is further configured to display a new word reference answer after the new word dictation audio is played, where the new word reference answer is used for the first user to check new word dictation content.
7. The electronic device of claim 6,
the display module is also used for displaying a payment component on the dictation picture; responding to the payment operation of the first user on the payment component, and displaying a new word dictation content component;
the processing module is specifically configured to respond to an operation of the first user to start dictation of the new word dictation content component, and play a new word dictation audio on the dictation scene picture;
or, alternatively,
the processing module is further used for acquiring related information of the new word dictation;
the related information of the new word dictation content comprises: basic information of new word dictation, new word dictation audio and new word dictation original text, wherein the new word dictation original text comprises the new word reference answer; wherein the basic information includes: new words dictation title, sale price, operation platform, label, first big picture and second small picture.
8. The electronic device according to claim 6, wherein the audio of the first preset duration of each new word dictation audio is trial-listening audio, and the audio of the remaining duration is paid audio; or, alternatively,
the first preset number of audios among the new word dictation audios are trial-listening audios, and the remaining audios are paid audios;
or, alternatively,
the display module is specifically configured to respond to an operation of the first user scanning a two-dimensional code to enter the dictation page and display a dictation scene picture; or, alternatively,
the display module is specifically configured to respond to an operation of the first user on the dictation component and display a dictation scene picture.
9. An electronic device, comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory for performing the method of any of claims 1-5.
10. A computer-readable storage medium comprising instructions that, when executed on a processor, cause the processor to perform the method of any of claims 1-5.
CN202110704871.1A 2021-06-24 2021-06-24 Dictation learning method, electronic equipment and storage medium Pending CN114168045A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110704871.1A CN114168045A (en) 2021-06-24 2021-06-24 Dictation learning method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110704871.1A CN114168045A (en) 2021-06-24 2021-06-24 Dictation learning method, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114168045A true CN114168045A (en) 2022-03-11

Family

ID=80476404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110704871.1A Pending CN114168045A (en) 2021-06-24 2021-06-24 Dictation learning method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114168045A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001290413A (en) * 2000-04-06 2001-10-19 Michiko Takesue Training method for improving foreign language ability
CN108429927A (en) * 2018-02-08 2018-08-21 聚好看科技股份有限公司 The method of virtual goods information in smart television and search user interface
CN111079486A (en) * 2019-05-17 2020-04-28 广东小天才科技有限公司 Method for starting dictation detection and electronic equipment
CN111524045A (en) * 2020-04-13 2020-08-11 北京猿力教育科技有限公司 Dictation method and device

Similar Documents

Publication Publication Date Title
CN111294638B (en) Method, device, terminal and storage medium for realizing video interaction
WO2016169465A1 (en) Method, apparatus and system for displaying screen information
EP3015978A1 (en) Gesture-based conversation processing method, apparatus, and terminal device
CN107193664B (en) Message display method and device and mobile terminal
CN108156508B (en) Barrage information processing method and device, mobile terminal, server and system
CN103347003B (en) A kind of Voice over Internet method, Apparatus and system
CN104426919A (en) Page sharing method, device and system
CN103313139A (en) History display method and device and electronic device
CN106293738B (en) Expression image updating method and device
CN107103074B (en) Processing method of shared information and mobile terminal
CN106791916B (en) Method, device and system for recommending audio data
CN104954159A (en) Network information statistics method and device
CN104809055B (en) Application program testing method and device based on cloud platform
JP6915074B2 (en) Message notification method and terminal
CN106228994B (en) A kind of method and apparatus detecting sound quality
CN106339402B (en) Method, device and system for pushing recommended content
CN105159655B (en) Behavior event playing method and device
CN110865743A (en) Task management method and terminal equipment
CN108804434B (en) Message query method, server and terminal equipment
CN103079047B (en) A kind of method of parameter adjustment and terminal
US20140380198A1 (en) Method, device, and terminal apparatus for processing session based on gesture
CN108235047B (en) Audio playing method of live broadcast room and anchor terminal equipment
CN112101215A (en) Face input method, terminal equipment and computer readable storage medium
CN114168045A (en) Dictation learning method, electronic equipment and storage medium
CN107800880B (en) Method, device, mobile terminal and storage medium for displaying number of unread messages

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination