CN111930453A - Dictation interaction method and device and electronic equipment - Google Patents

Dictation interaction method and device and electronic equipment

Info

Publication number
CN111930453A
CN111930453A · Application CN202010714796.2A
Authority
CN
China
Prior art keywords
dictation
user
target
terminal
configuration interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010714796.2A
Other languages
Chinese (zh)
Inventor
周骅
张炳伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010714796.2A
Publication of CN111930453A
Priority to PCT/CN2021/105611 (WO2022017203A1)
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • G06F9/4451User profiles; Roaming
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/08Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present disclosure disclose a dictation interaction method, a dictation interaction apparatus, and an electronic device. In one embodiment, the method comprises: displaying a configuration interface and determining a target dictation material and a dictation participating user through the configuration interface, wherein the configuration interface is used by a dictation initiating user to configure the target dictation material and a dictation participating user identifier; and sending a voice file corresponding to the target dictation material to a second terminal corresponding to the dictation participating user identifier, wherein the dictation participating user performs dictation based on the voice file. In this way, the dictation initiating user can assign dictation tasks online, and the dictation process is not limited by space.

Description

Dictation interaction method and device and electronic equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a dictation interaction method, apparatus, and electronic device.
Background
Dictation is an important activity in the learning process, especially in language learning. In traditional dictation, a teacher reads content aloud and students write down what they hear. Such dictation is strongly constrained in both space and time.
Disclosure of Invention
This Summary is provided to introduce concepts in a simplified form that are further described in the Detailed Description below. It is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides a dictation interaction method applied to a first terminal. The method comprises: displaying a configuration interface and determining a target dictation material and a dictation participating user through the configuration interface, wherein the configuration interface is used by a dictation initiating user to configure the target dictation material and a dictation participating user identifier; and sending a voice file corresponding to the target dictation material to a second terminal corresponding to the dictation participating user identifier, wherein the dictation participating user performs dictation based on the voice file.
In a second aspect, an embodiment of the present disclosure provides a dictation interaction apparatus applied to a first terminal. The apparatus comprises: a presentation unit configured to display a configuration interface and determine a target dictation material and a dictation participating user through the configuration interface, wherein the configuration interface is used by a dictation initiating user to configure the target dictation material and a dictation participating user identifier; and a sending unit configured to send a voice file corresponding to the target dictation material to a second terminal corresponding to the dictation participating user identifier, wherein the dictation participating user performs dictation based on the voice file.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the dictation interaction method of the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the steps of the dictation interaction method as described in the first aspect.
According to the dictation interaction method, apparatus and electronic device provided by the embodiments of the present disclosure, a configuration interface is first displayed so that a dictation initiating user can configure a target dictation material and a dictation participating user identifier; the voice file corresponding to the target dictation material is then sent to the dictation participating user. In this way, the dictation initiating user can assign dictation tasks online, and the dictation process is not limited by space.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow diagram of one embodiment of a dictation interaction method according to the present disclosure;
FIG. 2 is a schematic diagram of an exemplary configuration interface according to the present disclosure;
FIG. 3 is an exemplary diagram of a second terminal displaying notification information according to the present disclosure;
FIG. 4 is an exemplary diagram of a first terminal displaying dictation progress information according to the present disclosure;
FIG. 5 is an exemplary schematic diagram of a first terminal of the present disclosure correcting dictation content image information;
FIG. 6 is an exemplary diagram of a second terminal displaying a correction result according to the present disclosure;
FIG. 7 is an exemplary diagram illustrating interpretation information for a dictation item in accordance with the present disclosure;
FIG. 8 is a schematic block diagram of one embodiment of a dictation interaction apparatus in accordance with the present disclosure;
FIG. 9 is an exemplary system architecture to which the dictation interaction method of one embodiment of the present disclosure may be applied;
FIG. 10 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that references to "a" and "an" in this disclosure are illustrative rather than limiting, and those skilled in the art will understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring to fig. 1, a flow diagram of one embodiment of a dictation interaction method in accordance with the present disclosure is shown. The dictation interaction method as shown in fig. 1 comprises the following steps:
Step 101: displaying a configuration interface, and determining target dictation materials and dictation participating users through the configuration interface.
In this embodiment, an execution subject (e.g., the first terminal) of the dictation interaction method may present the configuration interface.
In this embodiment, the configuration interface may be used by the dictation initiating user to configure the target dictation material and the dictation participating user identifier.
Here, the dictation initiating user may be a user initiating dictation, in other words, a user arranging a dictation task.
Here, the target dictation material may be content to be dictated. The specific form of the target dictation material is not limited, and the target dictation material may be, for example, a word, a phrase, a sentence, a paragraph, or the like.
Here, the dictation participating user identifier may identify a user who listens to the target dictation material and writes down what is heard. The number of dictation participating user identifiers may be one or at least two, which is not limited herein.
As an example, the dictation initiating user may be a teacher and the dictation participating users may be students.
In this embodiment, the configuration interface may be one interface or a plurality of interfaces. In other words, configuring the target dictation material and configuring the dictation participation user identification may be performed in the same interface or in different interfaces. If performed in different interfaces, these different interfaces may be collectively referred to as a configuration interface.
Step 102: sending the voice file corresponding to the target dictation material to the terminal device corresponding to the dictation participating user identifier.
In this embodiment, the execution body may send the voice file corresponding to the target dictation material to the terminal device corresponding to the dictation participating user identifier.
In this embodiment, the dictation participant user indicated by the dictation participant user identifier may perform dictation based on the voice file.
In some application scenarios, the configuration interface may be provided with a dictation initiation confirmation control, and the dictation initiation user may trigger the dictation initiation confirmation control after configuring the target dictation material and the dictation participation user identifier. Then, the execution body may send the voice file corresponding to the target dictation material to the terminal device corresponding to the dictation participant user based on the configuration of the dictation initiating user.
In some application scenarios, the voice file may be sent directly by the execution body or indirectly through another device. In some implementations, the execution body may send the target dictation material to a server, and the server may retrieve an already generated voice file or generate the voice file from the target dictation material.
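For illustration only, the following Python sketch shows one possible shape of this indirect path: the first terminal uploads the configured task to a server, and the server retrieves or synthesizes the voice file before delivering it to each second terminal. The endpoint path, the payload fields and the use of the requests library are assumptions made for the sketch and are not part of the disclosed embodiments.

import requests

SERVER_URL = "https://example.com/api"  # placeholder server address, assumed for illustration

def submit_dictation_task(target_dictation_material, participant_ids, mode_info):
    # Called on the first terminal after the dictation initiating user confirms the configuration.
    payload = {
        "dictation_items": list(target_dictation_material),  # e.g. ["green", "silk", "tao"]
        "participant_ids": list(participant_ids),            # dictation participating user identifiers
        "mode_info": mode_info,                               # word reading times, interval, order
    }
    # The server looks up an already generated voice file or synthesizes one,
    # then pushes it to the second terminal of every listed participant.
    response = requests.post(f"{SERVER_URL}/dictation/tasks", json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["task_id"]  # hypothetical field returned by the server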
Here, the terminal device corresponding to the dictation participating user identifier (which may be referred to as the second terminal device in this application) may play the voice file, and the dictation participating user listens to the played voice to perform dictation.
In some optional implementations, the second terminal device may start playback upon receiving the voice file, or the dictation participating user may choose when playback starts.
Referring to FIG. 2, an exemplary schematic diagram of a configuration interface is shown. In fig. 2, a lesson titled "Two Ancient Poems" can be selected; when the literacy table of "Two Ancient Poems" is selected, the three dictation items "green", "silk" and "tao" are automatically presented.
It should be noted that, in the dictation interaction method provided in this embodiment, a configuration interface is first displayed so that the dictation initiating user configures the target dictation material and the dictation participating user identifier; the voice file corresponding to the target dictation material is then sent to the dictation participating user. In this way, the dictation initiating user can assign dictation tasks online, and the dictation process is not limited by space.
In some embodiments, the configuration interface may include at least one of, but is not limited to: candidate dictation content identification and dictation content supplemental area.
Here, the candidate dictation content identification may indicate a pre-stored candidate dictation content.
In some application scenarios, the dictation initiating user may determine the target dictation material by selecting the candidate dictation content identifier. Optionally, the executing body may determine, in response to detecting a first selecting operation for the candidate dictation content identifier, the candidate dictation content indicated by the candidate dictation identifier targeted by the first selecting operation as the target dictation material.
Here, the specific content of the first selection operation is not limited herein.
In some application scenarios, the dictation initiating user may implement configuring the target dictation material by performing a series of selection operations.
In some application scenarios, the dictation initiating user described above may select a subject, a textbook and a lesson in sequence. After the dictation initiating user selects a lesson identifier, words pre-associated with that lesson identifier can be used as candidate dictation contents, from which the target dictation material is selected.
Referring to fig. 2, the three dictation item identifiers "green", "silk" and "tao" may be understood as candidate dictation content identifiers. The dictation initiating user can select the target dictation material from these three dictation items.
It should be noted that, by presetting candidate dictation contents and displaying candidate dictation content identifiers, it is convenient for a dictation initiating user to arrange a dictation task.
In some embodiments, the configuration interface includes a dictation content supplement area. Step 101 may then include: in response to acquiring dictation supplementary content input in the dictation content supplement area, determining the dictation supplementary content as the target dictation material.
In some application scenarios, the dictation initiating user may input the desired dictation material in the dictation content supplement area and then trigger a confirmation control to confirm the input as the content finally entered in the dictation content supplement area. The execution body may then use the dictation material entered in the dictation content supplement area as the target dictation material.
It should be noted that, by setting the dictation content supplement area, the dictation initiating user can flexibly arrange the dictation task.
Referring to fig. 2, the box below "select dictation" may be understood as the dictation content supplement area. The word "scissors" in the dictation content supplement area may be the dictation supplementary content.
In some embodiments, the configuration interface may further include a dictation mode information configuration area, and the dictation mode information configured there may indicate a dictation mode.
In some embodiments, dictation mode information may include, but is not limited to, at least one of: repeated word reading times, word reading interval duration and dictation sequence indication information.
Here, the number of repeated word reads may indicate the number of repeated plays of a single dictation item.
Here, the word reading interval duration may indicate the duration between the end of playing one dictation item and the start of playing the next dictation item.
Here, the dictation sequence indication information may indicate a playing sequence of the dictation items in the dictation material. As an example, the playing sequence may be random playing or sequential playing.
In some application scenarios, if the dictation initiating user sets out-of-order play, the same shuffled order can be used for all dictation participating users. The order of the dictation items is then consistent across the dictation content images received by the dictation initiating user, which improves the dictation initiating user's correction efficiency.
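A minimal Python sketch of one way to obtain the same out-of-order sequence on every second terminal is given below; it assumes a task identifier shared by all participants and is illustrative only.

import random

def shuffled_items(dictation_items, task_id):
    # Seeding the shuffle with the shared task identifier makes the "out-of-order"
    # sequence identical on every second terminal.
    order = list(dictation_items)
    random.Random(task_id).shuffle(order)
    return order

# Both participants derive the same order for the same task.
items = ["green", "silk", "tao"]
assert shuffled_items(items, task_id=42) == shuffled_items(items, task_id=42)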
Here, the method may include: acquiring dictation mode information configured by the dictation initiating user for the target dictation material through the configuration interface.
Here, the terminal device may play the voice file in a dictation mode indicated by the dictation mode information.
It can be understood that different dictation mode information may cause the difficulty of dictation tasks to change. As an example, the greater the number of repeated word reads, the lower the dictation difficulty; the longer the word reading interval is, the lower the dictation difficulty is; sequential dictation is less difficult than out-of-order dictation.
It should be noted that by setting the dictation mode information, the dictation initiating user can flexibly set the dictation mode information conforming to the actual application for the dictation task, that is, flexibly set the difficulty of the dictation task.
Please refer to fig. 2, which shows a schematic view of a scenario for configuring dictation mode information. Word reading intervals, dictation sequence and word reading times can be configured as dictation mode information.
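For illustration only, the following Python sketch shows one possible representation of the dictation mode information and how a second terminal might honor it during playback; the names (DictationMode, play_audio) are assumptions made for the sketch, not terms of the disclosure.

import random
import time
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DictationMode:
    repeat_count: int = 2          # times each dictation item is read aloud
    interval_seconds: float = 8.0  # word reading interval between items
    out_of_order: bool = False     # whether to play items in a shuffled order
    shuffle_seed: int = 0          # shared seed so all participants get the same order

def run_dictation(voice_clips: Dict[str, bytes], mode: DictationMode,
                  play_audio: Callable[[bytes], None]) -> None:
    # voice_clips maps each dictation item to its audio; play_audio blocks until playback ends.
    items = list(voice_clips)
    if mode.out_of_order:
        random.Random(mode.shuffle_seed).shuffle(items)
    for item in items:
        for _ in range(mode.repeat_count):   # repeated word reading
            play_audio(voice_clips[item])
        time.sleep(mode.interval_seconds)    # leave time for the participant to write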
In some embodiments, the target dictation material includes at least one dictation item. A dictation item may be a word, a sentence or a paragraph. The voice file corresponding to the target dictation material can be determined in the following way: determining whether a voice file corresponding to each dictation item in the target dictation material has been generated; and in response to determining that the voice file corresponding to a dictation item has not been generated, synthesizing the voice file corresponding to that dictation item. The synthesized voice file may then be saved as part of the voice file corresponding to the target dictation material.
The step of determining the voice file corresponding to the target dictation material may be performed by the execution body, or by a server supporting the execution body.
It should be noted that, by detecting whether each dictation item has a corresponding voice file and synthesizing one if it does not, the workload of the dictation initiating user can be reduced, the steps for arranging a dictation task are reduced, and the efficiency of arranging dictation tasks is improved.
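A hedged Python sketch of this check-and-synthesize step is given below; the cache layout and the synthesize_speech placeholder stand in for whatever storage and text-to-speech backend is actually used, and are not part of the disclosed embodiments.

import os

VOICE_CACHE_DIR = "voice_cache"  # hypothetical location of previously generated clips

def ensure_voice_files(dictation_items, synthesize_speech):
    # Returns a mapping from each dictation item to an audio file path,
    # synthesizing only the items that have no stored voice file yet.
    os.makedirs(VOICE_CACHE_DIR, exist_ok=True)
    voice_files = {}
    for item in dictation_items:
        path = os.path.join(VOICE_CACHE_DIR, f"{item}.mp3")
        if not os.path.exists(path):               # no voice file generated for this item yet
            audio_bytes = synthesize_speech(item)  # placeholder for any text-to-speech backend
            with open(path, "wb") as f:
                f.write(audio_bytes)
        voice_files[item] = path
    return voice_files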
In some embodiments, the second terminal (the terminal device corresponding to the dictation participation user identifier) may receive the dictation notification and may present the dictation notification. The dictation notification is used for notifying the dictation participating users of corresponding dictation tasks.
In some application scenarios, please refer to fig. 3, which illustrates a dictation notification presented by a terminal. In fig. 3, the dictation notification may include text such as "The teacher has assigned a dictation task for you" and a prompt that the lesson "Two Ancient Poems" needs to be dictated and should be completed soon; the dictation notification may also include a confirmation control (labeled "view immediately" in fig. 3).
In some embodiments, the second terminal may present a dictation notification and initiate a dictation process in response to detecting a predefined dictation start operation.
Here, the specific content of the predefined dictation start operation may be set according to an actual application scenario, and is not limited herein.
In some embodiments, the dictation start timing of the second terminal may be when the dictation notification is received, or may be a dictation start time configured by the dictation initiating user, or may be a time selected by the dictation participating user.
As an example, the predefined dictation start operation may include a trigger operation for a dictation notification. In other words, triggering the dictation notification may begin the dictation process.
As an example, after the dictation notification is triggered, a dictation start confirmation control may be presented. The dictation participating user triggering this dictation start confirmation control can serve as the predefined dictation start operation.
As an example, a dictation task viewing portal may be provided in the application, and the user may view his or her dictation tasks from this portal. An incomplete dictation task can be associated with a dictation start confirmation control, and the dictation participating user triggering that control can serve as the predefined dictation start operation. In other words, when the dictation participating user triggers an incomplete dictation task, that dictation task can be started.
In some embodiments, the second terminal may play the voice file and capture user images of the dictation participating user in response to the predefined dictation start operation.
Here, the user image of the dictation-participating user may include an image of any human body part of the dictation-participating user during the dictation process. As an example, an image of the upper body of a dictation participant user may be captured, as well as an image of the hand of the dictation participant user writing.
In some embodiments, the dictation participating user may write on the second terminal, or may write with pen and paper. The second terminal may include a camera, which may be aimed directly at the handwriting position of the dictation participating user to capture images, or may capture the handwriting position through an optical path arrangement (for example, a reflective mirror). That is, the second terminal can receive the writing image of the dictation participating user through the camera.
In some embodiments, the second terminal may further receive a writing image of the dictation participant through the display screen.
In some embodiments, if the dictation participant user writes on the second terminal, the second terminal may obtain the writing process of the dictation participant user through a screen recording.
It should be noted that, because the second terminal receives the writing content input by the user on the display screen, the user does not need to prepare separate writing tools for dictation, which reduces the constraints that dictation tools place on carrying out dictation. The dictation participating user can therefore start dictation anytime and anywhere, which improves dictation efficiency.
In some embodiments, the second terminal may transmit the user image of the dictation participation user to a preset electronic device (e.g., a server) in real time. And the second terminal can feed back the dictation progress information to the first terminal in real time. If the first terminal detects the operation of acquiring the user image, the first terminal may pull the stream from the preset electronic device to display the user image (or video).
In some embodiments, the second terminal may collect the dictation content image in response to determining that the dictation is finished.
Here, the specific implementation manner of determining the dictation end may be set according to an actual application scenario, and is not limited herein.
As an example, the second terminal may determine that dictation is ended in response to detecting a trigger operation for a preset dictation ending control.
As an example, the second terminal may determine that dictation is ended in response to a preset dictation duration elapsing from the start of dictation as a starting point. The preset dictation time can be preset according to the target dictation material.
As an example, the second terminal may determine that the dictation is finished in response to a preset time period having elapsed after the target dictation material finishes playing. As an example, the end of dictation may be determined 5 seconds after the target dictation material has finished playing.
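For illustration, the following Python sketch combines the three example end conditions above; the concrete durations and the tracked timestamps are assumptions made for the sketch.

import time

PRESET_DICTATION_SECONDS = 300    # preset dictation duration for the target dictation material
GRACE_AFTER_PLAYBACK_SECONDS = 5  # e.g. 5 seconds after playback finishes

def dictation_finished(started_at, playback_finished_at, end_control_triggered):
    now = time.time()
    if end_control_triggered:                             # the dictation ending control was triggered
        return True
    if now - started_at >= PRESET_DICTATION_SECONDS:      # preset dictation duration has elapsed
        return True
    if playback_finished_at is not None and now - playback_finished_at >= GRACE_AFTER_PLAYBACK_SECONDS:
        return True                                       # grace period after playback has elapsed
    return False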
Here, the second terminal may suspend the image capturing function in response to the dictation end.
Here, the second terminal may present guidance information, for example, "please put the dictation contents in the screen", to guide the dictation participant user to take images of the dictation contents.
In some embodiments, if the dictation participant user writes on the second terminal, the second terminal may obtain a written dictation content image of the dictation participant user.
It should be noted that a dictation content image collected after dictation is determined to be finished is generally clearer than images collected during the dictation process. Using the dictation content image collected after dictation as the basis for the dictation initiating user's correction avoids the reduced correction efficiency and inaccurate correction that an unclear image would cause.
In some application scenarios, the dictation tasks associated with a class may be presented in units of the class. As an example, the Chinese dictation task and the English dictation task of Class 1, Grade 3 may be presented.
In some application scenarios, the dictation tasks initiated by a dictation initiating user may be presented in units of that dictation initiating user. As an example, for Teacher Li, a Chinese dictation task initiated by Teacher Li for Grade 3 and a Chinese dictation task initiated by Teacher Li for Grade 2 may be shown.
In some embodiments, the overall execution progress information of the dictation task may be presented. The overall execution progress information may include at least one of, but is not limited to: the total number of dictation participating users of the dictation task, the number of dictation participating users completing the dictation task, and the number of dictation participating users not completing the dictation task.
In some embodiments, the execution body may present dictation progress information of each dictation participating user.
In some application scenarios, please refer to fig. 4, which illustrates an application scenario in which the execution body presents dictation progress information for each dictation participating user. In fig. 4, for the dictation participating user "Zhang San", the dictation progress information "dictation completed" may be displayed; for the dictation participating user "Liqu", the dictation progress information "dictation 20%" may be displayed; for the dictation participating user "Wang Wu", the dictation progress information "completed" may be displayed; and for the dictation participating user "Song Liu", the dictation progress information "not started" may be displayed.
It should be noted that, by displaying the dictation progress information of the dictation participating users, the dictation initiating user can learn in time how the assigned dictation task is being carried out and, on that basis, remind the dictation participating users who have not started, which improves dictation interaction efficiency.
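For illustration only, the following Python sketch shows one way per-user progress records could be aggregated into the overall execution progress information mentioned above; the status labels are simplified assumptions loosely mirroring fig. 4.

from collections import Counter

def summarize_progress(progress_by_user):
    # progress_by_user maps a dictation participating user identifier to a status
    # string such as "completed", "in progress" or "not started".
    counts = Counter(progress_by_user.values())
    return {
        "total_participants": len(progress_by_user),
        "completed": counts["completed"],
        "not_started": counts["not started"],
        "in_progress": len(progress_by_user) - counts["completed"] - counts["not started"],
    }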
In some embodiments, for a dictation participating user whose dictation is in progress or completed, the execution body may acquire and play the user images captured during dictation. In other words, for such a dictation participating user, the dictation initiating user may view a user image or video of the dictation process.
It should be noted that, through the photographing or video recording function of the second terminal, the user image of the dictation participating user in the dictation process can be recorded. Therefore, the dictation initiating user can check the user image in the dictation process, so that the dictation initiating user can supervise the dictation process in real time or non-real time, and the dictation efficiency is improved.
In some embodiments, there is an association between the user image and the writing image acquired at the same point in time, and the method comprises: displaying the user image and the writing image that have the association relationship.
Optionally, the writing image may be obtained from the user's handwriting with pen and paper, captured by the second terminal; optionally, the writing image may also be handwritten by the user on the display screen of the second terminal and obtained by the second terminal through screen recording or a similar means.
It should be noted that, by displaying the user image and the writing image having the association relationship, the actual dictation process can be displayed to the dictation initiating user. Therefore, the dictation initiating user can acquire more user information in the actual dictation process through the writing image and the user image in the actual dictation process, so that the dictation process is effectively supervised.
In some embodiments, the execution body may present the dictation content image and the target dictation material, and generate a correction result for the dictation content image according to a second selection operation on dictation items in the target dictation material.
It should be noted that displaying the dictation content image and the target dictation material side by side makes it easier for the correcting user (usually the dictation initiating user) to select the incorrectly written parts.
Please refer to fig. 5, which shows a schematic view of the correction process. In fig. 5, Zhang San's dictation content image is shown together with the target dictation material. In Zhang San's work, the item "tao" is written incorrectly as a similar-looking character, so the correcting user can select "tao" (shown shaded) in the target dictation material, thereby obtaining a correction result.
In some embodiments, the method further comprises returning the correction result to the user whose dictation was corrected, wherein the correction result comprises the target dictation material and error item indication information.
Here, the error item indication information may indicate an error item in the dictation content image.
It should be noted that the correction result is fed back online based on the target dictation material and can be returned to the dictation participating user in real time, which improves feedback efficiency and the dictation participating user's learning effect.
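For illustration only, the following Python sketch shows one possible data shape for such a correction result built from the correcting user's selection; the field names are assumptions made for the sketch and are not part of the disclosed embodiments.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CorrectionResult:
    target_dictation_material: List[str]                   # e.g. ["green", "silk", "tao"]
    error_items: List[str] = field(default_factory=list)   # items the correcting user marked as wrong

def build_correction_result(material, selected_error_items):
    # Builds the result returned to the dictation participating user's terminal.
    selected = set(selected_error_items)
    return CorrectionResult(
        target_dictation_material=list(material),
        error_items=[item for item in material if item in selected],
    )

result = build_correction_result(["green", "silk", "tao"], selected_error_items=["tao"])
# The second terminal can render this as the target material with "tao" flagged as an error item.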
Here, in response to detecting a trigger operation on a dictation item, the second terminal acquires interpretation information of the dictation item targeted by the trigger operation and presents the acquired interpretation information.
Here, the interpretation information may be used to interpret the dictation item. The interpretation information may be stored in advance. By way of example, sources of the interpretation information may include, but are not limited to, at least one of: dictionaries, textbooks, etc.
Referring to fig. 6, which shows a schematic view of a scenario in which the second terminal displays a correction result, in fig. 6, Zhang San (i.e., the logged-in user of the second terminal) can be shown his own answer, namely "my answer". The correction result can be displayed in various forms; as an example, it can be displayed by adding error item indication information to the corresponding dictation items of the target dictation material.
Referring to fig. 7, which shows a schematic view of a scenario in which the second terminal presents interpretation information, Zhang San may click on the "tao" shown shaded in fig. 6, and the second terminal may then present the interpretation information of the "tao" shown in fig. 6.
It should be noted that, on the basis of the target dictation material, the dictation items in the target dictation material are associated with the interpretation information, so that the dictation participating user can quickly acquire the detailed information of the dictation items, and the learning effect of the dictation participating user can be effectively consolidated in time.
With further reference to fig. 8, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of a dictation interaction apparatus, which corresponds to the method embodiment shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 8, the dictation interaction apparatus of the present embodiment includes a presentation unit 801 and a sending unit. The presentation unit is used for displaying a configuration interface and determining a target dictation material and a dictation participating user through the configuration interface, wherein the configuration interface is used by a dictation initiating user to configure the target dictation material and a dictation participating user identifier; the sending unit is used for sending the voice file corresponding to the target dictation material to a second terminal corresponding to the dictation participating user identifier, wherein the dictation participating user performs dictation based on the voice file.
In this embodiment, for the specific processing of the presentation unit 801 and the sending unit of the dictation interaction apparatus and the technical effects thereof, reference may be made to the descriptions of step 101 and step 102 in the embodiment corresponding to fig. 1, which are not repeated here.
In some embodiments, the configuration interface comprises at least one of: a candidate dictation content identifier and a dictation content supplement area; and displaying the configuration interface and determining the target dictation material and the dictation participating user through the configuration interface comprises: displaying candidate dictation content identifiers, and in response to detecting a first selection operation on a candidate dictation content identifier, determining the candidate dictation content indicated by the candidate dictation content identifier targeted by the first selection operation as the target dictation material; and in response to acquiring dictation supplementary content input in the dictation content supplement area, determining the dictation supplementary content as the target dictation material.
In some embodiments, the configuration interface includes a dictation mode information configuration area; and the apparatus is further configured to: acquiring dictation mode information configured by a dictation initiating user for the target dictation material through the configuration interface, wherein the terminal equipment plays the voice file in a dictation mode indicated by the dictation mode information, and the dictation mode information comprises at least one of the following information: repeated word reading times, word reading interval duration and dictation sequence indication information.
In some embodiments, the target dictation material comprises at least one dictation item, wherein the voice file corresponding to the target dictation material is determined by: determining whether a voice file corresponding to each dictation item in the target dictation material is generated or not; in response to determining that the dictation item is not generated, synthesizing a voice file corresponding to the dictation item.
In some embodiments, the second terminal presents a dictation notification.
In some embodiments, the second terminal plays the voice file in response to a predefined dictation start operation, and captures user images of dictation participating users.
In some embodiments, the second terminal receives a written image of a dictation participant user through a display screen and/or a camera.
In some embodiments, the second terminal captures a dictation content image in response to determining that dictation is complete.
In some embodiments, the apparatus is further configured to: and displaying the dictation progress information of each dictation participating user.
In some embodiments, the apparatus is further configured to: and displaying the user image in the dictation process aiming at the dictation participating users in the dictation process or after the dictation is finished.
In some embodiments, there is an association between the user image and the written image acquired at the same point in time; and the apparatus is further configured to: and displaying the user image and the writing image with the association relationship.
In some embodiments, the apparatus is further configured to: displaying the dictation content image and the target dictation material; and generating a correction result of the dictation content image according to a second selection operation on the dictation items in the target dictation material, wherein the correction result comprises the target dictation material and error item indication information.
In some embodiments, the second terminal presents the correction result, and in response to detecting the trigger operation for the dictation item, acquires and presents interpretation information of the dictation item for which the trigger operation is directed.
Referring to fig. 9, fig. 9 illustrates an exemplary system architecture to which the dictation interaction method of one embodiment of the present disclosure may be applied.
As shown in fig. 9, the system architecture may include terminal devices 901, 902, 903, a network 904, and a server 905. Network 904 is the medium used to provide communication links between terminal devices 901, 902, 903 and server 905. Network 904 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 901, 902, 903 may interact with a server 905 over a network 904 to receive or send messages or the like. The terminal devices 901, 902, 903 may have various client applications installed thereon, such as a web browser application, a search-type application, and a news-information-type application. The client application in the terminal devices 901, 902, and 903 may receive an instruction of the user, and complete a corresponding function according to the instruction of the user, for example, add corresponding information to the information according to the instruction of the user.
The terminal devices 901, 902, 903 may be hardware or software. When the terminal devices 901, 902, 903 are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III, mpeg compression standard Audio Layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, mpeg compression standard Audio Layer 4), laptop portable computers, desktop computers, and the like. When the terminal devices 901, 902, 903 are software, they can be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 905 may be a server providing various services, for example, receiving an information acquisition request sent by the terminal devices 901, 902, and 903, and acquiring the presentation information corresponding to the information acquisition request in various ways according to the information acquisition request. And the relevant data of the presentation information is sent to the terminal equipment 901, 902, 903.
It should be noted that the dictation interaction method provided by the embodiment of the present disclosure may be executed by a terminal device, and accordingly, the dictation interaction apparatus may be disposed in the terminal devices 901, 902, 903. In addition, the dictation interaction method provided by the embodiment of the present disclosure may also be executed jointly by the terminal device and the server 905, and accordingly, the dictation interaction apparatus may be disposed in the terminal device and the server 905.
It should be understood that the number of terminal devices, networks, and servers in fig. 9 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 10, a schematic diagram of an electronic device (e.g., the terminal device or the server of fig. 9) suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic apparatus may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1001 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage device 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the electronic apparatus 1000 are also stored. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
Generally, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 1007 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 1008 including, for example, magnetic tape, hard disk, and the like; and a communication device 1009. The communication apparatus 1009 may allow the electronic device to perform wireless or wired communication with other devices to exchange data. While fig. 10 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 1009, or installed from the storage means 1008, or installed from the ROM 1002. The computer program, when executed by the processing device 1001, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: display a configuration interface and determine a target dictation material and a dictation participating user through the configuration interface, wherein the configuration interface is used by a dictation initiating user to configure the target dictation material and a dictation participating user identifier; and send the voice file corresponding to the target dictation material to a second terminal corresponding to the dictation participating user identifier, wherein the dictation participating user performs dictation based on the voice file.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself; for example, the presentation unit may also be described as "a unit that presents a configuration interface".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (16)

1. A dictation interaction method, applied to a first terminal, the method comprising:
displaying a configuration interface, and determining a target dictation material and a dictation participating user through the configuration interface, wherein the configuration interface is used by a dictation initiating user to configure the target dictation material and a dictation participating user identifier; and
sending the voice file corresponding to the target dictation material to a second terminal corresponding to the dictation participating user identifier, wherein the dictation participating user performs dictation based on the voice file.
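For readers who want a concrete picture of the first-terminal flow in claim 1 (collect the configuration, then dispatch the voice file), the following Python sketch illustrates one possible realization. It is not the patented implementation; the registry, synthesis, and transport helpers are stubs invented for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DictationConfig:
    """Settings collected from the dictation initiating user via the configuration interface."""
    target_material: List[str]   # dictation items (words or phrases) to be read out
    participant_ids: List[str]   # identifiers of the dictation participating users

# Placeholder registry mapping a participant identifier to a second-terminal address.
TERMINAL_REGISTRY: Dict[str, str] = {"student-01": "terminal://192.0.2.10"}

def synthesize_voice_file(items: List[str]) -> bytes:
    """Stand-in for text-to-speech synthesis of the target dictation material."""
    return "\n".join(items).encode("utf-8")

def send_file(terminal_address: str, payload: bytes) -> None:
    """Stand-in for the transport that delivers the voice file to a second terminal."""
    print(f"sending {len(payload)} bytes to {terminal_address}")

def start_dictation(config: DictationConfig) -> None:
    """First-terminal flow: synthesize the voice file once, then push it to each participant's terminal."""
    voice_file = synthesize_voice_file(config.target_material)
    for user_id in config.participant_ids:
        send_file(TERMINAL_REGISTRY[user_id], voice_file)

start_dictation(DictationConfig(["apple", "banana"], ["student-01"]))
```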
2. The method of claim 1, wherein the configuration interface comprises at least one of: a candidate dictation content identifier and a dictation content supplement area; and
the displaying a configuration interface and determining a target dictation material and a dictation participating user through the configuration interface comprises:
displaying candidate dictation content identifiers, and in response to detecting a first selection operation on the candidate dictation content identifiers, determining the candidate dictation content indicated by the candidate dictation content identifier targeted by the first selection operation as the target dictation material; and
in response to acquiring supplementary dictation content input in the dictation content supplement area, determining the supplementary dictation content as the target dictation material.
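One way to realize the claim-2 logic of assembling the target dictation material from a selected candidate set plus supplemented content might look like the sketch below; the candidate set names and the helper function are hypothetical, not taken from the patent.

```python
from typing import List, Optional

# Hypothetical candidate dictation content sets shown on the configuration interface.
CANDIDATE_SETS = {
    "unit-3-words": ["apple", "banana", "cherry"],
    "unit-4-words": ["dog", "elephant"],
}

def determine_target_material(selected_candidate_id: Optional[str],
                              supplemented_text: str = "") -> List[str]:
    """Resolve the target dictation material from a first selection operation on a
    candidate set and/or content typed into the dictation content supplement area."""
    material: List[str] = []
    if selected_candidate_id is not None:
        material.extend(CANDIDATE_SETS[selected_candidate_id])
    material.extend(word for word in supplemented_text.split() if word)
    return material

print(determine_target_material("unit-3-words", "grape"))
```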
3. The method of claim 1, wherein the configuration interface comprises a dictation mode information configuration area; and
the method further comprises:
acquiring dictation mode information configured by the dictation initiating user for the target dictation material through the configuration interface, wherein the terminal device plays the voice file in a dictation mode indicated by the dictation mode information, and the dictation mode information comprises at least one of the following: a number of times each word is repeated, a reading interval duration between words, and dictation order indication information.
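The dictation mode information of claim 3 (repetition count, inter-word interval, dictation order) could be modeled as a small settings object, for example as in the sketch below; field names are illustrative and not taken from the patent.

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class DictationMode:
    repeat_count: int = 2          # how many times each word is read out
    interval_seconds: float = 5.0  # pause between consecutive words
    shuffle_order: bool = False    # whether the dictation order differs from the material order

def build_playlist(items: List[str], mode: DictationMode) -> List[str]:
    """Expand the target dictation material into the word sequence a terminal would play."""
    ordered = items[:]
    if mode.shuffle_order:
        random.shuffle(ordered)
    return [word for word in ordered for _ in range(mode.repeat_count)]

print(build_playlist(["cat", "dog"], DictationMode(repeat_count=3)))
```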
4. The method of claim 1, wherein the target dictation material comprises at least one dictation item, and the voice file corresponding to the target dictation material is determined by:
determining, for each dictation item in the target dictation material, whether a corresponding voice file has been generated; and in response to determining that the corresponding voice file has not been generated, synthesizing the voice file corresponding to the dictation item.
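Claim 4's check-then-synthesize behavior resembles a simple cache around a text-to-speech step. A minimal sketch, with the synthesis call stubbed out and the cache keying invented for illustration, might be:

```python
from typing import Dict, List

# Hypothetical cache of previously generated voice files, keyed by dictation item.
VOICE_CACHE: Dict[str, bytes] = {}

def synthesize(item: str) -> bytes:
    """Stand-in for a text-to-speech call."""
    return item.encode("utf-8")

def voice_files_for(material: List[str]) -> Dict[str, bytes]:
    """Reuse an existing voice file when one was already generated,
    and synthesize a voice file only for items that do not have one yet."""
    for item in material:
        if item not in VOICE_CACHE:        # the "not yet generated" branch
            VOICE_CACHE[item] = synthesize(item)
    return {item: VOICE_CACHE[item] for item in material}

print(voice_files_for(["apple", "banana"]))
```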
5. The method of claim 1, wherein the second terminal presents a dictation notification.
6. The method of claim 1, wherein the second terminal plays the voice file and captures user images of the dictation participating user in response to a predefined dictation start operation.
7. The method of claim 6, wherein the second terminal acquires writing images of the dictation participating user via a display screen and/or a camera.
8. The method of claim 6, wherein the second terminal captures a dictation content image in response to determining that the dictation is complete.
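Claims 6-8 together describe the second terminal's behavior: play the voice file after the dictation start operation, capture user and writing images as dictation proceeds, and take a dictation content image once dictation is complete. A rough, stubbed-out sketch of that loop (camera and playback calls are placeholders, not real device APIs) could be:

```python
import time

def play(word):
    """Stand-in for playing one entry of the voice file."""
    print(f"playing: {word}")

def capture_user_image():
    """Stand-in for grabbing a camera frame of the dictation participating user."""
    return f"user-frame@{time.time():.3f}"

def capture_writing_image():
    """Stand-in for sampling the handwriting area via the display screen or camera."""
    return f"writing-frame@{time.time():.3f}"

def run_dictation(playlist, interval_seconds=0.1):
    """Play each entry, capture user/writing images during dictation,
    and take a final dictation content image once dictation is complete."""
    frames = []
    for word in playlist:
        play(word)
        frames.append((capture_user_image(), capture_writing_image()))
        time.sleep(interval_seconds)
    dictation_content_image = capture_writing_image()  # photo of the finished answer sheet
    return frames, dictation_content_image

print(run_dictation(["cat", "dog"]))
```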
9. The method of claim 1, further comprising:
displaying dictation progress information of each dictation participating user.
10. The method of claim 1, further comprising:
displaying, for the dictation participating users, the user images captured during dictation, either during the dictation process or after the dictation is finished.
11. The method of claim 10, wherein there is an association between a user image and a writing image acquired at the same time point; and
the method further comprises:
displaying the user image and the writing image that have the association relationship.
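The same-time-point association of claim 11 could be realized by pairing each user image with the writing image whose capture timestamp is closest. The sketch below shows one such pairing strategy (writing images assumed to be in chronological order); it is purely illustrative.

```python
from bisect import bisect_left

def pair_by_time(user_images, writing_images):
    """Associate each user image with the writing image whose capture time is closest.
    Both lists hold (timestamp, image) tuples; writing_images must be sorted by timestamp."""
    times = [t for t, _ in writing_images]
    pairs = []
    for t, user_img in user_images:
        i = bisect_left(times, t)
        neighbours = [j for j in (i - 1, i) if 0 <= j < len(writing_images)]
        best = min(neighbours, key=lambda j: abs(times[j] - t))
        pairs.append((user_img, writing_images[best][1]))
    return pairs

print(pair_by_time([(1.0, "u1"), (2.1, "u2")], [(0.9, "w1"), (2.0, "w2")]))
```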
12. The method of claim 8, further comprising:
displaying the dictation content image and the target dictation material; and
generating a correction result for the dictation content image according to a second selection operation on the dictation items in the target dictation material, wherein the correction result comprises the target dictation material and error item indication information.
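The correction result of claim 12 is essentially the target material plus error item indication information derived from the second selection operation. A minimal data-model sketch, with hypothetical names, might be:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CorrectionResult:
    """Correction result: the target material plus error item indication information."""
    target_material: List[str]
    wrong_item_indexes: List[int]

def build_correction(target_material: List[str], selected_wrong: List[int]) -> CorrectionResult:
    """The second selection operation marks which dictation items were written incorrectly."""
    return CorrectionResult(target_material, sorted(set(selected_wrong)))

print(build_correction(["apple", "banana", "cherry"], [2, 0]))
```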
13. The method of claim 12, wherein the second terminal presents the correction result, and in response to detecting a trigger operation on a dictation item, acquires and presents interpretation information of the dictation item targeted by the trigger operation.
14. A dictation interaction apparatus, applied to a first terminal, the apparatus comprising:
a display unit, configured to display a configuration interface and determine a target dictation material and a dictation participating user through the configuration interface, wherein the configuration interface is used by a dictation initiating user to configure the target dictation material and a dictation participating user identifier; and
a sending unit, configured to send the voice file corresponding to the target dictation material to a second terminal corresponding to the dictation participating user identifier, wherein the dictation participating user performs dictation based on the voice file.
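The two-unit split of claim 14 (a display unit and a sending unit) could be mirrored in code roughly as follows; the unit boundaries follow the claim, while the class internals are invented placeholders.

```python
class DisplayUnit:
    """Presents the configuration interface and returns the initiating user's choices."""
    def present_configuration_interface(self):
        # A real client would render a UI; this stub returns fixed sample choices.
        return ["apple", "banana"], ["student-01"]

class SendingUnit:
    """Delivers the voice file to the second terminal of each participating user."""
    def send(self, participant_ids, voice_file):
        for user_id in participant_ids:
            print(f"sending {len(voice_file)} bytes for {user_id}")

class DictationInteractionApparatus:
    """Two-unit split mirroring claim 14; internals are illustrative stubs."""
    def __init__(self):
        self.display_unit = DisplayUnit()
        self.sending_unit = SendingUnit()

    def run(self):
        material, participants = self.display_unit.present_configuration_interface()
        voice_file = "\n".join(material).encode("utf-8")  # stand-in for speech synthesis
        self.sending_unit.send(participants, voice_file)

DictationInteractionApparatus().run()
```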
15. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-13.
16. A computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-13.
CN202010714796.2A 2020-07-21 2020-07-21 Dictation interaction method and device and electronic equipment Pending CN111930453A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010714796.2A CN111930453A (en) 2020-07-21 2020-07-21 Dictation interaction method and device and electronic equipment
PCT/CN2021/105611 WO2022017203A1 (en) 2020-07-21 2021-07-09 Dictation interaction method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010714796.2A CN111930453A (en) 2020-07-21 2020-07-21 Dictation interaction method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111930453A true CN111930453A (en) 2020-11-13

Family

ID=73315319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010714796.2A Pending CN111930453A (en) 2020-07-21 2020-07-21 Dictation interaction method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN111930453A (en)
WO (1) WO2022017203A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712737A (en) * 2021-01-13 2021-04-27 百度在线网络技术(北京)有限公司 Interaction method, device, equipment and storage medium
CN112817558A (en) * 2021-02-19 2021-05-18 北京大米科技有限公司 Method and device for processing dictation data, readable storage medium and electronic equipment
WO2022017203A1 (en) * 2020-07-21 2022-01-27 北京字节跳动网络技术有限公司 Dictation interaction method and apparatus, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105702246A (en) * 2016-03-17 2016-06-22 广东小天才科技有限公司 Method and device for assisting user for dictation
US20170309275A1 (en) * 2014-11-26 2017-10-26 Panasonic Intellectual Property Corporation Of America Method and apparatus for recognizing speech by lip reading
CN110263334A (en) * 2019-06-06 2019-09-20 深圳市柯达科电子科技有限公司 A kind of method and readable storage medium storing program for executing assisting foreign language learning
CN110490780A (en) * 2019-08-27 2019-11-22 北京赢裕科技有限公司 A kind of method and system assisting verbal learning
CN111079423A (en) * 2019-08-02 2020-04-28 广东小天才科技有限公司 Method for generating dictation, reading and reporting audio, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100443245B1 (en) * 2001-06-19 2004-08-04 김헌동 Method and apparatus for character cognition for overwriting-study
CN102419918A (en) * 2010-12-30 2012-04-18 深圳市高德讯科技有限公司 Method and system for teachers to assign homework and system for students to do homework
CN110309350B (en) * 2018-03-21 2023-09-01 腾讯科技(深圳)有限公司 Processing method, system, device, medium and electronic equipment for recitation tasks
CN111081117A (en) * 2019-05-10 2020-04-28 广东小天才科技有限公司 Writing detection method and electronic equipment
CN111930453A (en) * 2020-07-21 2020-11-13 北京字节跳动网络技术有限公司 Dictation interaction method and device and electronic equipment

Also Published As

Publication number Publication date
WO2022017203A1 (en) 2022-01-27

Similar Documents

Publication Publication Date Title
US11158102B2 (en) Method and apparatus for processing information
WO2022017203A1 (en) Dictation interaction method and apparatus, and electronic device
US10657834B2 (en) Smart bookmarks
WO2022089192A1 (en) Interaction processing method and apparatus, electronic device, and storage medium
EP4343514A1 (en) Display method and apparatus, and device and storage medium
CN110830362B (en) Content generation method and mobile terminal
WO2023051294A9 (en) Prop processing method and apparatus, and device and medium
EP4192021A1 (en) Audio data processing method and apparatus, and device and storage medium
CN111897976A (en) Virtual image synthesis method and device, electronic equipment and storage medium
WO2023134419A1 (en) Information interaction method and apparatus, and device and storage medium
US10965743B2 (en) Synchronized annotations in fixed digital documents
CN112423107A (en) Lyric video display method and device, electronic equipment and computer readable medium
CN114584716A (en) Picture processing method, device, equipment and storage medium
US20240114106A1 (en) Machine learning driven teleprompter
WO2023134558A1 (en) Interaction method and apparatus, electronic device, storage medium, and program product
CN108391152A (en) Display control method and display control unit
WO2023056850A1 (en) Page display method and apparatus, and device and storage medium
WO2022218109A1 (en) Interaction method and apparatus, electronic device, and computer readable storage medium
CN115941869A (en) Audio processing method and device and electronic equipment
US20140178035A1 (en) Communicating with digital media interaction bundles
CN112162686B (en) House resource information display method and device, electronic equipment and computer readable medium
CN116137662A (en) Page display method and device, electronic equipment, storage medium and program product
CN113132789B (en) Multimedia interaction method, device, equipment and medium
CN111930229B (en) Man-machine interaction method and device and electronic equipment
CN111785104B (en) Information processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201113