CN110075534B - Real-time voice method and device, storage medium and electronic equipment - Google Patents

Real-time voice method and device, storage medium and electronic equipment

Info

Publication number
CN110075534B
Authority
CN
China
Prior art keywords
display screen
real
posture information
receiver
included angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910399993.7A
Other languages
Chinese (zh)
Other versions
CN110075534A (en
Inventor
卢娴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN201910399993.7A priority Critical patent/CN110075534B/en
Publication of CN110075534A publication Critical patent/CN110075534A/en
Application granted granted Critical
Publication of CN110075534B publication Critical patent/CN110075534B/en

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85: Providing additional services to players
    • A63F13/87: Communicating with other players during game play, e.g. by e-mail or chat
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/162: Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50: Features of games using an electronically generated display having two or more dimensions, characterized by details of game servers
    • A63F2300/57: Features of games using an electronically generated display having two or more dimensions, characterized by details of game services offered to the player
    • A63F2300/572: Communication between players during game play of non-game information, e.g. e-mail, chat, file transfer, streaming of audio and streaming of video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

The disclosure belongs to the field of computer technologies and relates to a real-time voice method and device, a computer-readable storage medium and an electronic device. The method comprises the following steps: acquiring a first included angle between a first display screen and a second display screen; when the included angle between the two display screens changes from the first included angle to a second included angle, determining the first display screen or the second display screen as a target display screen; and determining a receiver of real-time voice information according to the target display screen, so as to conduct real-time voice with the receiver. On the one hand, the real-time voice function is realized according to the characteristics of the mobile terminal: no redundant control is added to the interactive interface, no precious screen space is occupied, and the use of other interactive controls is neither interrupted nor affected. On the other hand, the receiver of the voice information can be selected directly, without further operation, providing a more convenient and faster new input dimension for voice information.

Description

Real-time voice method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a real-time speech method and a real-time speech apparatus, a computer-readable storage medium, and an electronic device.
Background
Currently, first-person shooter (FPS) games and multiplayer online battle arena (MOBA) games occupy an important share of the game market, but FPS, MOBA and other real-time games on existing mobile devices do not yet support a "hold-to-talk" real-time voice communication function during battle. When a player does not wish to carry out long, continuous voice communication but needs voice communication at a specific moment of the game, that need cannot be met. Although a player can convey information to some extent through an always-open microphone or a "voice to text" function on the mobile terminal, instant voice communication cannot be realized.
In view of the above, there is a need in the art to develop a new real-time speech method and apparatus.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a real-time voice method, a real-time voice apparatus, a computer-readable storage medium and an electronic device, which overcome, at least to a certain extent, the inconvenience of real-time voice communication caused by the limitations of the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, there is provided a real-time voice method applied to a mobile terminal having a first display screen and a second display screen that are foldably connected, the method including: acquiring a first included angle between the first display screen and the second display screen; when the included angle between the first display screen and the second display screen is changed from the first included angle to a second included angle, determining that the first display screen or the second display screen is a target display screen; and determining a receiver of the real-time voice information according to the target display screen so as to carry out real-time voice with the receiver.
In an exemplary embodiment of the present disclosure, when the included angle between the first display screen and the second display screen is changed from a first included angle to a second included angle, determining that the first display screen or the second display screen is the target display screen includes: acquiring first posture information corresponding to the first display screen based on the first included angle; acquiring second posture information corresponding to the first display screen based on the second included angle; and if the first posture information is different from the second posture information, determining that the first display screen is the target display screen.
In an exemplary embodiment of the present disclosure, if there is a difference between the first posture information and the second posture information, determining that the first display screen is the target display screen includes: if the first posture information is different from the second posture information, determining a first deflection angle of the first display screen according to the first posture information and the second posture information; and if the first deflection angle is larger than a preset angle, determining that the first display screen is the target display screen.
In an exemplary embodiment of the present disclosure, when the included angle between the first display screen and the second display screen is changed from a first included angle to a second included angle, determining that the first display screen or the second display screen is the target display screen includes: acquiring third posture information corresponding to the second display screen based on the first included angle; acquiring fourth posture information corresponding to the second display screen based on the second included angle; and if the third posture information is different from the fourth posture information, determining that the second display screen is the target display screen.
In an exemplary embodiment of the present disclosure, if there is a difference between the third posture information and the fourth posture information, determining that the second display screen is the target display screen includes: if the third posture information is different from the fourth posture information, determining a second deflection angle of the second display screen according to the third posture information and the fourth posture information; and if the second deflection angle is larger than a preset angle, determining that the second display screen is the target display screen.
In an exemplary embodiment of the present disclosure, the determining a receiver of the real-time voice information according to the target display screen includes: if the first display screen is the target display screen, determining that a receiver of the real-time voice information is a first receiver; and if the second display screen is the target display screen, determining that the receiver of the real-time voice information is a second receiver.
In an exemplary embodiment of the present disclosure, the method further comprises: when the receiver of the real-time voice information is determined to be the first receiver, providing a first reminding identifier corresponding to the first receiver; and when the receiver of the real-time voice information is determined to be the second receiver, providing a second reminding identifier corresponding to the second receiver.
In an exemplary embodiment of the disclosure, after the real-time voice with the receiver, the method further comprises: when the second included angle is restored to the first included angle, triggering closure of the real-time voice with the receiver.
According to an aspect of the present disclosure, there is provided a real-time voice apparatus, the apparatus including: an included angle acquisition module configured to acquire a first included angle between the first display screen and the second display screen; the screen determining module is configured to determine that the first display screen or the second display screen is a target display screen when an included angle between the first display screen and the second display screen is changed from the first included angle to a second included angle; and the voice sending module is configured to determine a receiver of the real-time voice information according to the target display screen so as to carry out real-time voice with the receiver.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor and a memory; wherein the memory has stored thereon computer readable instructions which, when executed by the processor, implement the real-time speech method of any of the above-described exemplary embodiments.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a real-time speech method in any of the above-described exemplary embodiments.
As can be seen from the foregoing technical solutions, the real-time speech method, the real-time speech apparatus, the computer storage medium and the electronic device in the exemplary embodiments of the present disclosure have at least the following advantages and positive effects:
in the method and the device provided by the exemplary embodiments of the present disclosure, the corresponding receiver of real-time voice information is determined by folding different display screens of the mobile terminal, completing real-time voice communication between users. On the one hand, the real-time voice function is realized according to the characteristics of the mobile terminal: no redundant control is added to the interactive interface, no precious screen space is occupied, and the use of other interactive controls is neither interrupted nor affected. On the other hand, the receiver of the voice information can be selected directly, without further operation, providing a more convenient and faster new input dimension for voice information.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 schematically illustrates a flow chart of a real-time speech method in an exemplary embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a method of determining the first display screen as the target display screen in an exemplary embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a method of determining whether the first display screen is the target display screen in an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of a method of determining the second display screen as the target display screen in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow chart of a method of determining whether the second display screen is the target display screen in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of a method of determining a receiver of real-time voice information in an exemplary embodiment of the present disclosure;
FIG. 7 schematically illustrates a flow chart of a method of providing a receiver reminder identifier in an exemplary embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a prior-art application interface for enabling the real-time voice function;
FIG. 9 is a schematic diagram of a prior-art application interface providing a "voice to text" function;
FIG. 10 is a schematic diagram of a prior-art application interface for recording voice information;
FIG. 11 schematically illustrates an application interface of the mobile terminal when real-time voice is not in use in the present exemplary embodiment;
FIG. 12 schematically illustrates an application interface for conducting real-time speech by folding the first display screen in the present exemplary embodiment;
FIG. 13 schematically illustrates an application interface for conducting real-time speech by folding the second display screen in the present exemplary embodiment;
FIG. 14 schematically illustrates the structure of a real-time speech apparatus in an exemplary embodiment of the present disclosure;
FIG. 15 schematically illustrates an electronic device for implementing a real-time speech method in an exemplary embodiment of the present disclosure;
FIG. 16 schematically illustrates a computer-readable storage medium for implementing a real-time speech method in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/parts/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first" and "second", etc. are used merely as labels, and are not limiting on the number of their objects.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
In order to solve the problems in the related art, the present disclosure provides a real-time voice method applied to a mobile terminal having a first display screen and a second display screen that are foldably connected. Fig. 1 shows a flow chart of a real-time speech method, which, as shown in fig. 1, comprises at least the following steps:
s101, a first included angle between a first display screen and a second display screen is obtained.
S102, when the included angle between the first display screen and the second display screen is changed from a first included angle to a second included angle, determining that the first display screen or the second display screen is a target display screen.
And S103, determining a receiver of the real-time voice information according to the target display screen so as to carry out real-time voice with the receiver.
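Read together, steps S101 to S103 form a single control flow. The following Kotlin outline is a purely hypothetical sketch of that flow, not the claimed implementation; every identifier in it is an assumption:

```kotlin
// Hypothetical outline of steps S101-S103 (all names are assumptions).
enum class Screen { FIRST, SECOND }

class RealTimeVoiceController(
    private val determineTargetScreen: (firstAngle: Float, secondAngle: Float) -> Screen?,
    private val resolveReceiver: (Screen) -> String,        // e.g. "team" or "whole team"
    private val startRealTimeVoice: (receiver: String) -> Unit,
) {
    private var firstAngle: Float? = null

    fun onIncludedAngle(angle: Float) {
        val initial = firstAngle
        if (initial == null) {
            firstAngle = angle                              // S101: first included angle
            return
        }
        if (angle != initial) {                             // S102: changed to a second angle
            val target = determineTargetScreen(initial, angle) ?: return
            startRealTimeVoice(resolveReceiver(target))     // S103: voice with the receiver
        }
    }
}
```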
In an exemplary embodiment of the present disclosure, the corresponding receiver of real-time voice information is determined by folding different display screens of the mobile terminal, completing real-time voice communication between users. On the one hand, the real-time voice function is realized according to the characteristics of the mobile terminal: no redundant control is added to the interactive interface, no precious screen space is occupied, and the use of other interactive controls is neither interrupted nor affected. On the other hand, the receiver of the voice information can be selected directly, without further operation, providing a more convenient and faster new input dimension for voice information.
The steps of the real-time speech method are explained in detail below.
In step S101, a first angle between the first display screen and the second display screen is obtained.
In an exemplary embodiment of the disclosure, the first display screen and the second display screen are two display screens connected in a foldable manner; the connection may be a hinge, a magnetic attachment, or another connection manner, which is not particularly limited in this exemplary embodiment. Being foldably connected means the two display screens can be folded relative to each other, and the first included angle is the folding angle between them. To obtain the first included angle between the first display screen and the second display screen, a gyroscope may be arranged on the first display screen, on the second display screen, or on both simultaneously. The gyroscope is a positioning and control sensor based on free-space movement and gestures; it may be built into the mobile terminal or be a component the terminal already carries. In addition, other sensor components in the mobile terminal, such as an accelerometer or a compass, may be used to implement this function, which is not limited in this exemplary embodiment.
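As a purely illustrative sketch of one way to obtain the included angle on an Android-style foldable device (the class name and callback are assumptions; Sensor.TYPE_HINGE_ANGLE is the platform's hinge-angle sensor type on devices that expose it):

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Minimal sketch: observe the fold angle between the two display screens.
// TYPE_HINGE_ANGLE reports the included angle in degrees where available.
class HingeAngleMonitor(context: Context, private val onAngle: (Float) -> Unit) :
    SensorEventListener {

    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val hingeSensor: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_HINGE_ANGLE)

    fun start() {
        hingeSensor?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent?) {
        event?.let { onAngle(it.values[0]) }   // current included angle in degrees
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit
}
```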
In step S102, when an included angle between the first display screen and the second display screen is changed from a first included angle to a second included angle, it is determined that the first display screen or the second display screen is the target display screen.
In an exemplary embodiment of the present disclosure, the included angle between the two display screens may be monitored in real time by a sensor component, such as a gyroscope, disposed on the first display screen and/or the second display screen. The included angle between the first display screen and the second display screen may range from 0 to 360 degrees. Optionally, the range may be 0 to 180 degrees, 30 to 180 degrees, 60 to 180 degrees, or 90 to 180 degrees, which is not particularly limited in this exemplary embodiment. The initial included angle between the first display screen and the second display screen is the first included angle; for example, when the two display screens initially lie flat in a horizontal state, the first included angle is 180 degrees. After the first display screen and/or the second display screen is folded, its state changes, and the included angle between the two display screens becomes the second included angle. At this point, it may be determined whether the folding operation occurred on the first display screen or on the second display screen, and the display screen on which the folding operation occurred is determined as the target display screen.
In an alternative embodiment, fig. 2 is a flowchart illustrating a method for determining that the first display screen is the target display screen. As shown in fig. 2, the method may include at least the following steps.

In step S201, first posture information corresponding to the first display screen is acquired based on the first included angle. The first posture information comprises the precise 3D coordinates and orientation of the first display screen with respect to the geodetic coordinate system, and may be obtained, for example, with a gyroscope. A gyroscope is a sensor that measures the rotational angular velocity of the first display screen, also called an angular velocity sensor. When the included angle between the first display screen and the second display screen equals the first included angle, the angular velocity of the first display screen can be read from the gyroscope and continuously integrated, yielding the first posture information of the first display screen.

In step S202, second posture information corresponding to the first display screen is acquired based on the second included angle. The posture information of the first display screen can be tracked in real time through gyroscope integration: after the first posture information is acquired, the angular velocity at each subsequent moment is accumulated, so the posture is known at all times. After the mobile terminal is folded and the included angle between the two display screens equals the second included angle, the second posture information of the first display screen at that moment can be acquired. Alternatively, the instantaneous posture information of the first display screen can be acquired in real time with an accelerometer and a magnetic induction sensor: the magnetic induction sensor detects the strength of the magnetic field, while the accelerometer monitors the acceleration of the first display screen in real time; a filter removes the acceleration produced by the movement of the handset, leaving the gravitational acceleration of the first display screen. Combining the magnetic induction sensor and the accelerometer therefore also yields real-time posture information. The second posture information may thus be acquired in various ways, and this exemplary embodiment is not particularly limited thereto.

In step S203, if the first posture information differs from the second posture information, the first display screen is determined as the target display screen. When the included angle changes from the first included angle to the second included angle, it is not yet known whether the first display screen or the second display screen was folded. To determine whether the folded display screen is the first display screen, the acquired first posture information may be compared with the second posture information.

If the first posture information is the same as the second posture information, the change of the included angle was not produced by the first display screen; if they differ, the change was produced by the first display screen, and the first display screen can be determined as the target display screen. In this embodiment, the first display screen is identified as the target display screen through the difference in its posture information before and after folding; the folding angle can be obtained in real time, giving stronger immediacy and a more rigorous, accurate determination.
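A minimal sketch of the gyroscope-integration idea above, assuming a stream of per-panel angular-velocity samples with nanosecond timestamps (all names are assumptions):

```kotlin
// Minimal sketch: track one panel's orientation by integrating its gyroscope,
// then compare the pose recorded at the first angle with the pose at the second.
class PanelPoseTracker {
    var orientationDeg = 0.0f        // accumulated rotation about the hinge axis
        private set
    private var lastTimestampNs = 0L

    // Feed angular velocity (deg/s) samples; integration yields the pose over time.
    fun onGyroSample(angularVelocityDegPerSec: Float, timestampNs: Long) {
        if (lastTimestampNs != 0L) {
            val dtSec = (timestampNs - lastTimestampNs) / 1_000_000_000.0f
            orientationDeg += angularVelocityDegPerSec * dtSec
        }
        lastTimestampNs = timestampNs
    }
}

// The panel whose pose changed between the two included angles is the one that folded.
fun didPanelMove(poseAtFirstAngle: Float, poseAtSecondAngle: Float): Boolean =
    poseAtFirstAngle != poseAtSecondAngle
```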
A slight deflection of the first display screen is not by itself enough to conclude that the change in the included angle was intended, so the first display screen should not be taken as the target display screen on that basis alone. In an alternative embodiment, fig. 3 is a schematic flowchart illustrating a method for determining whether the first display screen is the target display screen. As shown in fig. 3, the method may include at least the following steps.

In step S301, if the first posture information differs from the second posture information, a first deflection angle of the first display screen is determined according to the two pieces of posture information: the comparison result of the first posture information and the second posture information is taken as the first deflection angle. For example, if the first posture information gives the first display screen an angle of 0° to the horizontal, and the second posture information gives 30°, the first deflection angle is 30°.

In step S302, if the first deflection angle is greater than a preset angle, the first display screen is determined as the target display screen. The preset angle is the condition for judging whether a deflecting first display screen may be determined as the target display screen. For example, with a first deflection angle of 30° and a preset angle of 15°, the deflection exceeds the preset angle and the first display screen can be determined as the target display screen; with a first deflection angle of 30° and a preset angle of 45°, the deflection falls short and the first display screen cannot be. The preset angle can be dynamically adjusted according to the user's operation habits or the size of the device. This exemplary embodiment refines the determination of whether the first display screen is the target display screen, improving the accuracy of the determination and making it more user-friendly.
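The preset-angle check of steps S301 and S302 could then be sketched as follows, with the 15° default taken from the example above (the function name and default value are assumptions):

```kotlin
import kotlin.math.abs

// The folded panel becomes the target only if its deflection exceeds the preset angle.
fun isTargetScreen(
    poseAtFirstAngleDeg: Float,
    poseAtSecondAngleDeg: Float,
    presetAngleDeg: Float = 15.0f,   // assumed default; tunable per user habit or device size
): Boolean = abs(poseAtSecondAngleDeg - poseAtFirstAngleDeg) > presetAngleDeg
```

Applied to each panel's own pose samples, the same helper serves steps S501 and S502 for the second display screen below.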
The target display screen may also be the second display screen rather than the first. In an alternative embodiment, fig. 4 is a flowchart illustrating a method for determining that the second display screen is the target display screen. As shown in fig. 4, the method may include at least the following steps.

In step S401, third posture information corresponding to the second display screen is acquired based on the first included angle. The third posture information comprises the precise 3D coordinates and orientation of the second display screen with respect to the geodetic coordinate system, and may be obtained, for example, with a gyroscope. When the included angle between the two display screens equals the first included angle, the angular velocity of the second display screen can be read from the gyroscope and continuously integrated, yielding the third posture information of the second display screen.

In step S402, fourth posture information corresponding to the second display screen is acquired based on the second included angle. As with the first display screen, the posture of the second display screen can be tracked in real time through gyroscope integration: after the third posture information is acquired, the angular velocity at each subsequent moment is accumulated. After the mobile terminal is folded and the included angle equals the second included angle, the fourth posture information of the second display screen at that moment can be acquired. Alternatively, the instantaneous posture information of the second display screen can be acquired in real time with an accelerometer and a magnetic induction sensor: the magnetic induction sensor detects the strength of the magnetic field, while the accelerometer monitors the acceleration of the second display screen in real time; a filter removes the acceleration produced by the movement of the handset, leaving the gravitational acceleration of the second display screen. The fourth posture information may thus be acquired in various ways, and this exemplary embodiment is not particularly limited thereto.

In step S403, if the third posture information differs from the fourth posture information, the second display screen is determined as the target display screen. When the included angle changes from the first included angle to the second included angle, it is not yet known which display screen was folded. To determine whether the folded display screen is the second display screen, the acquired third posture information may be compared with the fourth posture information.

If the third posture information is the same as the fourth posture information, the change of the included angle was not produced by the second display screen; if they differ, the change was produced by the second display screen, and the second display screen can be determined as the target display screen. In this embodiment, the second display screen is identified as the target display screen through the difference in its posture information before and after folding; the folding angle can be obtained in real time, giving stronger immediacy and a more rigorous, accurate determination.
Likewise, a slight deflection of the second display screen is not by itself enough to conclude that the change in the included angle was intended, so the second display screen should not be taken as the target display screen on that basis alone. In an alternative embodiment, fig. 5 is a flowchart illustrating a method for determining whether the second display screen is the target display screen. As shown in fig. 5, the method may include at least the following steps.

In step S501, if the third posture information differs from the fourth posture information, a second deflection angle of the second display screen is determined according to the two pieces of posture information: the comparison result of the third posture information and the fourth posture information is taken as the second deflection angle. For example, if the third posture information gives the second display screen an angle of 0° to the horizontal, and the fourth posture information gives 30°, the second deflection angle is 30°.

In step S502, if the second deflection angle is greater than the preset angle, the second display screen is determined as the target display screen. The preset angle is the condition for judging whether a deflecting second display screen may be determined as the target display screen. For example, with a second deflection angle of 30° and a preset angle of 15°, the deflection exceeds the preset angle and the second display screen can be determined as the target display screen; with a second deflection angle of 30° and a preset angle of 45°, it cannot be. The preset angle can be dynamically adjusted according to the user's operation habits or the size of the device. This exemplary embodiment refines the determination of whether the second display screen is the target display screen, improving the accuracy of the determination and making it more user-friendly.
In step S103, a receiver of the real-time voice message is determined according to the target display screen, so as to perform real-time voice with the receiver.
In the exemplary embodiment of the disclosure, through the real-time voice function on the mobile terminal, a user can record their own voice information in real time and send it to other users, and can likewise listen in real time to voice information sent by others, so that different users interact in real time through voice. The user listening to the voice information is the receiver of the real-time voice information. Because the first display screen and the second display screen correspond to different receivers, determining a display screen as the target display screen correspondingly determines the receiver.

In an alternative embodiment, fig. 6 shows a flowchart of a method for determining the receiver of real-time voice information. As shown in fig. 6, the method may include at least the following steps.

In step S601, if the first display screen is the target display screen, the receiver of the real-time voice information is determined to be the first receiver. When the first display screen is determined as the target display screen, it can be determined, according to the mapping preset by the mobile terminal between display screens and receivers, that the voice information currently being recorded is sent to the first receiver. For example, in a competitive game, the first receiver may be set to the teammates in the player's own squad, i.e., the teammates who formed a room together before the system matched the full team. If the first display screen is the target display screen, the real-time voice function of the mobile terminal can be turned on with the channel set to "team", so that the other squad members can listen to the real-time voice information.

In step S602, if the second display screen is the target display screen, the receiver of the real-time voice information is determined to be the second receiver. When the second display screen is determined as the target display screen, it can be determined, according to the preset mapping, that the voice information currently being recorded is sent to the second receiver. For example, in a competitive game, the second receiver may be set to all members of the whole team, without distinguishing whether they were matched by the system. If the second display screen is the target display screen, the real-time voice function can be turned on with the channel set to "whole team", so that all other team members can listen to the real-time voice information.

This exemplary embodiment provides a decision rule for activating real-time voice on different channels by folding a screen. When the user needs to communicate by voice with a specific receiver, extra selection operations are avoided, simplifying the user's operation flow and preserving the immediacy of the real-time voice function.
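Reusing the `Screen` enum from the earlier sketch, the preset screen-to-receiver mapping might look like this; the channel names mirror the "team" and "whole team" examples, and everything else is assumed:

```kotlin
// Preset mapping from the target display screen to the real-time voice receiver.
enum class VoiceChannel { TEAM, WHOLE_TEAM }

fun receiverFor(target: Screen): VoiceChannel = when (target) {
    Screen.FIRST -> VoiceChannel.TEAM          // first receiver: the player's own squad
    Screen.SECOND -> VoiceChannel.WHOLE_TEAM   // second receiver: every team member
}
```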
To help the user sending voice information check the identity of the receiver more intuitively, a reminder identifier can be displayed on the display interface of the mobile terminal. In an alternative embodiment, fig. 7 shows a flowchart of a method for providing a receiver reminder identifier. As shown in fig. 7, the method may include at least the following steps.

In step S701, when the receiver of the real-time voice information is determined to be the first receiver, a first reminder identifier corresponding to the first receiver is provided. The first reminder identifier reminds the user recording the voice information who the receiver is. It may show only the identity information of the current receiver, or show the identities of all possible receivers while highlighting the current one, or indicate the current receiver with a receiver-specific pattern; this exemplary embodiment is not particularly limited in this respect.

In step S702, when the receiver is determined to be the second receiver, a second reminder identifier corresponding to the second receiver is provided, with the same display options as above. In either case, the reminder identifier may be displayed on the terminal of the user recording the voice information, on the current receiver's terminal, or on all receivers' terminals, as the actual situation requires; this exemplary embodiment is not particularly limited in this respect.
After the user recording the voice information has finished communicating, the real-time voice function needs to be closed in time. In an optional embodiment, when the second included angle is restored to the first included angle, the closing of the real-time voice with the receiver is triggered. Mirroring the way the voice function is opened, the user folds the screen back so that the included angle between the first display screen and the second display screen returns to the first included angle of the initial state, and the voice function is closed. This exemplary embodiment closes the real-time voice function in a manner analogous to opening it: no extra control needs to be added, screen space of the mobile terminal is saved, and the operation is easy to perform and to learn.
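Closing mirrors opening. A sketch, assuming a small tolerance is needed because a physical hinge rarely returns to exactly the same angle (the tolerance value is an assumption):

```kotlin
import kotlin.math.abs

// Close real-time voice once the included angle returns to the first included angle.
fun maybeStopVoice(
    currentAngleDeg: Float,
    firstAngleDeg: Float,
    toleranceDeg: Float = 2.0f,      // assumed tolerance for sensor noise
    stopRealTimeVoice: () -> Unit,
) {
    if (abs(currentAngleDeg - firstAngleDeg) <= toleranceDeg) {
        stopRealTimeVoice()
    }
}
```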
The real-time speech method in the embodiment of the present disclosure is described in detail below with reference to an application scenario.
In the prior art, fig. 8 is a schematic diagram of an application interface in which the real-time voice function is started by clicking a control. As shown in fig. 8, a player may start the real-time voice function by clicking the control corresponding to a receiver. Different receivers are marked with microphone-shaped reminder identifiers and text, making it easy for the user to hold real-time voice communication on the whole-team or squad channel. When the user has finished sending the current voice information and needs to close the real-time voice function, the user can click the receiver's control again to end the transmission.
In the prior art, a user can also start a voice function through "voice to text". Fig. 9 is a schematic diagram of an application interface of the "voice to text" function; as shown in fig. 9, the function sits among the shortcut messages of the game interface. After the player opens the chat panel, they tap the "voice to text" control to start recording voice information. Fig. 10 is a schematic diagram of an application interface for recording voice information; as shown in fig. 10, the mobile terminal may provide a recording region that displays the remaining recordable duration and offers controls for cancelling or completing the current recording. The recorded voice information is then converted into corresponding text, which the player can edit before deciding whether to send it.
With the above common real-time voice approaches, a player who does not want prolonged voice communication but needs instant voice communication at a specific game moment must frequently switch controls or operate the chat panel repeatedly. Moreover, selecting a receiver of the voice information requires more than one selection operation. For competitive games with high demands on immediacy and operability, forcing the player into such lengthy voice-communication operations in the middle of an intense match harms the gameplay.
In view of the above drawbacks, the present disclosure provides a new real-time voice method applied to a mobile terminal having a first display screen and a second display screen that are foldably connected. Fig. 11 shows a schematic view of an application interface of the mobile terminal when real-time voice is not in use; as shown in fig. 11, the included angle between the first display screen and the second display screen is 180°, the terminal as a whole lies flat, and the real-time voice function is off. Fig. 12 is a schematic diagram of an application interface in which real-time voice is conducted by folding the first display screen; as shown in fig. 12, the first display screen is the left display screen of the mobile terminal. Folding the left display screen inward by more than the preset angle opens the real-time voice function on the "team" channel. Fig. 13 is a schematic diagram of an application interface in which real-time voice is conducted by folding the second display screen; as shown in fig. 13, the second display screen is the right display screen of the mobile terminal. Folding the right display screen inward by more than the preset angle opens the real-time voice function on the "whole team" channel.
In an exemplary embodiment of the present disclosure, the corresponding receiver of real-time voice information is determined by folding different display screens of the mobile terminal, completing real-time voice communication between users. On the one hand, the real-time voice function is realized according to the characteristics of the mobile terminal: no redundant control is added to the interactive interface, no precious screen space is occupied, and the use of other interactive controls is neither interrupted nor affected. On the other hand, the receiver of the voice information can be selected directly, without further operation, providing a more convenient and faster new input dimension for voice information.
Further, in an exemplary embodiment of the present disclosure, a real-time voice device is also provided. Fig. 14 shows a schematic structure of a real-time speech apparatus, and as shown in fig. 14, the real-time speech apparatus 1400 may include: an included angle obtaining module 1401, a screen determining module 1402 and a voice sending module 1403. Wherein:
an included angle obtaining module 1401 configured to obtain a first included angle between the first display screen and the second display screen; a screen determining module 1402 configured to determine the first display screen or the second display screen as a target display screen when an angle between the first display screen and the second display screen is changed from a first angle to a second angle; a voice sending module 1403 configured to determine a receiving party of the real-time voice information according to the target display screen, so as to perform real-time voice with the receiving party.
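A hypothetical Kotlin sketch of the three modules, with signatures assumed from the description above:

```kotlin
// Illustrative sketch of the real-time voice apparatus; the signatures are assumptions.
interface IncludedAngleModule {            // included angle obtaining module 1401
    fun firstIncludedAngle(): Float
}

interface ScreenDeterminationModule {      // screen determining module 1402
    fun targetScreen(firstAngle: Float, secondAngle: Float): Screen?
}

interface VoiceSendingModule {             // voice sending module 1403
    fun startRealTimeVoice(target: Screen)
}
```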
The details of the real-time speech device are described in detail in the corresponding real-time speech method, and therefore are not described herein again.
It should be noted that although several modules or units of the real-time speech device 1400 are mentioned in the above detailed description, such division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
An electronic device 1500 according to such an embodiment of the invention is described below with reference to fig. 15. The electronic device 1500 shown in fig. 15 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 15, electronic device 1500 is in the form of a general purpose computing device. Components of electronic device 1500 may include, but are not limited to: the at least one processing unit 1510, the at least one storage unit 1520, a bus 1530 connecting different system components (including the storage unit 1520 and the processing unit 1510), and a display unit 1540.
Wherein the memory unit stores program code that is executable by the processing unit 1510 to cause the processing unit 1510 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification.
The storage unit 1520 may include readable media in the form of volatile storage units, such as a random access memory unit (RAM) 1521 and/or a cache memory unit 1522, and may further include a read-only memory unit (ROM) 1523.
The storage unit 1520 may also include a program/utility 1524 having a set (at least one) of program modules 1525, such program modules 1525 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1530 may be any bus representing one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1500 can also communicate with one or more external devices 1700 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1500, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 1500 to communicate with one or more other computing devices. Such communication may occur via the input/output (I/O) interface 1550. The electronic device 1500 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 1560. As shown, the network adapter 1560 communicates with the other modules of the electronic device 1500 via the bus 1530. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above-mentioned "exemplary methods" section of the present description, when said program product is run on the terminal device.
Referring to fig. 16, a program product 1600 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (9)

1. A real-time voice method applied to a mobile terminal having a first display screen and a second display screen that are foldably connected, the method comprising:
acquiring a first included angle between the first display screen and the second display screen;
when the included angle between the first display screen and the second display screen is changed from the first included angle to a second included angle,
acquiring first posture information corresponding to the first display screen based on the first included angle;
acquiring second posture information corresponding to the first display screen based on the second included angle;
if the first posture information is different from the second posture information, determining that the first display screen is a target display screen; and/or
Acquiring third posture information corresponding to the second display screen based on the first included angle;
acquiring fourth posture information corresponding to the second display screen based on the second included angle;
if the third posture information is different from the fourth posture information, determining that the second display screen is a target display screen;
and determining a receiver of the real-time voice information according to the target display screen, based on a mapping relation, preset by the mobile terminal, between display screens and receivers, so as to perform real-time voice with the receiver.
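By way of illustration only, the flow of claim 1 can be sketched in plain Kotlin. Every name below (Posture, Screen, readPosture, openVoice, and the two angle callbacks) is a hypothetical stand-in introduced for this sketch; none of them is an API defined by the patent or by any real SDK, and the posture comparison is one plausible reading of the claim.

    // Hypothetical per-screen attitude reading; the patent does not fix a representation.
    data class Posture(val pitch: Float, val roll: Float, val yaw: Float)

    enum class Screen { FIRST, SECOND }

    class RealTimeVoiceController(
        // Preset mapping between each display screen and a voice receiver.
        private val screenToReceiver: Map<Screen, String>,
        private val readPosture: (Screen) -> Posture,
        private val openVoice: (receiver: String) -> Unit
    ) {
        // Postures of both screens captured at the first included angle.
        private var atFirstAngle: Map<Screen, Posture> = emptyMap()

        fun onFirstIncludedAngle() {
            atFirstAngle = listOf(Screen.FIRST, Screen.SECOND).associateWith(readPosture)
        }

        // Called once the included angle has changed to the second included angle.
        fun onSecondIncludedAngle() {
            // The screen whose posture changed between the two angles is the
            // one the user rotated, i.e. the target display screen.
            val target = when {
                moved(Screen.FIRST) -> Screen.FIRST
                moved(Screen.SECOND) -> Screen.SECOND
                else -> return // neither screen moved; no target, no voice session
            }
            // Look up the receiver bound to the target screen and start real-time voice.
            screenToReceiver[target]?.let(openVoice)
        }

        private fun moved(s: Screen): Boolean {
            val before = atFirstAngle[s] ?: return false
            return before != readPosture(s)
        }
    }

In practice the posture information would presumably come from per-screen accelerometer or gravity sensors, and the screen-to-receiver mapping would be configured in the voice application, but the patent leaves both details open.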
2. The real-time voice method according to claim 1, wherein the determining that the first display screen is the target display screen if the first posture information is different from the second posture information comprises:
if the first posture information is different from the second posture information, determining a first deflection angle of the first display screen according to the first posture information and the second posture information;
and if the first deflection angle is larger than a preset angle, determining that the first display screen is a target display screen.
3. The real-time voice method according to claim 1, wherein the determining that the second display screen is the target display screen if the third posture information is different from the fourth posture information comprises:
if the third posture information is different from the fourth posture information, determining a second deflection angle of the second display screen according to the third posture information and the fourth posture information;
and if the second deflection angle is larger than a preset angle, determining that the second display screen is a target display screen.
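Claims 2 and 3 refine the selection with a threshold test. A minimal sketch, reusing the hypothetical Posture type above and assuming, purely for illustration, that the deflection angle is measured as the change in pitch; the patent fixes neither a formula nor a threshold value.

    import kotlin.math.abs

    // Illustrative preset angle; the claims leave the value open.
    const val PRESET_ANGLE_DEGREES = 15f

    // One plausible reading: the screen's deflection is the change in its pitch
    // between the two hinge states.
    fun deflectionAngle(before: Posture, after: Posture): Float =
        abs(after.pitch - before.pitch)

    // A screen becomes the target only if its posture changed AND its deflection
    // exceeds the preset angle; claims 2 and 3 apply the same test to the first
    // and second display screens respectively.
    fun exceedsPresetAngle(before: Posture, after: Posture): Boolean =
        before != after && deflectionAngle(before, after) > PRESET_ANGLE_DEGREES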
4. The real-time voice method according to claim 1, wherein the determining a receiver of the real-time voice information according to the target display screen comprises:
if the first display screen is the target display screen, determining that a receiver of the real-time voice information is a first receiver;
and if the second display screen is the target display screen, determining that the receiver of the real-time voice information is a second receiver.
5. The real-time voice method according to claim 4, further comprising:
when the receiver of the real-time voice information is determined to be the first receiver, providing a first reminding identifier corresponding to the first receiver;
and when the receiver of the real-time voice information is determined to be the second receiver, providing a second reminding identifier corresponding to the second receiver.
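For claims 4 and 5, one plausible reading in the same sketch style: the target screen selects the first or second receiver, and the matching reminder identifier is surfaced to the user. The showReminder callback is an assumed UI hook, not part of the patent.

    fun receiverFor(target: Screen): String =
        if (target == Screen.FIRST) "first receiver" else "second receiver"

    fun remindUser(target: Screen, showReminder: (String) -> Unit) {
        // Provide the reminder identifier corresponding to the chosen receiver,
        // so the user knows which party will hear the real-time voice.
        showReminder("Real-time voice with the ${receiverFor(target)}")
    }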
6. The real-time voice method according to claim 1, wherein after performing real-time voice with the receiver, the method further comprises:
when the second included angle is restored to the first included angle, triggering closing of the real-time voice with the receiver.
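Claim 6 ends the session when the hinge returns to its original state. A sketch under the assumption that "restored to the first included angle" should tolerate small sensor noise; the tolerance value and the closeVoice callback are illustrative, and abs reuses the import from the sketch above.

    fun maybeCloseVoice(
        currentAngle: Float,
        firstAngle: Float,
        closeVoice: () -> Unit,
        toleranceDegrees: Float = 2f // sensors rarely report exact equality
    ) {
        // When the included angle is restored to the first included angle,
        // trigger closing of the real-time voice with the receiver.
        if (abs(currentAngle - firstAngle) <= toleranceDegrees) closeVoice()
    }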
7. A real-time voice device applied to a mobile terminal having a first display screen and a second display screen which are foldably connected, comprising:
an included angle acquisition module configured to acquire a first included angle between the first display screen and the second display screen;
a screen determination module configured to, when an angle between the first display screen and the second display screen is changed from the first angle to a second angle,
acquiring first posture information corresponding to the first display screen based on the first included angle;
acquiring second posture information corresponding to the first display screen based on the second included angle;
if the first posture information is different from the second posture information, determining that the first display screen is a target display screen; and/or
Acquiring third posture information corresponding to the second display screen based on the first included angle;
acquiring fourth posture information corresponding to the second display screen based on the second included angle;
if the third posture information is different from the fourth posture information, determining that the second display screen is a target display screen;
and a voice sending module configured to determine a receiver of real-time voice information according to the target display screen, based on a mapping relation, preset by the mobile terminal, between display screens and receivers, so as to perform real-time voice with the receiver.
8. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the real-time voice method of any one of claims 1-6.
9. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the real-time voice method of any one of claims 1-6 via execution of the executable instructions.
CN201910399993.7A 2019-05-14 2019-05-14 Real-time voice method and device, storage medium and electronic equipment Active CN110075534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910399993.7A CN110075534B (en) 2019-05-14 2019-05-14 Real-time voice method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910399993.7A CN110075534B (en) 2019-05-14 2019-05-14 Real-time voice method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110075534A (en) 2019-08-02
CN110075534B (en) 2022-04-29

Family

ID=67420135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910399993.7A Active CN110075534B (en) 2019-05-14 2019-05-14 Real-time voice method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110075534B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688079B (en) * 2019-09-24 2023-06-06 Oppo广东移动通信有限公司 Interactive control method, interactive control device, storage medium and display device
CN112786036B (en) * 2019-11-04 2023-08-08 海信视像科技股份有限公司 Display device and content display method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107728810A (en) * 2017-10-19 2018-02-23 广东欧珀移动通信有限公司 Terminal control method, device, terminal and storage medium
CN107809504A (en) * 2017-11-07 2018-03-16 广东欧珀移动通信有限公司 Method, apparatus, terminal and the storage medium of display information
CN108415753A (en) * 2018-03-12 2018-08-17 广东欧珀移动通信有限公司 Method for displaying user interface, device and terminal
CN108762640A (en) * 2018-05-28 2018-11-06 维沃移动通信有限公司 A kind of display methods and terminal of barrage information
CN109542316A (en) * 2018-11-23 2019-03-29 维沃移动通信有限公司 Display methods, terminal and the computer readable storage medium of information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120225694A1 (en) * 2010-10-01 2012-09-06 Sanjiv Sirpal Windows position control for phone applications

Also Published As

Publication number Publication date
CN110075534A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
US9264245B2 (en) Methods and devices for facilitating presentation feedback
WO2020151516A1 (en) Message sending method and mobile terminal
US20140362003A1 (en) Apparatus and method for selecting object by using multi-touch, and computer readable recording medium
US11169688B2 (en) Message processing method, message viewing method, and terminal
CN104238726B (en) Intelligent glasses control method, device and a kind of intelligent glasses
EP3309670B1 (en) Method for responding to operation track and operation track response apparatus
US10540451B2 (en) Assisted language learning
US10747499B2 (en) Information processing system and information processing method
US20140281962A1 (en) Mobile device of executing action in display unchecking mode and method of controlling the same
WO2020227326A1 (en) Providing user interfaces based on use contexts and managing playback of media
CN110075534B (en) Real-time voice method and device, storage medium and electronic equipment
KR20170012979A (en) Electronic device and method for sharing image content
US20130120249A1 (en) Electronic device
US20220406311A1 (en) Audio information processing method, apparatus, electronic device and storage medium
CN108646994A (en) Information processing method, device, mobile terminal and storage medium
CN110209243A (en) Method and apparatus, the storage medium, electronic equipment of mobile terminal control
CN108073572A (en) Information processing method and its device, simultaneous interpretation system
US20230015943A1 (en) Scratchpad creation method and electronic device
CN109857321A (en) Operating method, mobile terminal based on screen prjection, readable storage medium storing program for executing
EP2991289B1 (en) Electronic device and method for sending messages using the same
CN109388699A (en) Input method, device, equipment and storage medium
JP2016508271A (en) Controllable headset computer display
US11017313B2 (en) Situational context analysis program
EP4170589A1 (en) Music playing method and apparatus based on user interaction, and device and storage medium
US20180239440A1 (en) Information processing apparatus, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant