CN111491058A - Method for controlling operation mode, electronic device, and storage medium - Google Patents

Method for controlling operation mode, electronic device, and storage medium

Info

Publication number
CN111491058A
CN111491058A (Application CN202010244722.7A)
Authority
CN
China
Prior art keywords
target
video stream
generating device
sound generating
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010244722.7A
Other languages
Chinese (zh)
Inventor
全利民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010244722.7A
Publication of CN111491058A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72433User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72436User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72442User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for playing music files
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72466User interfaces specially adapted for cordless or mobile telephones with selection means, e.g. keys, having functions defined by the mode or the status of the device

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Telephone Function (AREA)

Abstract

Embodiments of the present invention disclose a method for controlling an operation mode, an electronic device, and a storage medium. The method comprises: when the electronic device is in a target working state, obtaining a target working mode corresponding to contact information based on the contact information corresponding to the target working state, where the target working state is a voice call state or a video call state; and, upon receiving an operation of playing a target audio/video stream, controlling the sound generating device corresponding to the target audio/video stream to operate in the target working mode. Embodiments of the present invention can determine the target working mode based on the contact information corresponding to the target working state, better meet the user's requirements for the target working state and for playing the target audio/video stream, and improve the user experience.

Description

Method for controlling operation mode, electronic device, and storage medium
Technical Field
Embodiments of the present invention relate to the technical field of electronic devices, and in particular to a method for controlling an operation mode, an electronic device, and a storage medium.
Background
With the increasing maturity of communication technology, voice calls and video calls have become an important service on electronic devices and are increasingly popular among users; more and more users rely on voice or video calls to communicate with family, colleagues, and friends.
At present, if a user needs to play a target audio/video stream while an electronic device is in a voice call or video call, the target audio/video stream cannot be played because the two functions are incompatible on the device, which degrades the user experience.
Disclosure of Invention
Embodiments of the present invention provide a method for controlling an operation mode, an electronic device, and a storage medium, so as to solve the problem that a target audio/video stream cannot be played while the electronic device is in a voice call state or a video call state because of this incompatibility.
In order to solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a method for controlling an operation mode, applied to an electronic device, the method comprising:
when the electronic device is in a target working state, obtaining a target working mode corresponding to contact information based on the contact information corresponding to the target working state, where the target working state is a voice call state or a video call state; and
upon receiving an operation of playing a target audio/video stream, controlling the sound generating device corresponding to the target audio/video stream to operate in the target working mode.
In a second aspect, an embodiment of the present invention provides an electronic device, comprising:
an obtaining module, configured to obtain, when the electronic device is in a target working state, a target working mode corresponding to contact information based on the contact information corresponding to the target working state, where the target working state is a voice call state or a video call state; and
a control module, configured to, upon receiving an operation of playing a target audio/video stream, control the sound generating device corresponding to the target audio/video stream to operate in the target working mode.
In a third aspect, an embodiment of the present invention provides an electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method of the first aspect.
In the embodiments of the present invention, when the electronic device is in the target working state, the target working mode corresponding to the contact information is obtained based on the contact information corresponding to the target working state; then, when the electronic device plays the target audio/video stream, the sound generating device corresponding to the target audio/video stream operates in the target working mode. The target working mode can therefore be determined based on the contact information, which better meets the user's requirements for the voice or video call and for playing the target audio/video stream, and the electronic device can play the target audio/video stream during the voice call or video call, thereby improving the user experience.
Drawings
The present invention will be better understood from the following description of specific embodiments thereof taken in conjunction with the accompanying drawings, in which like or similar reference characters designate like or similar features.
Fig. 1 is a flowchart of a method for controlling an operating mode according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a parallel operating mode according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a disabled operating mode according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a voice transfer according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of another electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In embodiments of the present invention, the electronic device may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
Fig. 1 is a flowchart of a method for controlling an operating mode according to an embodiment of the present invention. As shown in fig. 1, the method for controlling the operation mode includes:
Step 101: when the electronic device is in a target working state, obtain a target working mode corresponding to contact information based on the contact information corresponding to the target working state, where the target working state is a voice call state or a video call state;
Step 102: upon receiving an operation of playing a target audio/video stream, control the sound generating device corresponding to the target audio/video stream to operate in the target working mode.
In the embodiments of the present invention, when the electronic device is in the target working state, the target working mode corresponding to the contact information is obtained based on the contact information corresponding to the target working state; then, when the electronic device plays the target audio/video stream, the sound generating device corresponding to the target audio/video stream operates in the target working mode. The target working mode can therefore be determined based on the contact information, which better meets the user's requirements for the voice or video call and for playing the target audio/video stream, and the electronic device can also play the target audio/video stream during the voice call or video call, thereby improving the user experience.
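To make steps 101 and 102 concrete, the following Kotlin sketch shows one possible arrangement. The WorkingState, WorkingMode, and ContactInfo types, the mode-lookup callback, and the device-control callback are illustrative assumptions, not part of the disclosed method.

enum class WorkingState { VOICE_CALL, VIDEO_CALL, IDLE }
enum class WorkingMode { PARALLEL, DISABLED }

data class ContactInfo(val priority: String, val count: Int, val group: String)

class OperationModeController(
    private val lookupMode: (ContactInfo) -> WorkingMode   // e.g. the weighted feature-point matching described later
) {
    private var targetMode: WorkingMode? = null

    // Step 101: when the device enters a voice/video call, derive the target working
    // mode from the contact information of that call.
    fun onTargetWorkingState(state: WorkingState, contact: ContactInfo) {
        if (state == WorkingState.VOICE_CALL || state == WorkingState.VIDEO_CALL) {
            targetMode = lookupMode(contact)
        }
    }

    // Step 102: when the user plays a target audio/video stream, run the stream's
    // sound generating device in the previously determined target working mode.
    fun onPlayTargetStream(runSoundDevice: (WorkingMode) -> Unit) {
        targetMode?.let(runSoundDevice)
    }
}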
In some embodiments of the present invention, the contact information in step 101 includes at least one of: priority information of the contact, information on the number of contacts, and information on the group to which the contact belongs.
The priority information of a contact may be determined according to the contact's closeness to the user, for example how frequently they chat; the closer the contact, the higher the corresponding priority. The user may also select and set the priority according to his or her own needs.
The information on the number of contacts refers to the number of contacts participating in the same chat scene at the same time.
The group to which a contact belongs refers to a tag, e.g., a family, classmate, or colleague tag.
For example:
Priority of the contact (important feature point): important contact, general contact.
Number of contacts (secondary feature point): two persons or more.
Group to which the contact belongs (general feature point): for example family, classmate, colleague, etc.
A usage scene of the voice call state or the video call state is determined based on the priority of the contacts, the number of contacts, and the group to which the contacts belong;
and a target working mode matching the usage scene is determined based on the usage scene.
In embodiments of the present invention, the data used to determine the target mode is formed by collecting feature-point information: important feature points, secondary feature points, and general feature points are defined, and the data of each feature point is collected.
When determining the target mode based on the feature points, a preference principle is applied. The preference principle means that, according to the user's historical operation habits, the user has a preferred mode for each feature point. For example, for important contacts the user may prefer the parallel working mode (i.e., the working mode in which the sound generating device and/or the sound pickup device corresponding to the target working state and the sound generating device corresponding to the target audio/video stream operate simultaneously), while for general contacts the user may prefer the disabled working mode (i.e., the working mode in which only the sound generating device corresponding to the target audio/video stream operates, or only the sound generating device and the sound pickup device corresponding to the target working state operate). Judging by the collected user feature points therefore better reflects the user's operation habits over a given period. Here, the sound generating device includes a device on the electronic device that produces sound, such as an earpiece; the sound pickup device includes a device on the electronic device that picks up the user's voice, such as a microphone.
When determining the target mode based on the feature points, a maximum-match priority principle is also applied: the more of the three feature points whose preferred mode matches the user's usage habits, the closer that mode can be considered to the user's real operation habits. Therefore, the system actively sets for the user the mode that is matched by the most feature points.
When determining the target mode based on the feature points, an importance priority principle is also applied: the feature points are not equally important. A feature point such as the category the user actively assigns to a contact carries the user's subjective preference and better fits the user's behavior, so when two modes are matched by the same number of feature points, the importance of the feature points is weighted to decide which target mode to set on the user's electronic device.
Correspondingly, in this example, the maximum-match priority principle takes precedence over the importance priority principle; accordingly, the weight of an important feature point > the weight of a secondary feature point > the weight of a general feature point, and the mode with the highest total weighted score is finally taken as the target mode.
Correspondingly, in this example, the weights corresponding to the priority information, the number-of-contacts information, and the contact group information differ between users, that is, their importance differs from user to user.
It should be noted that the weights may be adjusted according to the observed operation habits of the users, that is, the weights corresponding to each user are different; this is merely an example and is not described further here.
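The following Kotlin sketch illustrates the maximum-match and importance-priority principles above; the feature names, the numeric weights (important = 3, secondary = 2, general = 1), and the Mode/FeaturePoint types are assumptions for illustration only.

enum class Mode { PARALLEL, DISABLED }

data class FeaturePoint(
    val name: String,          // e.g. "contact priority" (important feature point)
    val weight: Int,           // important > secondary > general
    val preferredMode: Mode    // mode the user historically prefers for this feature
)

fun selectTargetMode(features: List<FeaturePoint>): Mode {
    // Maximum-match priority: the mode preferred by the most feature points wins.
    val matchCount = features.groupingBy { it.preferredMode }.eachCount()
    val best = matchCount.maxByOrNull { it.value }!!
    val tied = matchCount.filterValues { it == best.value }.keys
    if (tied.size == 1) return best.key

    // Importance priority: on a tie, compare the total weight of the matching features.
    return tied.maxByOrNull { mode ->
        features.filter { it.preferredMode == mode }.sumOf { it.weight }
    }!!
}

// Example usage with hypothetical weights (important = 3, secondary = 2, general = 1).
val mode = selectTargetMode(listOf(
    FeaturePoint("contact priority", 3, Mode.PARALLEL),
    FeaturePoint("number of contacts", 2, Mode.DISABLED),
    FeaturePoint("contact group", 1, Mode.PARALLEL)
))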
In embodiments of the present invention, the user's requirement for a personalized working scene is determined and met through scene matching. A machine-learning function is therefore provided, so that the user's target mode can be determined by collecting the user's feature points; this reduces the operations required for the user to manually select the target mode and improves the user experience.
In some embodiments of the present invention, the target mode comprises a parallel working mode and a disabled working mode.
Correspondingly, in this example, the parallel working mode is the working mode in which the sound generating device and/or the sound pickup device corresponding to the target working state and the sound generating device corresponding to the target audio/video stream operate simultaneously.
The voice call state refers to a voice call, and the video call state refers to a video call.
As shown in fig. 2, controlling the sound generating device corresponding to the target audio/video stream to operate in the parallel working mode includes:
Step 201: the electronic device is in the target working mode;
Step 202: the device is in the parallel working mode;
Step 203: adjust the volume of the sound output by the sound generating device corresponding to the voice call state or the video call state;
Step 204: play the target audio/video stream;
Step 205: the electronic device is in the voice call state or the video call state;
Step 206: adjust the volume of the sound output by the sound generating device corresponding to the target audio/video stream;
Step 207: control the sound generating device corresponding to the target audio/video stream to operate in the parallel working mode.
Specifically, in the voice call state or the video call state and in the parallel working mode, the user can view the content of the target audio/video stream during the call, and the volume of the sound output by the sound generating device corresponding to the call can be set differently from the volume of the sound output by the sound generating device corresponding to the target audio/video stream.
Here, multi-process operation means that for functions involving audio/video stream transmission, such as a video call, an audio/video stream, and the voice microphone, the earpiece and the microphone act as parallel channels. The user can freely view other audio/video streams during a video call; playback of the audio/video stream is compatible with the earpiece and the microphone, so the user can watch and answer at the same time, which resolves the conflict that arises when the user wants to view another audio/video stream during a video call.
Correspondingly, in an example, in the parallel working mode, the method for controlling the operation mode further includes:
receiving a target input from the user; and
in response to the target input, adjusting the volume output by the sound generating device corresponding to the target working state or to the target audio/video stream, so that the volume of the sound output by the sound generating device corresponding to the target working state differs from the volume of the sound output by the sound generating device corresponding to the target audio/video stream.
Specifically, the method comprises the following steps:
receiving a target input from the user; and
in response to the target input, adjusting the volume output by the earpiece corresponding to the target working state or to the target audio/video stream, so that the volume output by the earpiece corresponding to the voice call state or the video call state differs from the volume output by the earpiece corresponding to the video being played.
In embodiments of the present invention, the parallel working mode plays several audio tracks in parallel, and the mixed sound can interfere with the user. Therefore, in the parallel working mode, two volume control buttons replace the original single multimedia volume control: a volume adjustment key for the played video data and a volume adjustment key for the current video call. The volume of the sound output by each sound generating device is controlled according to the user's needs.
In the parallel working mode, the user can freely view other target audio/video streams while in a voice call or video call. However, because multiple sound sources can cause auditory interference, the user can use the volume keys to separately adjust, according to his or her needs and attention, the volume of the sound output by the sound generating device for the current voice or video call and for the additionally viewed target audio/video stream. In this way the voice or video call and the target audio/video stream can coexist, and various usage requirements of the user can be met.
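A minimal Kotlin sketch of such dual volume keys, assuming an Android-style AudioManager in which the call audio and the played stream are routed to separate volume streams (STREAM_VOICE_CALL and STREAM_MUSIC); the class and the key-handling structure are illustrative only.

import android.media.AudioManager

class ParallelModeVolumeController(private val audio: AudioManager) {

    // Key pressed on the "current video call volume" adjustment control.
    fun adjustCallVolume(up: Boolean) = step(AudioManager.STREAM_VOICE_CALL, up)

    // Key pressed on the "video data volume" adjustment control.
    fun adjustMediaVolume(up: Boolean) = step(AudioManager.STREAM_MUSIC, up)

    private fun step(streamType: Int, up: Boolean) {
        val current = audio.getStreamVolume(streamType)
        val max = audio.getStreamMaxVolume(streamType)
        val next = (current + if (up) 1 else -1).coerceIn(0, max)
        // 0 = no UI flags; call volume and media volume stay independent of each other.
        audio.setStreamVolume(streamType, next, 0)
    }
}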
In other embodiments of the present invention, the disabled working mode is the working mode in which only the sound generating device corresponding to the target audio/video stream operates, or only the sound generating device and the sound pickup device corresponding to the target working state operate.
As shown in fig. 3, controlling the sound generating device corresponding to the target audio/video stream to operate in the disabled working mode includes:
Step 301: the electronic device is in the target working mode;
Step 302: the device is in the disabled working mode;
Step 303: turn on the earpiece and the microphone corresponding to the voice call state or the video call state, and turn off the earpiece corresponding to the target audio/video stream; or
Step 304: turn off the earpiece and the microphone corresponding to the voice call state or the video call state, and turn on the earpiece corresponding to the target audio/video stream;
Step 305: control the sound generating device corresponding to the target audio/video stream to operate in the disabled working mode.
Specifically, in the voice call state or the video call state and in the disabled working mode, if the user wants to view the target audio/video stream during the call, the earpiece corresponding to the target audio/video stream is simply disabled.
In embodiments of the present invention, the disabled working mode means single-process operation with automatic disabling: priority is given to whichever of the audio/video stream, the voice call, or the video call the user is currently focused on, and only the currently focused item has earpiece and microphone permission. When the user is in a video call and an audio/video stream or another call the user wants to view would interfere with the current call party (for example, in a multi-party video conference), this mode automatically disables the earpiece and microphone permission of everything except the item the user is currently executing, making it easy for the user to focus on the current item of interest.
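A minimal Kotlin sketch of this focus-based disabling, assuming a hypothetical AudioChannel abstraction over the device's earpiece and microphone routing (not a real platform API):

data class AudioChannel(
    val name: String,
    val hasMicrophone: Boolean,          // call channels use the microphone; a played stream does not
    var earpieceEnabled: Boolean = false,
    var microphoneEnabled: Boolean = false
)

class DisabledModeController(private val channels: List<AudioChannel>) {
    // Grant earpiece/microphone permission only to the item the user is currently focused on.
    fun focusOn(focused: AudioChannel) {
        for (channel in channels) {
            val isFocused = channel === focused
            channel.earpieceEnabled = isFocused
            channel.microphoneEnabled = isFocused && channel.hasMicrophone
        }
    }
}

// Example: during a multi-party video conference the user opens another audio/video
// stream and focuses on it, so the conference loses earpiece and microphone permission.
fun main() {
    val conference = AudioChannel("video call", hasMicrophone = true)
    val stream = AudioChannel("target audio/video stream", hasMicrophone = false)
    DisabledModeController(listOf(conference, stream)).focusOn(stream)
}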
In some embodiments of the present invention, the voice call state or the video call state is a voice call or a video call in a first application, and the target audio/video stream is an audio/video stream played in the first application.
In other embodiments of the present invention, the voice call state or the video call state is a voice call or a video call in a first application, and the target audio/video stream is an audio/video stream played in a second application.
In some embodiments of the present invention, after the sound generating device corresponding to the target audio/video stream operates in the target working mode, the method for controlling the operation mode further includes:
detecting whether a target interface of a first application receives a voice chat message; and
when a voice chat message is received, converting the voice chat message into a text message and displaying the text message on the target interface.
In embodiments of the present invention, in addition to the two primary modes described above, i.e., the parallel working mode and the disabled working mode, a secondary mode may further be included. The secondary mode is a voice-to-text mode.
Correspondingly, in this example, converting the voice chat message into a text message includes:
obtaining the contact information corresponding to the voice chat message; and
converting the voice chat message into a text message when the contact information corresponding to the voice chat message is first target contact information.
Before displaying the text message on the target interface, the method further includes:
identifying whether the text message includes a target keyword; and
when the text message includes the target keyword, displaying the target keyword in a target direction of the voice chat message.
Identifying whether the text message includes the target keyword includes:
obtaining the contact information corresponding to the voice chat message; and
identifying whether the text message includes the target keyword when the contact information of the voice chat message is second target contact information.
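A minimal Kotlin sketch of this gating logic; the transcribe() callback stands in for a speech-recognition call, and the contact sets and keyword list are illustrative assumptions.

data class VoiceChatMessage(val contactId: String, val audio: ByteArray)

class VoiceToTextMode(
    private val importantContacts: Set<String>,     // "first target contact information"
    private val keywordScanContacts: Set<String>,   // "second target contact information"
    private val targetKeywords: List<String>,
    private val transcribe: (ByteArray) -> String
) {
    // Returns the text to display on the target interface, or null if nothing is shown.
    fun process(message: VoiceChatMessage): String? = when (message.contactId) {
        in importantContacts -> transcribe(message.audio)          // translate automatically
        in keywordScanContacts -> {
            val text = transcribe(message.audio)
            val hit = targetKeywords.firstOrNull { it in text }
            hit?.let { "[$it]" }                                   // show only the keyword beside the voice bar
        }
        else -> null
    }
}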
In the related art, voice translation requires the user to long-press a voice message and select conversion to turn it into text, which is inconvenient to operate; it is even more troublesome when the user needs to pay attention to video content and check incoming voice messages at the same time, as in a voice call or video call scene.
As shown in fig. 4, the voice-to-text mode includes:
Step 401: the electronic device is in the target working state;
Step 402: the device is in the target working mode;
Step 403: the device is in the voice-to-text mode;
Step 404: open a chat window of an application;
Step 405: identify whether a voice message exists;
Step 406: when a voice message exists, convert the voice message into text, i.e., translate the voice message;
Step 407: output the text message.
That is, in the video call state the voice translation mode is enabled; the electronic device identifies whether a voice message exists, automatically converts the received voice chat message into text, and finally outputs the text message.
In embodiments of the present invention, enabling the voice translation mode during a video call automatically converts voice chat messages into text, which makes it convenient for the user to check voice messages in scenes where listening is inconvenient, such as a voice call or a video call.
Correspondingly, in this example, whether to convert a voice message into a text message may be decided by keywords and/or by the importance rating of the contact.
The user may not want every voice message translated, but only want to focus on certain key information points; in that case the user can preset keywords. During translation, the background first recognizes the voice content; when an important keyword is included, the keyword is displayed beside the voice message, and the user can take the next action according to the prompt.
The user may also not be interested in all voice messages, and can predefine important contacts and general contacts:
Important contacts: when a voice message from an important contact appears in a chat group, it is translated automatically; in a one-to-one dialog box, the other party's voice messages are translated automatically if the other party is an important contact.
General contacts: for voice messages that do not come from an important contact, the background first recognizes the voice content; when a preset keyword is included, the keyword is displayed beside the voice message, and when the user taps it the message is processed in the same way as a voice message from a user-defined important contact, i.e., it is translated, so that the user can learn the more specific content of the message.
The user is provided with a preset label library, for example labels marked important, general, and urgent, and a user-defined label function is also provided so that the user can configure the label library and set the content and color of the labels.
After viewing the voice translation, the user can mark important information with an information label. The user can also select part of the translated text to generate a label that is displayed beside the voice bar; after the video ends, the user can review important event information according to these labels.
In addition, after a voice message has been converted into a text message and the user has viewed the text, the text message can be reverted back to a voice message if the user needs it. A revert function key may be provided; for example, a selection box appears after a long press, and when the revert option is selected, the text message disappears and the voice message is shown as unread and marked, where the marking may be a color mark (such as red) or an information label.
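A minimal Kotlin sketch of the label and revert functions described above; the Label and TranslatedMessage types, the preset label names, and the colors are assumptions for illustration.

data class Label(val text: String, val color: String)
data class TranslatedMessage(
    val voiceId: String,
    var text: String?,               // null once reverted back to voice
    var read: Boolean = true,
    val labels: MutableList<Label> = mutableListOf()
)

// Preset label library plus a user-defined label function.
val presetLabels = mutableListOf(Label("important", "red"), Label("general", "gray"),
                                 Label("urgent", "orange"))
fun addCustomLabel(text: String, color: String) = presetLabels.add(Label(text, color))

// Mark selected translated text so it can be reviewed after the video call ends.
fun mark(message: TranslatedMessage, selected: String, label: Label) {
    message.labels.add(Label("${label.text}: $selected", label.color))
}

// Revert: the text disappears and the voice message is shown as unread again.
fun revertToVoice(message: TranslatedMessage) {
    message.text = null
    message.read = false
}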
Fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 5, the electronic device 500 includes:
an obtaining module 501, configured to obtain, when the electronic device is in a target working state, a target working mode corresponding to contact information based on the contact information corresponding to the target working state, where the target working state is a voice call state or a video call state; and
a control module 502, configured to, upon receiving an operation of playing a target audio/video stream, control the sound generating device corresponding to the target audio/video stream to operate in the target working mode.
In the embodiments of the present invention, when the electronic device is in the target working state, the target working mode corresponding to the contact information is obtained based on the contact information corresponding to the target working state; then, when the electronic device plays the target audio/video stream, the sound generating device corresponding to the target audio/video stream operates in the target working mode. The target working mode can therefore be determined based on the contact information, which better meets the user's requirements for the voice or video call and for playing the target audio/video stream, and the electronic device can also play the target audio/video stream during the voice call or video call, thereby improving the user experience.
Optionally, the contact information includes at least one of: priority information of the contact, information on the number of contacts, and information on the group to which the contact belongs.
Optionally, the target working mode includes either a parallel working mode or a disabled working mode;
the parallel working mode is a working mode in which the sound generating device and/or the sound pickup device corresponding to the voice call state or the video call state and the sound generating device corresponding to the target audio/video stream operate simultaneously; and
the disabled working mode is a working mode in which only the sound generating device corresponding to the target audio/video stream operates, or only the sound generating device and the sound pickup device corresponding to the voice call state or the video call state operate.
Optionally, the target working mode is the parallel working mode, and the electronic device further includes:
a receiving module, configured to receive a target input from the user; and
an adjusting module, configured to, in response to the target input, adjust the volume output by the sound generating device corresponding to the target working state or to the target audio/video stream, so that the volume output by the sound generating device corresponding to the voice call state or the video call state differs from the volume of the sound output by the sound generating device corresponding to the target audio/video stream.
Optionally, after the target audio/video stream is operated in the target operating mode, the electronic device further includes:
the detection module is used for detecting whether a target interface of the first application program receives a voice chat message;
the conversion module is used for converting the voice chat message into a text message under the condition of receiving the voice chat message;
and the display module is used for displaying the text message on the target interface.
Optionally, the conversion module is specifically configured to:
acquiring contact person information corresponding to the voice chat message;
and converting the voice chat message into a text message under the condition that the contact information of the voice chat message is the first target contact information.
Optionally, before displaying the text message on the target interface, the electronic device further includes:
the identification module is used for identifying whether the text message comprises the target keyword or not;
the display module is specifically configured to: in the case where the target keyword is included in the text message, the target keyword is displayed at the voice chat message.
The electronic device provided in this embodiment of the present invention can implement each process implemented by the electronic device in the method embodiment of fig. 1; to avoid repetition, details are not described here again.
In the embodiments of the present invention, when the electronic device is in the target working state, the target working mode corresponding to the contact information is obtained based on the contact information corresponding to the target working state; then, when the electronic device plays the target audio/video stream, the sound generating device corresponding to the target audio/video stream operates in the target working mode. The target working mode can therefore be determined based on the contact information, which better meets the user's requirements for the voice or video call and for playing the target audio/video stream, and the electronic device can also play the target audio/video stream during the voice call or video call, thereby improving the user experience.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or use a different arrangement of components. In embodiments of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 610 is configured to: when the electronic device is in a target working state, obtain a target working mode corresponding to contact information based on the contact information corresponding to the target working state, where the target working state is a voice call state or a video call state;
and, upon receiving an operation of playing a target audio/video stream, control the sound generating device corresponding to the target audio/video stream to operate in the target working mode.
In the embodiments of the present invention, when the electronic device is in the target working state, the target working mode corresponding to the contact information is obtained based on the contact information corresponding to the target working state; then, when the electronic device plays the target audio/video stream, the sound generating device corresponding to the target audio/video stream operates in the target working mode. The target working mode can therefore be determined based on the contact information, which better meets the user's requirements for the voice or video call and for playing the target audio/video stream, and the electronic device can also play the target audio/video stream during the voice call or video call, thereby improving the user experience.
It should be understood that, in embodiments of the present invention, the radio frequency unit 601 may be used to receive and send signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards it to the processor 610 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Furthermore, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 602, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output related to a specific function performed by the electronic apparatus 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a graphics processing unit (GPU) 6041 and a microphone 6042. The graphics processor 6041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode, and the processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or another storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data; in the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601 and output.
The electronic device 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the electronic apparatus 600 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. Touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 6071 using a finger, stylus, or any suitable object or accessory). The touch panel 6071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 610, receives a command from the processor 610, and executes the command. In addition, the touch panel 6071 can be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although the touch panel 6071 and the display panel 6061 are shown in fig. 6 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the electronic device, and this is not limited here.
The interface unit 608 is an interface for connecting an external device to the electronic apparatus 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic device 600 or may be used to transmit data between the electronic device 600 and external devices.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 609, and calling data stored in the memory 609, thereby performing overall monitoring of the electronic device. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The electronic device 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 may be logically connected to the processor 610 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
In addition, the electronic device 600 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, comprising a processor 610, a memory 609, and a computer program stored in the memory 609 and executable on the processor 610, where the computer program, when executed by the processor 610, implements each process of the above embodiment of the method for controlling an operation mode and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements each process of the above embodiment of the method for controlling an operation mode and can achieve the same technical effect, which is not repeated here to avoid repetition. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (16)

1. A method for controlling an operation mode, applied to an electronic device, characterized by comprising the following steps:
under the condition that the electronic equipment is in a target working state, acquiring a target working mode corresponding to contact information based on the contact information corresponding to the target working state, wherein the target working state is a voice call state or a video call state;
and under the condition of receiving an operation of playing a target audio and video stream, controlling a sound generating device corresponding to the target audio and video stream to operate in the target working mode.
2. The method of claim 1, wherein the contact information comprises at least one of: the priority information of the contact persons, the number information of the contact persons and the group information of the contact persons.
3. The method of claim 1, wherein the target operating mode comprises any one of a parallel operating mode and a disabled operating mode;
the parallel working mode is a working mode in which the sound generating device and/or the sound pickup device corresponding to the target working state and the sound generating device corresponding to the target audio and video stream run simultaneously;
the forbidden working mode is the working mode of the sound generating device corresponding to the target audio and video stream or the working mode of the sound generating device and the sound pickup device corresponding to the target working state.
4. The method of claim 3, wherein the target operating mode is the parallel operating mode, the method further comprising:
receiving a target input of a user;
and in response to the target input, adjusting the volume output by the sound generating device corresponding to the target working state or to the target audio/video stream, so that the volume of the sound output by the sound generating device corresponding to the target working state differs from the volume of the sound output by the sound generating device corresponding to the target audio/video stream.
5. The method of claim 1, wherein after operating the target audio-video stream in the target operating mode, the method further comprises:
detecting whether a target interface of a first application program receives a voice chat message;
and in a case where the voice chat message is received, converting the voice chat message into a text message, and displaying the text message on the target interface.
6. The method of claim 5, wherein converting the voice chat message into a text message comprises:
acquiring contact information corresponding to the voice chat message;
and converting the voice chat message into a text message in a case where the contact information of the voice chat message is the first target contact information.
7. The method of claim 5, wherein prior to displaying the text message on the target interface, the method further comprises:
identifying whether a target keyword is included in the text message;
and the displaying the text message on the target interface comprises:
in a case where the target keyword is included in the text message, displaying the target keyword at the voice chat message.
8. An electronic device, comprising:
an acquisition module, configured to acquire, in a case where the electronic device is in a target operating state, a target operating mode corresponding to contact information based on the contact information corresponding to the target operating state, wherein the target operating state is a voice call state or a video call state;
and a control module, configured to control, in a case where an operation of playing a target audio and video stream is received, a sound generating device corresponding to the target audio and video stream to operate in the target operating mode.
9. The electronic device of claim 8, wherein the contact information comprises at least one of: priority information of the contact, number information of the contact, and group information of the contact.
10. The electronic device of claim 8, wherein the target operating mode comprises any one of a parallel operating mode and a disabled operating mode;
the parallel operating mode is an operating mode in which the sound generating device and/or the sound pickup device corresponding to the target operating state and the sound generating device corresponding to the target audio and video stream operate simultaneously;
and the disabled operating mode is an operating mode in which the sound generating device corresponding to the target audio and video stream is disabled, or in which the sound generating device and the sound pickup device corresponding to the target operating state are disabled.
11. The electronic device of claim 10, wherein the target operating mode is the parallel operating mode, the electronic device further comprising:
a receiving module, configured to receive a target input of a user;
and an adjusting module, configured to adjust, in response to the target input, the volume output by the sound generating device corresponding to the target operating state or by the sound generating device corresponding to the target audio and video stream, so that the volume of the sound output by the sound generating device corresponding to the target operating state is different from the volume of the sound output by the sound generating device corresponding to the target audio and video stream.
12. The electronic device of claim 8, wherein, after the target audio and video stream is played in the target operating mode, the electronic device further comprises:
a detection module, configured to detect whether a target interface of a first application program receives a voice chat message;
a conversion module, configured to convert the voice chat message into a text message in a case where the voice chat message is received;
and a display module, configured to display the text message on the target interface.
13. The electronic device of claim 12, wherein the conversion module is specifically configured to:
acquire contact information corresponding to the voice chat message;
and convert the voice chat message into a text message in a case where the contact information of the voice chat message is the first target contact information.
14. The electronic device of claim 12, wherein prior to displaying the text message on the target interface, the electronic device further comprises:
an identification module, configured to identify whether a target keyword is included in the text message;
and the display module is specifically configured to display the target keyword at the voice chat message in a case where the target keyword is included in the text message.
15. An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method for controlling an operating mode according to any one of claims 1 to 7.
16. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the steps of the method for controlling an operating mode according to any one of claims 1 to 7.
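
For illustration only, the following Python sketch (all names hypothetical: OperatingMode, AudioDevice, WorkModeController) traces the flow recited in claims 1-3: while a voice or video call is active, a per-contact target operating mode is looked up, and when playback of another audio and video stream is requested, that mode either lets both audio paths run in parallel or disables one side's devices. It is a reading aid under those assumptions, not the claimed implementation.

```python
from enum import Enum, auto


class OperatingMode(Enum):
    PARALLEL = auto()  # call audio and the stream's audio run together
    DISABLED = auto()  # one side's devices are switched off


class AudioDevice:
    """Stand-in for a sound generating device (speaker) or sound pickup device (microphone)."""

    def __init__(self, name: str):
        self.name = name
        self.enabled = True

    def set_enabled(self, on: bool) -> None:
        self.enabled = on
        print(f"{self.name}: {'on' if on else 'off'}")


class WorkModeController:
    """Hypothetical controller tracing claims 1-3; not the claimed implementation."""

    def __init__(self, contact_modes: dict, default_mode: OperatingMode = OperatingMode.PARALLEL):
        # contact_modes maps a contact identifier to its target operating mode
        # (the per-contact information of claim 2).
        self.contact_modes = contact_modes
        self.default_mode = default_mode
        self.target_mode = None

    def on_call_started(self, contact_id: str) -> None:
        # Claim 1, step 1: in the target operating state (a call), look up the
        # target operating mode from the contact information.
        self.target_mode = self.contact_modes.get(contact_id, self.default_mode)

    def on_play_av_stream(self, call_speaker: AudioDevice, call_mic: AudioDevice,
                          stream_speaker: AudioDevice, disable_call_side: bool = False) -> None:
        # Claim 1, step 2, with the two modes of claim 3.
        if self.target_mode is OperatingMode.PARALLEL:
            for device in (call_speaker, call_mic, stream_speaker):
                device.set_enabled(True)  # both audio paths operate simultaneously
        elif self.target_mode is OperatingMode.DISABLED:
            if disable_call_side:
                call_speaker.set_enabled(False)  # variant 2: disable the call's devices
                call_mic.set_enabled(False)
            else:
                stream_speaker.set_enabled(False)  # variant 1: disable only the stream's speaker


# Example: the mode stored for "colleague" mutes the stream while the call continues.
controller = WorkModeController({"colleague": OperatingMode.DISABLED})
controller.on_call_started("colleague")
controller.on_play_av_stream(AudioDevice("call speaker"), AudioDevice("call mic"),
                             AudioDevice("stream speaker"))  # prints: stream speaker: off
```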
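
Claim 4's volume handling can be pictured with the small helper below. It assumes each output path exposes an independently settable, normalized volume; that assumption, and the function name differentiate_volumes, are introduced here for illustration and are not recited in the claims.

```python
def differentiate_volumes(call_volume: float, stream_volume: float,
                          adjust_stream: bool = True, min_gap: float = 0.2):
    """Return (call_volume, stream_volume) separated by at least min_gap.

    Volumes are normalized to 0.0-1.0. Which side gets adjusted follows the
    user's target input; here a boolean stands in for that input.
    """
    if abs(call_volume - stream_volume) >= min_gap:
        return call_volume, stream_volume  # already clearly distinguishable
    if adjust_stream:
        stream_volume = max(0.0, call_volume - min_gap)  # duck the stream under the call
    else:
        call_volume = min(1.0, stream_volume + min_gap)  # raise the call above the stream
    return round(call_volume, 2), round(stream_volume, 2)


print(differentiate_volumes(0.6, 0.55))        # -> (0.6, 0.4): stream ducked below the call
print(differentiate_volumes(0.5, 0.5, False))  # -> (0.7, 0.5): call raised above the stream
```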
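
Claims 5-7 chain three steps: detect a voice chat message on the target interface, convert it to text only when it comes from the first target contact, and surface any target keyword at the message. The sketch below strings these together under that reading; transcribe is a hypothetical stand-in for whatever speech-to-text facility the device provides, not an API named by the patent.

```python
from typing import Callable, Optional


def handle_voice_chat(message_audio: bytes,
                      sender: str,
                      target_contact: str,
                      keywords: list[str],
                      transcribe: Callable[[bytes], str]) -> Optional[dict]:
    """Convert a voice chat message to text and flag target keywords (claims 5-7)."""
    # Claim 6: only messages whose contact information matches the first
    # target contact are converted.
    if sender != target_contact:
        return None

    text = transcribe(message_audio)  # claim 5: voice chat message -> text message

    # Claim 7: identify target keywords so they can be displayed at the message.
    hits = [kw for kw in keywords if kw in text]
    return {"text": text, "highlight": hits}


# Example with a fake transcriber standing in for a real speech-to-text service.
result = handle_voice_chat(b"<audio bytes>",
                           sender="alice",
                           target_contact="alice",
                           keywords=["meeting", "urgent"],
                           transcribe=lambda _audio: "urgent: meeting moved to 3 pm")
print(result)  # {'text': 'urgent: meeting moved to 3 pm', 'highlight': ['meeting', 'urgent']}
```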
CN202010244722.7A 2020-03-31 2020-03-31 Method for controlling operation mode, electronic device, and storage medium Pending CN111491058A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010244722.7A CN111491058A (en) 2020-03-31 2020-03-31 Method for controlling operation mode, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN111491058A 2020-08-04

Family

ID=71794472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010244722.7A Pending CN111491058A (en) 2020-03-31 2020-03-31 Method for controlling operation mode, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN111491058A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100321464A1 (en) * 2009-06-17 2010-12-23 Jiang Chaoqun Method, device, and system for implementing video call
CN106161724A (en) * 2015-03-23 2016-11-23 阿里巴巴集团控股有限公司 Audio output control method and device
CN108880978A (en) * 2017-05-08 2018-11-23 王正伟 Contact categories management-control method
CN107104887A (en) * 2017-06-01 2017-08-29 珠海格力电器股份有限公司 A kind of instant message based reminding method, device and its user terminal
CN107861704A (en) * 2017-10-26 2018-03-30 珠海市魅族科技有限公司 Control method for playing back, device, terminal and readable storage medium storing program for executing

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112672088A (en) * 2020-12-25 2021-04-16 维沃移动通信有限公司 Video call method and device
WO2022135411A1 (en) * 2020-12-25 2022-06-30 维沃移动通信有限公司 Video call method and device
CN113518153A (en) * 2021-04-25 2021-10-19 上海淇玥信息技术有限公司 Method and device for identifying user call response state and electronic equipment

Similar Documents

Publication Publication Date Title
CN109525707B (en) Audio playing method, mobile terminal and computer readable storage medium
CN109078319B (en) Game interface display method and terminal
CN108540655B (en) Caller identification processing method and mobile terminal
US20200257433A1 (en) Display method and mobile terminal
CN109874038B (en) Terminal display method and terminal
CN108874352B (en) Information display method and mobile terminal
CN111666009B (en) Interface display method and electronic equipment
CN111370018B (en) Audio data processing method, electronic device and medium
CN109412932B (en) Screen capturing method and terminal
CN108551534B (en) Method and device for multi-terminal voice call
CN110177296A (en) A kind of video broadcasting method and mobile terminal
CN110913067A (en) Information sending method and electronic equipment
WO2020220990A1 (en) Receiver control method and terminal
CN109981904B (en) Volume control method and terminal equipment
WO2021190545A1 (en) Call processing method and electronic device
CN108074574A (en) Audio-frequency processing method, device and mobile terminal
CN107911540A (en) Profile switching method and terminal device
CN109525712A (en) A kind of information processing method, mobile terminal and mobile unit
CN109982273B (en) Information reply method and mobile terminal
CN109949809B (en) Voice control method and terminal equipment
CN111491058A (en) Method for controlling operation mode, electronic device, and storage medium
CN108462794B (en) Information display method and mobile terminal
CN108093119B (en) Strange incoming call number marking method and mobile terminal
CN110913070B (en) Call method and terminal equipment
CN108347527B (en) Incoming call prompting method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200804