CN111475133A - Method, device and equipment for synchronizing sound state and storage medium - Google Patents


Info

Publication number
CN111475133A
CN111475133A CN202010353843.5A
Authority
CN
China
Prior art keywords
state
sound
intelligent device
sound state
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010353843.5A
Other languages
Chinese (zh)
Inventor
黄振宇 (Huang Zhenyu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Guangzhou Shirui Electronics Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN202010353843.5A
Publication of CN111475133A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1423: controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/1454: involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F 3/16: Sound input; sound output
    • G06F 3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Embodiments of the present application provide a method, apparatus, device, and storage medium for synchronizing sound states. In the method, when a sound state adjustment instruction is detected during screen projection between a first smart device and a second smart device, the sound state of the first smart device is adjusted to a target state according to the instruction, and a sound state synchronization instruction is sent to the second smart device so that it adjusts its sound state to the same target state. The sound states of the two devices are thereby kept synchronized throughout the screen projection process.

Description

Method, device and equipment for synchronizing sound state and storage medium
Technical Field
The embodiment of the application relates to the technical field of communication, in particular to a method, a device, equipment and a storage medium for synchronizing sound states.
Background
With the development of science and technology, screen projection is used more and more widely. Screen projection mirrors the screen of one smart device onto the screen of another; for example, the screen of a personal computer (PC) may be projected onto a smart tablet.
Taking the projection of a PC screen onto a smart tablet as an example, the prior art has the following problem: when the smart tablet is switched to a mute state, the PC cannot sense this and continues to play sound; conversely, when the PC is switched to a mute state, the smart tablet cannot sense this, and the resulting absence of sound may be mistaken for a malfunction.
Therefore, the existing screen projection technology cannot keep the sound states of two smart devices synchronized.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for synchronizing sound states, and is used for solving the problem that the sound states cannot be synchronized between two intelligent devices in the existing screen projection technology.
In a first aspect, an embodiment of the present application provides a method for synchronizing sound states, where the method is applied to a first smart device, and the method includes:
detecting a sound state adjusting instruction in the screen projection process of the first intelligent device and the second intelligent device; the sound state adjusting instruction is used for indicating that the sound state of the first intelligent device is adjusted to be a target state, and the target state is a mute state or a non-mute state;
adjusting the sound state of the first intelligent device to the target state according to the sound state adjusting instruction;
sending a sound state synchronization instruction to the second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
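As a concrete illustration only (the patent does not specify any implementation), the three steps of the first aspect can be sketched in Python; all names here are hypothetical:

```python
# Hypothetical sketch of the first-aspect method (sender side).
# All identifiers are illustrative, not taken from the patent.
from dataclasses import dataclass, field

MUTED, UNMUTED = "muted", "unmuted"

@dataclass
class FirstSmartDevice:
    sound_state: str = UNMUTED
    sent_instructions: list = field(default_factory=list)

    def on_sound_state_adjustment(self, target_state: str) -> None:
        # Step 1: an adjustment instruction has been detected during
        # screen projection; its target state is mute or non-mute.
        assert target_state in (MUTED, UNMUTED)
        # Step 2: adjust the local sound state to the target state.
        self.sound_state = target_state
        # Step 3: send a sound state synchronization instruction so the
        # second smart device adjusts to the same target state.
        self.sent_instructions.append(("SYNC_SOUND_STATE", target_state))

device = FirstSmartDevice()
device.on_sound_state_adjustment(MUTED)
```

After the call, the local state is muted and one synchronization instruction is queued for the peer, mirroring the order of the claimed steps.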
In a possible implementation manner, the sending a sound state synchronization instruction to the second smart device includes:
sending a Human Interface Device (HID) keyboard instruction to the second intelligent device through an intermediate Integrated Circuit (IC) chip in the first intelligent device; wherein the HID keyboard instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
In a possible implementation manner, the sending a sound state synchronization instruction to the second smart device includes:
sending the sound state synchronization instruction to an intermediate Integrated Circuit (IC) chip in the second intelligent device; wherein the intermediate IC chip has a sound card function.
In one possible implementation, the method further includes:
and adjusting the sound state prompt message in the first intelligent device according to the sound state adjustment instruction.
In a second aspect, an embodiment of the present application provides a method for synchronizing sound states, where the method is applied to a second smart device, and the method includes:
receiving a sound state synchronization instruction sent by a first intelligent device in the screen projection process of the first intelligent device and a second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state;
and adjusting the sound state of the second intelligent device to the target state according to the sound state synchronization instruction.
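The receiving side of the second aspect can likewise be sketched (illustrative names only; the patent does not define an instruction format):

```python
# Hypothetical sketch of the second-aspect method (receiver side).
class SecondSmartDevice:
    def __init__(self) -> None:
        self.sound_state = "unmuted"

    def on_sync_instruction(self, instruction: tuple) -> None:
        # Step 1: receive the sound state synchronization instruction
        # sent by the first smart device during screen projection.
        kind, target_state = instruction
        # Step 2: adjust the local sound state to the target state.
        if kind == "SYNC_SOUND_STATE":
            self.sound_state = target_state

peer = SecondSmartDevice()
peer.on_sync_instruction(("SYNC_SOUND_STATE", "muted"))
```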
In a possible implementation manner, the receiving a sound state synchronization instruction sent by a first smart device includes:
receiving a human-computer interface device HID keyboard instruction sent by an intermediate integrated circuit IC chip in the first intelligent device; wherein the HID keyboard instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
In a possible implementation manner, the receiving a sound state synchronization instruction sent by a first smart device includes:
receiving the sound state synchronization instruction sent by the first intelligent device through an intermediate Integrated Circuit (IC) chip in the second intelligent device; wherein the intermediate IC chip has a sound card function.
In one possible implementation, the method further includes:
and adjusting the sound state prompt information in the second intelligent device according to the sound state synchronization instruction.
In a third aspect, an embodiment of the present application provides a device for synchronizing sound states, where the device is applied to a first smart device, and the device includes:
the detection module is used for detecting a sound state adjustment instruction in the screen projection process of the first intelligent device and the second intelligent device; the sound state adjusting instruction is used for indicating that the sound state of the first intelligent device is adjusted to be a target state, and the target state is a mute state or a non-mute state;
the first adjusting module is used for adjusting the sound state of the first intelligent device to the target state according to the sound state adjusting instruction;
the sending module is used for sending a sound state synchronization instruction to the second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
In a fourth aspect, an embodiment of the present application provides an apparatus for synchronizing sound states, where the apparatus is applied to a second smart device, and the apparatus includes:
the receiving module is used for receiving a sound state synchronization instruction sent by a first intelligent device in the screen projection process of the first intelligent device and a second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state;
and the first adjusting module is used for adjusting the sound state of the second intelligent device to the target state according to the sound state synchronization instruction.
In a fifth aspect, an embodiment of the present application provides a first smart device, including: a first processing chip and a second processing chip;
wherein the first processing chip is configured to:
detecting a sound state adjusting instruction in the screen projection process of the first intelligent device and the second intelligent device; the sound state adjusting instruction is used for indicating that the sound state of the first intelligent device is adjusted to be a target state, and the target state is a mute state or a non-mute state;
adjusting the sound state of the first intelligent device to the target state according to the sound state adjusting instruction;
the second processing chip is configured to: sending a sound state synchronization instruction to the second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
In one possible implementation, the first processing chip is further configured to:
and adjusting the sound state prompt message in the first intelligent device according to the sound state adjustment instruction.
In a sixth aspect, an embodiment of the present application provides a second smart device, including: a first processing chip and a second processing chip;
wherein the second processing chip is configured to: receiving a sound state synchronization instruction sent by a first intelligent device in the screen projection process of the first intelligent device and a second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state;
the first processing chip is configured to: and adjusting the sound state of the second intelligent device to the target state according to the sound state synchronization instruction.
In one possible implementation, the first processing chip is further configured to:
and adjusting the sound state prompt information in the second intelligent device according to the sound state synchronization instruction.
In a seventh aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any of the first aspect above or the method of any of the second aspect above when executing the computer program.
In an eighth aspect, embodiments of the present application provide a computer-readable storage medium, in which computer-executable instructions are stored, and when executed by a processor, the computer-executable instructions are configured to implement the method of any one of the above first aspects or the method of any one of the above second aspects.
In the method, apparatus, device, and storage medium for synchronizing sound states provided by the embodiments of the present application, when a sound state adjustment instruction is detected during screen projection between the first smart device and the second smart device, the sound state of the first smart device is adjusted to a target state according to the instruction, and a sound state synchronization instruction is sent to the second smart device so that it adjusts its sound state to the same target state. The sound states of the two devices are thereby kept synchronized throughout the screen projection process.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present application; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 2 is a flowchart illustrating a method for synchronizing sound states according to an embodiment of the present application;
FIG. 3 is a first schematic structural diagram of a system for synchronizing audio states according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a system for synchronizing sound states according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a method for synchronizing sound states according to another embodiment of the present application;
FIG. 6 is a flowchart illustrating a method for synchronizing sound states according to another embodiment of the present application;
FIG. 7 is a schematic structural diagram of a device for synchronizing audio states according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a device for synchronizing sound states according to another embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a first smart device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First, an application scenario and a part of vocabulary related to the embodiments of the present application will be described.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application. As shown in fig. 1, the application scenario may include, but is not limited to, a first smart device 10 and a second smart device 11, between which screen projection can be performed.
It should be understood that either device may project to the other: the first smart device 10 may project its screen to the second smart device 11, or the second smart device 11 may project its screen to the first smart device 10 (fig. 1 illustrates the latter case). The projection direction is determined by the actual roles of the two devices.
For example, any smart device (e.g., the first smart device, or the second smart device) referred to in the embodiments of the present application may include, but is not limited to, any of the following: mobile phones, notebook computers, tablet computers (or referred to as smart tablets), desktop computers, PCs, smart televisions.
For example, if the first smart device 10 is a PC and the second smart device 11 is a tablet computer, the first smart device 10 may project a screen to the second smart device 11.
For another example, if the first smart device 10 is a tablet computer and the second smart device 11 is a PC, the second smart device 11 may project a screen to the first smart device 10.
As shown in fig. 1, the second smart device 11 may send a display signal to the first smart device 10, and the first smart device 10 renders its display interface according to that signal. When the first smart device 10 detects a touch operation on an icon in its display interface, it sends the corresponding touch signal to the second smart device 11; the second smart device 11 then updates the display signal it sends according to the touch signal, so that the content corresponding to the touch operation is shown in the display interface of the first smart device 10.
In the prior art, when the screen of a PC is projected onto a smart tablet and the smart tablet is adjusted to a mute state, the PC cannot sense this and continues to play sound; when the PC is adjusted to a mute state, the smart tablet cannot sense this, and the absence of sound may be mistaken for a malfunction. It can be seen that in the existing screen projection technology the two smart devices cannot synchronize their sound states (which may include, but are not limited to, a mute state or a non-mute state).
To address this problem, in the embodiments of the present application, when either smart device detects a sound state adjustment instruction during screen projection, it not only adjusts its own sound state but also sends a sound state synchronization instruction to the peer device (i.e., the other smart device), so that the peer adjusts its sound state to match. The sound states of the two devices are thus kept synchronized. The sound states may include, but are not limited to, a mute state or a non-mute state.
The target sound state related to the embodiments of the present application may include, but is not limited to: a mute state or a non-mute state.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart illustrating a method for synchronizing sound states according to an embodiment of the present application. The method may be executed by the first smart device or by a sound state synchronization apparatus within the first smart device; the apparatus may be implemented by software and/or hardware. As shown in fig. 2, the method for synchronizing sound states provided by an embodiment of the present application may include:
step S201, detecting a sound state adjusting instruction in the screen projection process of the first intelligent device and the second intelligent device.
In this step, during screen projection between the first smart device and the second smart device, the first smart device may detect in real time whether a sound state adjustment instruction has been received; when one is detected, execution proceeds to step S202. The sound state adjustment instruction instructs the first smart device to adjust its sound state to a target state, where the target state may include, but is not limited to, a mute state or a non-mute state.
It should be understood that if the first smart device is in a mute state before the sound state adjustment instruction is detected, the target state is a non-mute state (i.e., mute is exited); if it is in a non-mute state before the instruction is detected, the target state is a mute state (i.e., mute is turned on).
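This toggle rule can be expressed as a one-line helper (illustrative only, not from the patent):

```python
# Illustrative helper: the target state is always the opposite of the
# sound state held before the adjustment instruction was detected.
def target_state(current_state: str) -> str:
    return "unmuted" if current_state == "muted" else "muted"
```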
In a possible implementation manner, if it is detected that a user presses or clicks a mute button of the first smart device, it may be determined that a sound state adjustment instruction is detected; wherein, the mute button may include but is not limited to: a physical key, or a virtual key in the first smart device.
In another possible implementation manner, if it is detected that a user operates a mute key icon of the first smart device through a user interface (UI), it may be determined that a sound state adjustment instruction is detected.
In another possible implementation manner, if it is detected that the user operates the mute key of the first smart device through the remote control device, it may be determined that the sound state adjustment instruction is detected.
Of course, the sound state adjustment instruction may also be detected in other manners, which is not limited in the embodiment of the present application.
Step S202, adjusting the sound state of the first intelligent device to be a target state according to the sound state adjusting instruction.
In this step, the sound state of the first smart device is adjusted to the target state according to the sound state adjustment instruction detected in step S201.
For example, if the sound state of the first smart device is a mute state before the sound state adjustment instruction is detected, the sound state of the first smart device may be adjusted to a target state according to the sound state adjustment instruction, where the target state may be an un-mute state, so that the first smart device exits the mute state.
For example, if the sound state of the first smart device is not a mute state before the sound state adjustment instruction is detected, the sound state of the first smart device may be adjusted to a target state according to the sound state adjustment instruction, where the target state may be a mute state, so that the first smart device turns on the mute state.
Optionally, according to the sound state adjustment instruction, the sound state prompt information in the first intelligent device may also be adjusted, so as to remind the user that the sound state of the first intelligent device changes.
For example, if the sound state of the first smart device was a mute state before the sound state adjustment instruction was detected (i.e., the target state is a non-mute state), the adjusted sound state prompt information in the first smart device indicates that the device is no longer muted; for example, a "mute" icon may be hidden, or an "exited mute state" prompt may be displayed.
Conversely, if the sound state of the first smart device was a non-mute state before the sound state adjustment instruction was detected, the adjusted sound state prompt information in the first smart device indicates that the device is now in a mute state.
Step S203, sending a sound state synchronization instruction to the second intelligent device; and the sound state synchronization instruction is used for instructing the second intelligent equipment to adjust the sound state to the target state.
In this step, a sound state synchronization instruction may be sent to the second smart device, so that the second smart device adjusts the sound state to the target state according to the sound state synchronization instruction, and the sound states of the first smart device and the second smart device may be kept synchronized.
It should be noted that, in the embodiment of the present application, keeping the sound states of the first smart device and the second smart device synchronized refers to: when the sound state of any one of the first intelligent device and the second intelligent device is adjusted, the other intelligent device correspondingly adjusts the sound state to be the same as the sound state of any one of the intelligent devices.
The following embodiments of the present application describe an implementation manner of sending the sound state synchronization instruction to the second smart device in step S203.
In a possible implementation manner, if the second smart device projects its screen to the first smart device, a Human Interface Device (HID) keyboard instruction is sent to the second smart device through an intermediate Integrated Circuit (IC) chip in the first smart device; the HID keyboard instruction instructs the second smart device to adjust its sound state to the target state.
Fig. 3 is a schematic structural diagram of a system for synchronizing sound states according to an embodiment of the present disclosure, as shown in fig. 3, a second smart device 30 may send a display signal to a first smart device 31 to implement screen projection to the first smart device; for example, the second smart device 30 is a PC, and the first smart device 31 is a tablet computer.
Optionally, the first smart device 31 may include, but is not limited to: a main IC chip 311 and an intermediate IC chip 312. The main IC chip 311 may be the main processing chip of the first smart device, such as a system-on-a-chip (SoC); the intermediate IC chip 312 may be the sound processing chip of the first smart device, which has both a sound card function and an HID function.
Illustratively, the main IC chip 311 and the intermediate IC chip 312 may be connected via a Universal Asynchronous Receiver/Transmitter (UART), and the intermediate IC chip 312 may be connected to the second smart device 30 via a Universal Serial Bus (USB).
In this implementation manner, if the main IC chip 311 in the first smart device detects the sound state adjustment instruction while the second smart device is projecting its screen to the first smart device, an HID keyboard instruction instructing the second smart device to adjust its sound state to the target state may be sent to the second smart device through the intermediate IC chip 312. The second smart device recognizes the HID keyboard instruction and adjusts its sound state accordingly. Note that, on receiving the HID keyboard instruction, the second smart device treats it as if the user had entered a sound state adjustment instruction through its own keyboard, and therefore adjusts its sound state.
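For context, real USB multimedia "keyboards" conventionally signal mute through the HID Consumer Page (0x0C), Mute usage 0xE2, per the USB HID Usage Tables, rather than a regular key code. The patent does not specify the report format of the intermediate IC chip, so the following two-byte report layout is an assumption for illustration:

```python
# Illustrative sketch of a two-byte HID Consumer Control report carrying
# the Mute usage (0xE2 on Consumer Page 0x0C, USB HID Usage Tables).
# The actual report descriptor of the intermediate IC chip is not given
# in the patent; this layout is an assumption.
MUTE_USAGE = 0xE2

def consumer_report(usage_id: int) -> bytes:
    # 16-bit usage ID, little-endian; a zero usage releases the "key".
    return usage_id.to_bytes(2, "little")

press = consumer_report(MUTE_USAGE)  # "mute key" pressed
release = consumer_report(0x00)      # "mute key" released
```

Sending the press report followed by the release report emulates a single tap of a keyboard mute key on the receiving host.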
It should be noted that the main IC chip 311 in the first smart device may also send a touch signal to the second smart device through the intermediate IC chip 312, so that the second smart device updates the display signal sent to the first smart device according to the touch signal.
In another possible implementation manner, if the first intelligent device casts a screen to the second intelligent device, a sound state synchronization instruction is sent to an intermediate Integrated Circuit (IC) chip in the second intelligent device; wherein the intermediate IC chip has a sound card function.
Fig. 4 is a schematic structural diagram of a system for synchronizing sound states according to an embodiment of the present application. As shown in fig. 4, a first smart device 40 may send a display signal to a second smart device 41 to project its screen to the second smart device; for example, the first smart device 40 is a PC and the second smart device 41 is a tablet computer.
Optionally, the second smart device 41 may include, but is not limited to, a main IC chip 411 and an intermediate IC chip 412. The main IC chip 411 may be the main processing chip of the second smart device, such as an SoC (System-on-a-Chip); the intermediate IC chip 412 may be the sound processing chip of the second smart device, which has both a sound card function and an HID function.
Illustratively, the main IC chip 411 and the intermediate IC chip 412 may be connected through a UART, and the intermediate IC chip 412, which has a sound card function, may be connected to the first smart device 40 through USB (this is equivalent to the intermediate IC chip 412 acting as a sound card IC of the first smart device 40, so that the first smart device 40 can operate on the intermediate IC chip 412).
In this implementation, if a sound state adjustment instruction is detected while the first smart device is projecting its screen to the second smart device, a sound state synchronization instruction instructing the second smart device to adjust its sound state to the target state may be sent to the intermediate IC chip 412 in the second smart device; after the intermediate IC chip 412 forwards the instruction, the main IC chip 411 in the second smart device can adjust the sound state to the target state accordingly.
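A possible wire format for the synchronization instruction relayed from the intermediate IC chip to the main IC chip over the UART link can be sketched as follows. The opcode, state encoding, and checksum are illustrative assumptions only; the patent does not define any frame format.

```python
# Hypothetical framing for the sound state synchronization instruction
# forwarded by the intermediate IC chip (412) to the main IC chip (411)
# over UART. Opcode, state values, and the checksum scheme are
# assumptions made for illustration.

SYNC_OPCODE = 0xA1
STATE_MUTE, STATE_UNMUTE = 0x01, 0x00

def frame_sync_instruction(target_state: int) -> bytes:
    payload = bytes([SYNC_OPCODE, target_state])
    checksum = (256 - sum(payload)) & 0xFF  # two's-complement checksum
    return payload + bytes([checksum])

def parse_sync_instruction(frame: bytes) -> int:
    opcode, state, checksum = frame
    assert (opcode + state + checksum) & 0xFF == 0, "bad checksum"
    assert opcode == SYNC_OPCODE, "not a sync instruction"
    return state  # target state the main IC chip should apply
```

A frame built by `frame_sync_instruction(STATE_MUTE)` round-trips through `parse_sync_instruction`, which is the property the forwarding step relies on.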
Of course, the sound state synchronization instruction may also be sent to the second smart device in other ways, which is not limited in this embodiment of the present application.
In summary, in the embodiment of the present application, when a sound state adjustment instruction is detected during screen projection between the first smart device and the second smart device, the sound state of the first smart device is adjusted to the target state according to the instruction, and a sound state synchronization instruction is sent to the second smart device, so that the second smart device adjusts its sound state to the same target state. This ensures that the sound states of the first smart device and the second smart device remain synchronized throughout the screen projection process. Because the sound states of both smart devices can be controlled equivalently from either device, the flexibility of operating the sound states during screen projection is improved, and the user's viewing experience is improved.
For example, while a PC is projecting its screen to a tablet PC, a user standing at the tablet PC can set its sound state to the mute state and then return to the seat to continue operating the PC; because the sound states are synchronized, the PC is muted as well, without any separate operation on the PC side.
For another example, while the PC is projecting its screen to the tablet PC, when the sound state on the PC side is adjusted to the mute state, the tablet PC is also adjusted to the mute state accordingly, so that a user watching the tablet PC knows that the sudden absence of sound is caused by mute being turned on, rather than by a fault.
For another example, while the PC is projecting its screen to the tablet PC, when the sound state of the tablet PC is set to the mute state, the PC is also adjusted to the mute state accordingly, so that the PC does not suddenly make a sound after the screen projection ends.
Fig. 5 is a flowchart illustrating a method for synchronizing sound states according to another embodiment of the present application. On the basis of the above embodiments, the execution subject of this embodiment may be the second smart device, or a sound state synchronization apparatus in the second smart device, which may be implemented by software and/or hardware. As shown in fig. 5, a method for synchronizing sound states provided by an embodiment of the present application may include:
step S501, in the screen projection process of the first intelligent device and the second intelligent device, receiving a sound state synchronization instruction sent by the first intelligent device.
In this step, during screen projection between the first smart device and the second smart device, the second smart device may receive the sound state synchronization instruction that the first smart device sends after it detects a sound state adjustment instruction and adjusts its own sound state to the target state accordingly. At this point, the current sound state of the first smart device is the target state, which may include, but is not limited to, a mute state or a non-mute state. The sound state synchronization instruction instructs the second smart device to adjust its sound state to the target state, so that the sound states of the first smart device and the second smart device can be kept synchronized.
The following embodiments of the present application describe an implementation manner of receiving the sound state synchronization instruction sent by the first smart device in step S501.
In one possible implementation, if the second smart device is projecting its screen to the first smart device, a human interface device (HID) keyboard instruction sent by an intermediate integrated circuit (IC) chip in the first smart device is received, where the HID keyboard instruction instructs the second smart device to adjust its sound state to the target state.
As shown in fig. 3, in this implementation, if the first smart device detects a sound state adjustment instruction while the second smart device is projecting its screen to the first smart device, the second smart device may receive an HID keyboard instruction, sent by the main IC chip 311 of the first smart device through the intermediate IC chip 312, that instructs the second smart device to adjust its sound state to the target state; the second smart device can then recognize the HID keyboard instruction and adjust its sound state to the target state accordingly.
In another possible implementation, if the first smart device is projecting its screen to the second smart device, the sound state synchronization instruction sent by the first smart device is received through an intermediate integrated circuit (IC) chip in the second smart device, where the intermediate IC chip has a sound card function.
As shown in fig. 4, in this implementation, if the first smart device detects a sound state adjustment instruction while projecting its screen to the second smart device, the intermediate integrated circuit (IC) chip 412 in the second smart device may receive a sound state synchronization instruction, sent by the first smart device, that instructs the second smart device to adjust its sound state to the target state; after the intermediate IC chip 412 forwards this instruction, the main IC chip 411 in the second smart device can adjust the sound state to the target state accordingly.
Of course, the sound state synchronization instruction sent by the first smart device may also be received in other ways, which is not limited in this embodiment of the present application.
And step S502, adjusting the sound state of the second intelligent device to a target state according to the sound state synchronization instruction.
In this step, the sound state of the second smart device is adjusted to the target state according to the sound state synchronization instruction received in step S501, so that the sound states of the second smart device and the first smart device can be kept synchronized.
Optionally, the sound state prompt information in the second smart device may also be adjusted according to the sound state synchronization instruction, so as to remind the user that the sound state of the second smart device has changed.
For example, if the target state is the non-mute state, the sound state prompt information in the second smart device obtained after adjustment according to the sound state synchronization instruction indicates that the sound state of the second smart device is the non-mute state; for example, a "mute" icon may be hidden, or "exited mute state" may be displayed.
For another example, if the target state is the mute state, the sound state prompt information in the second smart device obtained after adjustment according to the sound state synchronization instruction indicates that the sound state of the second smart device is the mute state; for example, a "mute" icon may be displayed, or "mute enabled" may be displayed.
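The prompt adjustment described in the two examples above can be sketched as a simple mapping from the target state to the UI hint; the function name, return structure, and message strings are illustrative assumptions, not part of the embodiment.

```python
# Sketch of the sound state prompt adjustment: map the target state
# carried by the synchronization instruction to the icon visibility and
# message shown on the second smart device. All names and strings here
# are illustrative assumptions.

def prompt_for_state(target_state: str) -> dict:
    if target_state == "mute":
        # Mute target: show the "mute" icon and an explanatory message.
        return {"icon_visible": True, "message": "mute enabled"}
    elif target_state == "unmute":
        # Non-mute target: hide the icon and explain the change.
        return {"icon_visible": False, "message": "exited mute state"}
    raise ValueError(f"unknown target state: {target_state}")

assert prompt_for_state("mute")["icon_visible"] is True
```

This keeps the prompt logic in one place so that both the local adjustment path (step S502) and any later state changes produce consistent hints.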
In summary, in the embodiment of the present application, during screen projection between the first smart device and the second smart device, the second smart device receives the sound state synchronization instruction sent by the first smart device and adjusts its sound state to the target state according to that instruction. This ensures that the sound states of the first smart device and the second smart device remain synchronized throughout the screen projection process. Because the sound states of both smart devices can be controlled equivalently from either device, the flexibility of operating the sound states during screen projection is improved, and the user's viewing experience is improved.
Fig. 6 is a flowchart illustrating a method for synchronizing sound states according to another embodiment of the present application. On the basis of the above embodiments, this embodiment describes the method for synchronizing sound states from both the first smart device side and the second smart device side. As shown in fig. 6, a method for synchronizing sound states provided by an embodiment of the present application may include:
step S601, in the screen projection process of the first intelligent device and the second intelligent device, the first intelligent device detects a sound state adjusting instruction.
The sound state adjusting instruction is used for instructing to adjust the sound state of the first intelligent device to be a target state, and the target state can be a mute state or a non-mute state.
Step S602, the first smart device adjusts the sound state of the first smart device to a target state according to the sound state adjustment instruction, and adjusts the sound state prompt information in the first smart device according to the sound state adjustment instruction.
Step S603, the first smart device sends a sound state synchronization instruction to the second smart device.
And the sound state synchronization instruction is used for instructing the second intelligent equipment to adjust the sound state to the target state.
Step S604, the second smart device receives the sound state synchronization instruction sent by the first smart device.
And step S605, the second intelligent device adjusts the sound state of the second intelligent device to a target state according to the sound state synchronization instruction, and adjusts the sound state prompt information in the second intelligent device according to the sound state synchronization instruction.
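Steps S601 to S605 above can be sketched end-to-end, with in-process message passing standing in for the HID/USB and UART links described earlier; all class and method names are illustrative assumptions, not from the embodiment.

```python
# End-to-end sketch of steps S601-S605. A direct method call stands in
# for the real transport (HID keyboard instruction or sound card IC
# forwarding); names are illustrative only.

class SmartDevice:
    def __init__(self, name: str):
        self.name = name
        self.sound_state = "unmute"
        self.prompt = ""
        self.peer = None  # the other device in the screen projection

    # S601/S602: detect the sound state adjustment instruction, apply
    # it locally, and update the local prompt information.
    def on_adjust_instruction(self, target_state: str) -> None:
        self.sound_state = target_state
        self.prompt = f"{target_state} enabled"
        self.send_sync(target_state)  # S603: notify the peer

    def send_sync(self, target_state: str) -> None:
        self.peer.on_sync_instruction(target_state)

    # S604/S605: receive the synchronization instruction, then mirror
    # the target state and adjust the local prompt information.
    def on_sync_instruction(self, target_state: str) -> None:
        self.sound_state = target_state
        self.prompt = f"{target_state} enabled"

pc, tablet = SmartDevice("PC"), SmartDevice("tablet")
pc.peer, tablet.peer = tablet, pc

pc.on_adjust_instruction("mute")  # user mutes the PC during projection
assert pc.sound_state == tablet.sound_state == "mute"
```

Because the handler is symmetric, muting from the tablet side synchronizes the PC in exactly the same way, which is the "either device can control both" property the summary below emphasizes.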
The implementation manner of each step in the embodiment of the present application may refer to the relevant content in the above embodiments of the present application, and is not described herein again.
To sum up, in the embodiment of the present application, when a sound state adjustment instruction is detected during screen projection between the first smart device and the second smart device, the first smart device adjusts its sound state to the target state and adjusts its sound state prompt information according to the instruction, and sends a sound state synchronization instruction to the second smart device; the second smart device then adjusts its sound state to the target state and adjusts its own sound state prompt information according to the synchronization instruction. This ensures that the sound states of the two smart devices remain synchronized during screen projection, and promptly reminds the user that the sound state of each device has changed.
Fig. 7 is a schematic structural diagram of a device for synchronizing sound states according to an embodiment of the present application. Optionally, the synchronization apparatus for sound states provided in this embodiment of the present application may be applied to the first smart device. As shown in fig. 7, the apparatus for synchronizing sound states provided by the embodiment of the present application may include: a detection module 701, a first adjustment module 702 and a sending module 703.
The detection module 701 is configured to detect a sound state adjustment instruction during screen projection between the first smart device and the second smart device; the sound state adjustment instruction is used for instructing that the sound state of the first smart device be adjusted to a target state, and the target state is a mute state or a non-mute state;
a first adjusting module 702, configured to adjust the sound state of the first smart device to the target state according to the sound state adjustment instruction;
a sending module 703, configured to send a sound state synchronization instruction to the second smart device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
In a possible implementation manner, the sending module 703 is configured to:
sending a Human Interface Device (HID) keyboard instruction to the second intelligent device through an intermediate Integrated Circuit (IC) chip in the first intelligent device; wherein the HID keyboard instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
In a possible implementation manner, the sending module 703 is configured to:
sending the sound state synchronization instruction to an intermediate Integrated Circuit (IC) chip in the second intelligent device; wherein the intermediate IC chip has a sound card function.
In one possible implementation, the apparatus further includes:
and the second adjusting module is used for adjusting the sound state prompt message in the first intelligent device according to the sound state adjusting instruction.
The sound state synchronization apparatus provided in the embodiment of the present application may be used to implement the technical solution on the first smart device side in the embodiment of the sound state synchronization method of the present application, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 8 is a schematic structural diagram of a device for synchronizing sound states according to another embodiment of the present application. Optionally, the synchronization apparatus for sound states provided in this embodiment of the present application may be applied to a second smart device. As shown in fig. 8, the apparatus for synchronizing sound states provided by the embodiment of the present application may include: a receiving module 801 and a first adjusting module 802.
The receiving module 801 is configured to receive a sound state synchronization instruction sent by a first intelligent device in a screen projection process of the first intelligent device and a second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state;
a first adjusting module 802, configured to adjust the sound state of the second smart device to the target state according to the sound state synchronization instruction.
In a possible implementation manner, the receiving module 801 is specifically configured to:
receiving a human-computer interface device HID keyboard instruction sent by an intermediate integrated circuit IC chip in the first intelligent device; wherein the HID keyboard instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
In a possible implementation manner, the receiving module 801 is specifically configured to:
receiving the sound state synchronization instruction sent by the first intelligent device through an intermediate Integrated Circuit (IC) chip in the second intelligent device; wherein the intermediate IC chip has a sound card function.
In one possible implementation, the apparatus further includes:
and the second adjusting module is used for adjusting the sound state prompt information in the second intelligent device according to the sound state synchronization instruction.
The sound state synchronization apparatus provided in the embodiment of the present application may be used to implement the technical solution on the second smart device side in the above sound state synchronization method embodiment of the present application, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 9 is a schematic structural diagram of a first smart device according to an embodiment of the present application. As shown in fig. 9, a first smart device provided in an embodiment of the present application may include: a first processing chip 901 and a second processing chip 902.
Wherein the first processing chip 901 is configured to:
detecting a sound state adjusting instruction in the screen projection process of the first intelligent device and the second intelligent device; the sound state adjusting instruction is used for indicating that the sound state of the first intelligent device is adjusted to be a target state, and the target state is a mute state or a non-mute state;
adjusting the sound state of the first intelligent device to the target state according to the sound state adjusting instruction;
the second processing chip 902 is configured to: sending a sound state synchronization instruction to the second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
It should be noted that the first processing chip 901 in the embodiment of the present application may correspond to the main IC chip 311 in fig. 3, and the second processing chip 902 may correspond to the intermediate IC chip 312 in fig. 3.
In a possible implementation manner, the first processing chip 901 is further configured to:
and adjusting the sound state prompt message in the first intelligent device according to the sound state adjustment instruction.
The first smart device provided in the embodiment of the present application may be configured to execute the related technical solution in the first smart device side in the embodiment of the synchronization method for sound states described above in the present application, and the implementation principle and the technical effect are similar, which are not described herein again.
The embodiment of the present application further provides a second smart device, where the second smart device may include: a first processing chip and a second processing chip. It should be noted that the structure of the second smart device may refer to the structure of the first smart device shown in fig. 9.
Wherein the second processing chip is configured to: receiving a sound state synchronization instruction sent by a first intelligent device in the screen projection process of the first intelligent device and a second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state;
the first processing chip is configured to: and adjusting the sound state of the second intelligent device to the target state according to the sound state synchronization instruction.
It should be noted that the first processing chip in the embodiment of the present application may correspond to the main IC chip 411 in fig. 4, and the second processing chip may correspond to the intermediate IC chip 412 in fig. 4.
In one possible implementation, the first processing chip is further configured to:
and adjusting the sound state prompt information in the second intelligent device according to the sound state synchronization instruction.
The second smart device provided in the embodiment of the present application may be configured to execute the related technical solution in the embodiment of the method for synchronizing a sound state in the present application, and the implementation principle and the technical effect of the second smart device are similar, which are not described herein again.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. By way of example, the electronic device provided by the embodiment of the present application may include, but is not limited to: a first smart device or a second smart device.
As shown in fig. 10, an electronic device provided in an embodiment of the present application may include: a memory 1001, a processor 1002 and a computer program stored on the memory 1001 and executable on the processor 1002. Illustratively, the electronic device may further include a communication interface 1003 for communicating with other devices, wherein the memory 1001, the processor 1002 and the communication interface 1003 may be connected by a system bus.
When the processor 1002 executes the computer program, it implements the technical solution on the first smart device side, or the technical solution on the second smart device side, in the above embodiments of the method for synchronizing sound states of the present application; the implementation principle and technical effect are similar and are not described herein again.
It should be understood that, when the electronic device in the embodiment of the present application includes a first smart device, the processor 1002, when executing the computer program, implements the technical solution of the embodiment of the synchronization method for sound states in the present application on the first smart device side; or, when the electronic device in the embodiment of the present application includes a second smart device, the processor 1002 executes the computer program to implement the technical solution of the embodiment of the synchronization method for the sound state in the present application on the side of the second smart device.
Optionally, the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In an exemplary embodiment, the above electronic device may also be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above methods.
Optionally, the memory may include a high-speed RAM memory, and may further include a non-volatile memory (NVM), such as at least one disk memory.
The embodiment of the present application further provides a computer-readable storage medium in which computer-executable instructions are stored; when executed by a processor, the instructions implement the technical solution on the first smart device side, or the technical solution on the second smart device side, in the above embodiments of the method for synchronizing sound states of the present application. The implementation principle and technical effect are similar and are not described herein again.
It should be understood by those of ordinary skill in the art that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of the processes should be determined by their functions and inherent logic, and should not limit the implementation process of the embodiments of the present application.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (16)

1. A method for synchronizing sound states, the method being applied to a first intelligent device, the method comprising:
detecting a sound state adjusting instruction in the screen projection process of the first intelligent device and the second intelligent device; the sound state adjusting instruction is used for indicating that the sound state of the first intelligent device is adjusted to be a target state, and the target state is a mute state or a non-mute state;
adjusting the sound state of the first intelligent device to the target state according to the sound state adjusting instruction;
sending a sound state synchronization instruction to the second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
2. The method of claim 1, wherein sending the voice state synchronization instruction to the second smart device comprises:
sending a Human Interface Device (HID) keyboard instruction to the second intelligent device through an intermediate Integrated Circuit (IC) chip in the first intelligent device; wherein the HID keyboard instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
3. The method of claim 1, wherein sending the voice state synchronization instruction to the second smart device comprises:
sending the sound state synchronization instruction to an intermediate Integrated Circuit (IC) chip in the second intelligent device; wherein the intermediate IC chip has a sound card function.
4. The method according to any one of claims 1-3, further comprising:
and adjusting the sound state prompt message in the first intelligent device according to the sound state adjustment instruction.
5. A method for synchronizing sound states, the method being applied to a second smart device, the method comprising:
receiving a sound state synchronization instruction sent by a first intelligent device in the screen projection process of the first intelligent device and a second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state;
and adjusting the sound state of the second intelligent device to the target state according to the sound state synchronization instruction.
6. The method of claim 5, wherein the receiving the sound state synchronization instruction sent by the first smart device comprises:
receiving a human-computer interface device HID keyboard instruction sent by an intermediate integrated circuit IC chip in the first intelligent device; wherein the HID keyboard instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
7. The method of claim 5, wherein the receiving the sound state synchronization instruction sent by the first smart device comprises:
receiving the sound state synchronization instruction sent by the first intelligent device through an intermediate Integrated Circuit (IC) chip in the second intelligent device; wherein the intermediate IC chip has a sound card function.
8. The method according to any one of claims 5-7, further comprising:
and adjusting the sound state prompt information in the second intelligent device according to the sound state synchronization instruction.
9. An apparatus for synchronizing sound states, the apparatus being applied to a first smart device, the apparatus comprising:
the detection module is used for detecting a sound state adjustment instruction in the screen projection process of the first intelligent device and the second intelligent device; the sound state adjusting instruction is used for indicating that the sound state of the first intelligent device is adjusted to be a target state, and the target state is a mute state or a non-mute state;
the first adjusting module is used for adjusting the sound state of the first intelligent device to the target state according to the sound state adjusting instruction;
the sending module is used for sending a sound state synchronization instruction to the second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state.
10. An apparatus for synchronizing sound states, the apparatus being applied to a second smart device, the apparatus comprising:
the receiving module is used for receiving a sound state synchronization instruction sent by a first intelligent device in the screen projection process of the first intelligent device and a second intelligent device; the sound state synchronization instruction is used for instructing the second intelligent device to adjust the sound state to the target state;
and the first adjusting module is used for adjusting the sound state of the second intelligent device to the target state according to the sound state synchronization instruction.
11. A first intelligent device, comprising: a first processing chip and a second processing chip;
wherein the first processing chip is configured to:
detect a sound state adjustment instruction during screen projection between the first intelligent device and a second intelligent device; wherein the sound state adjustment instruction is used to instruct that the sound state of the first intelligent device be adjusted to a target state, the target state being a mute state or a non-mute state; and
adjust the sound state of the first intelligent device to the target state according to the sound state adjustment instruction;
and the second processing chip is configured to: send a sound state synchronization instruction to the second intelligent device; wherein the sound state synchronization instruction is used to instruct the second intelligent device to adjust its sound state to the target state.
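Claim 11's division of labour — one chip detects the instruction and applies the mute, the other only forwards the synchronization instruction — can be mimicked with a queue between two objects. All names here (FirstProcessingChip, handle_adjust_instruction, pump, the tuple message format) are hypothetical; real chips would communicate over an inter-chip bus rather than an in-process queue.

```python
import queue


class FirstProcessingChip:
    """Detects the adjustment instruction and applies it locally (claim 11)."""

    def __init__(self, outbox):
        self.sound_state = "non-mute"
        self.outbox = outbox  # channel toward the second processing chip

    def handle_adjust_instruction(self, target_state):
        self.sound_state = target_state  # adjust the local sound state
        # Hand the synchronization request off to the second chip for sending.
        self.outbox.put(("sound_state_sync", target_state))


class SecondProcessingChip:
    """Drains the inter-chip channel and sends sync instructions outward."""

    def __init__(self, inbox, transport):
        self.inbox = inbox
        self.transport = transport  # callable that reaches the second device

    def pump(self):
        while not self.inbox.empty():
            kind, state = self.inbox.get()
            if kind == "sound_state_sync":
                self.transport(state)
```

The first chip never touches the network and the second chip never touches the sound hardware, mirroring the per-chip responsibilities the claim recites.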
12. The first intelligent device of claim 11, wherein the first processing chip is further configured to:
adjust the sound state prompt information in the first intelligent device according to the sound state adjustment instruction.
13. A second intelligent device, comprising: a first processing chip and a second processing chip;
wherein the second processing chip is configured to: receive a sound state synchronization instruction sent by a first intelligent device during screen projection between the first intelligent device and the second intelligent device; wherein the sound state synchronization instruction is used to instruct the second intelligent device to adjust its sound state to a target state;
and the first processing chip is configured to: adjust the sound state of the second intelligent device to the target state according to the sound state synchronization instruction.
14. The second intelligent device of claim 13, wherein the first processing chip is further configured to:
adjust the sound state prompt information in the second intelligent device according to the sound state synchronization instruction.
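Claims 12 and 14 only require that each device's on-screen prompt information track its sound state. A minimal sketch of that mapping, with prompt strings invented purely for illustration:

```python
def sound_state_prompt(sound_state):
    """Return the prompt information to display for a given sound state."""
    prompts = {
        "mute": "Sound off",     # shown after a mute instruction is applied
        "non-mute": "Sound on",  # shown after a non-mute instruction is applied
    }
    return prompts[sound_state]
```

Because both devices derive the prompt from the same synchronized state, their indicators stay consistent without any extra messaging.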
15. An electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the method according to any one of claims 1-4 or the method according to any one of claims 5-8.
16. A computer-readable storage medium having computer-executable instructions stored therein, wherein the computer-executable instructions, when executed by a processor, implement the method according to any one of claims 1-4 or the method according to any one of claims 5-8.
CN202010353843.5A 2020-04-29 2020-04-29 Method, device and equipment for synchronizing sound state and storage medium Pending CN111475133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010353843.5A CN111475133A (en) 2020-04-29 2020-04-29 Method, device and equipment for synchronizing sound state and storage medium

Publications (1)

Publication Number Publication Date
CN111475133A true CN111475133A (en) 2020-07-31

Family

ID=71761998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010353843.5A Pending CN111475133A (en) 2020-04-29 2020-04-29 Method, device and equipment for synchronizing sound state and storage medium

Country Status (1)

Country Link
CN (1) CN111475133A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160057264A1 (en) * 2014-08-25 2016-02-25 Echostar Technologies L.L.C. Wireless mute device and method
CN107995563A (en) * 2012-11-07 2018-05-04 联想(北京)有限公司 A kind of control method and electronic equipment
CN108282677A (en) * 2018-01-24 2018-07-13 上海哇嗨网络科技有限公司 Realize that content throws method, throwing screen device and the system of screen by client
CN109032555A (en) * 2018-07-06 2018-12-18 广州视源电子科技股份有限公司 Throw screen sound intermediate frequency data processing method, device, storage medium and electronic equipment
CN109118847A (en) * 2018-07-20 2019-01-01 深圳点猫科技有限公司 A kind of the classroom interaction throwing screen method and electronic equipment of Linux system level
CN109275130A (en) * 2018-09-13 2019-01-25 锐捷网络股份有限公司 A kind of throwing screen method, apparatus and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113905217A (en) * 2021-12-08 2022-01-07 荣耀终端有限公司 Screen projection method, equipment and storage medium
CN113905217B (en) * 2021-12-08 2022-04-01 荣耀终端有限公司 Screen projection method, equipment and storage medium
CN114513645A (en) * 2021-12-08 2022-05-17 荣耀终端有限公司 Screen projection method, equipment and storage medium

Similar Documents

Publication Publication Date Title
US11269575B2 (en) Devices, methods, and graphical user interfaces for wireless pairing with peripheral devices and displaying status information concerning the peripheral devices
US10394443B2 (en) Method for viewing message and user terminal
US20220276779A1 (en) Devices, methods, and graphical user interfaces for providing control of a touch-based user interface absent physical touch capabilities
US10642574B2 (en) Device, method, and graphical user interface for outputting captions
CN104360900B (en) Method for operating multiple operating systems, corresponding system and mobile device
US20120032891A1 (en) Device, Method, and Graphical User Interface with Enhanced Touch Targeting
KR20210042863A (en) Time synchronization method and apparatus for vehicle, device and storage medium
US10353550B2 (en) Device, method, and graphical user interface for media playback in an accessibility mode
US9804771B2 (en) Device, method, and computer readable medium for establishing an impromptu network
US10628025B2 (en) Device, method, and graphical user interface for generating haptic feedback for user interface elements
WO2015138409A1 (en) Selectively redirecting notifications to a wearable computing device
US20170168705A1 (en) Method and electronic device for adjusting video progress
US20200379946A1 (en) Device, method, and graphical user interface for migrating data to a first device during a new device set-up workflow
JP2020500352A (en) Information display method, terminal, and storage medium
US20180324703A1 (en) Systems and methods to place digital assistant in sleep mode for period of time
US11567658B2 (en) Devices and methods for processing inputs using gesture recognizers
EP2835724A1 (en) Control method and input device of touchscreen terminal
US20170357568A1 (en) Device, Method, and Graphical User Interface for Debugging Accessibility Information of an Application
WO2022156603A1 (en) Message processing method and apparatus, and electronic device
CN111475133A (en) Method, device and equipment for synchronizing sound state and storage medium
JP2021527901A (en) Volume display method, device, terminal device and storage medium
AU2015383793A1 (en) Fingerprint event processing method, apparatus, and terminal
CN112637409B (en) Content output method and device and electronic equipment
CN111124240B (en) Control method and wearable device
CN102214056A (en) User interface, system and method for setting time by towing pointer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination