WO2022179273A1 - Distributed audio playback method and electronic device - Google Patents

Distributed audio playback method and electronic device

Info

Publication number
WO2022179273A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
window
interface
application window
application
Application number
PCT/CN2021/140181
Other languages
English (en)
French (fr)
Inventor
董刚刚
李斌飞
李洪江
宋孟
庄雄
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to US18/547,985 (published as US20240126505A1)
Priority to EP21927697.9A (published as EP4280042A1)
Publication of WO2022179273A1

Classifications

    • G06F3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F3/1423: Digital output to display device; controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446: Display composed of modules, e.g. video walls
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
    • G06F3/04845: GUI techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/0486: Drag-and-drop
    • G06F3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G09G5/38: Display of a graphic pattern with means for controlling the display position
    • G09G2340/0464: Changes in size, position or resolution of an image; positioning
    • H04N21/4122: Peripherals receiving signals from specially adapted client devices, e.g. an additional display device such as a video projector
    • H04N21/43076: Synchronising the rendering of the same content streams on multiple devices
    • H04N21/439: Processing of audio elementary streams
    • H04N21/4396: Processing of audio elementary streams by muting the audio signal
    • H04N21/4438: Window management, e.g. event handling following interaction with the user interface

Definitions

  • the embodiments of the present application relate to the field of electronic technologies, and in particular, to a distributed audio playback method and electronic device.
  • With the development of terminal technology and display technology, multi-device distributed display brings more and more convenience to people's lives.
  • Multi-device distributed display refers to the realization of interface display through multiple electronic devices.
  • the extended display of the application interface can be realized through multiple electronic devices.
  • FIG. 1 shows a scenario in which device B and device A display an interface distributed across devices in a splicing manner.
  • the interface a1 is displayed on the display screen of the device A
  • the interface a2 is displayed on the display screen of the device B
  • the interface a1 and the interface a2 are the two parts that make up the interface a from a geometrical point of view.
  • the interface a displayed on the device A is extended and displayed on the device B, wherein, after the extended display, the interface a is transferred and displayed on the device B by the device A.
  • the display process of the interface on device A and device B usually changes from the scene shown in (a) of Figure 1 to the scene shown in (b) of Figure 1.
  • device A is usually referred to as a master device
  • device B is usually referred to as an extension device or a secondary device.
  • Embodiments of the present application provide a distributed audio playback method and electronic device, which can be applied to a first device and a second device.
  • the audio corresponding to a window on the first device can be adaptively distributed and played between the first device and the second device according to the position of the window, which improves the user's audio experience when using multiple devices.
  • the present application provides a distributed audio playback method.
  • a first device displays a first application window, and the first device plays a first audio corresponding to the first application window.
  • the first device displays the first part of the first application window
  • the second device receives the video data sent by the first device, displays the second part of the first application window according to the video data, and the first device plays the first audio
  • the second device plays the first audio.
  • the second device receives the video data sent by the first device, displays the first application window according to the video data, and at this time the second device plays the first audio.
  • the first device monitors the position of the first application window in the display area corresponding to the first device, and determines whether to send video data to the second device according to the position of the first application window.
  • video data is not sent to the second device.
  • the first part of the first application window is in the display area corresponding to the first device, and the second part of the first application window is in the display area corresponding to the second device, the video data is sent to the second device.
  • the first application window is in the display area corresponding to the second device, the video data is sent to the second device.
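  • As an illustration only (not part of the patent text), the window-position rule above can be sketched in Kotlin. The sketch assumes the second device's display area extends to the right of the first device's, as in FIG. 1; Rect, locate, and sendVideoDataToSecondDevice are hypothetical names:

```kotlin
// Window geometry on a desktop extended to the right, as in FIG. 1.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)

enum class WindowLocation { FULLY_LOCAL, SPLIT, FULLY_REMOTE }

// Classify the window position against the first device's display area.
fun locate(window: Rect, localDisplay: Rect): WindowLocation = when {
    window.right <= localDisplay.right -> WindowLocation.FULLY_LOCAL  // no part crosses the boundary
    window.left >= localDisplay.right -> WindowLocation.FULLY_REMOTE  // entirely past the boundary
    else -> WindowLocation.SPLIT                                      // straddles the boundary
}

// The rule above: send video data only when some part of the window
// falls in the display area corresponding to the second device.
fun onWindowMoved(window: Rect, localDisplay: Rect) {
    when (locate(window, localDisplay)) {
        WindowLocation.FULLY_LOCAL -> Unit  // keep everything local
        WindowLocation.SPLIT, WindowLocation.FULLY_REMOTE -> sendVideoDataToSecondDevice()
    }
}

// Placeholder for the transport over the established wired/wireless connection.
fun sendVideoDataToSecondDevice() { /* encode and transmit frames */ }
```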
  • the first device when the audio state of the first application window is the playing state, the first device sends the first audio to the second device.
  • the audio state of the first application window is at least one of pause playback, mute, and exit playback state, the first audio is not sent to the second device.
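  • The audio-state rule above admits a similarly small sketch (AudioState and forwardAudioToSecondDevice are illustrative names, not from the patent):

```kotlin
// Audio states named above; STOPPED stands in for "exit playback".
enum class AudioState { PLAYING, PAUSED, MUTED, STOPPED }

// The first audio is forwarded to the second device only while it is playing.
fun shouldForwardAudio(state: AudioState): Boolean = state == AudioState.PLAYING

fun onAudioFrame(state: AudioState, frame: ByteArray) {
    if (shouldForwardAudio(state)) forwardAudioToSecondDevice(frame)
    // paused / muted / exited: the first audio is not sent to the second device
}

fun forwardAudioToSecondDevice(frame: ByteArray) { /* send over the device link */ }
```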
  • the first device and the second device play the first audio at the same volume.
  • the first device plays the first audio at a first volume
  • the second device plays the first audio at a second volume
  • the first volume and the second volume are determined by the first device according to the position of the first application window.
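  • The patent does not give a volume formula; one plausible reading, sketched below under that stated assumption, splits a base volume linearly by the fraction of the window's width visible on each device (reusing the hypothetical Rect type from the earlier sketch):

```kotlin
fun splitVolumes(window: Rect, localDisplay: Rect, baseVolume: Float): Pair<Float, Float> {
    val windowWidth = (window.right - window.left).toFloat()
    // Width of the window still inside the first device's display area.
    val localWidth = (minOf(window.right, localDisplay.right) - window.left)
        .coerceIn(0, window.right - window.left)
        .toFloat()
    val localFraction = localWidth / windowWidth
    // First element: first volume (first device); second: second volume (second device).
    return baseVolume * localFraction to baseVolume * (1f - localFraction)
}
```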
  • the first device acquires the audio stream associated with the first application window from the audio session list according to the identifier of the first application window, and generates the first audio according to the audio stream associated with the first application window.
  • the identifier of the first application window includes at least one of a window identifier, a browser identifier and a path.
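  • A hypothetical model of the audio session list described above, in which each active stream is tagged with the identifier of the window (window id, browser id, and/or path) that owns it, so the first audio can be generated from the streams associated with the first application window (all types are illustrative):

```kotlin
data class WindowIdentifier(
    val windowId: String? = null,
    val browserId: String? = null,
    val path: String? = null
)

data class AudioSession(val owner: WindowIdentifier, val pcmStream: ByteArray)

class AudioSessionList {
    private val sessions = mutableListOf<AudioSession>()
    fun add(session: AudioSession) { sessions += session }
    // All streams associated with the given window; the first audio is then
    // generated (e.g. mixed) from these streams.
    fun streamsFor(id: WindowIdentifier): List<ByteArray> =
        sessions.filter { it.owner == id }.map { it.pcmStream }
}
```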
  • the first application corresponding to the first application window runs on the first device.
  • the first user operation and the second user operation are operations of dragging the first application window.
  • the first device is a laptop computer and the second device is a tablet computer.
  • the present application provides an electronic device, comprising: one or more processors and one or more memories; wherein the one or more memories are coupled with the one or more processors and are used to store computer program code, and the computer program code includes computer instructions.
  • when the computer instructions are executed by the one or more processors, the terminal device is caused to execute the distributed audio playback method.
  • the present application provides a computer storage medium, where the computer storage medium stores a computer program, the computer program includes program instructions, and the program instructions, when executed by a processor, execute a distributed audio playback method.
  • the distributed audio playback method provided by the embodiment of the present application can automatically manage the audio corresponding to a window according to the position of the window when the user uses multiple electronic devices, without requiring the user to switch manually, which improves the audio experience in multi-device scenarios.
  • FIG. 1 is an example diagram of two extended display scenarios provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a software structure of an electronic device provided by an embodiment of the present application.
  • FIG. 4 is an example diagram of a distributed audio playback framework provided by an embodiment of the present application.
  • FIG. 5A is an example diagram of a scene in which a first device plays a video according to an embodiment of the present application;
  • FIG. 5B is an example diagram of a scenario in which a first device extends a video playback interface to a second device according to an embodiment of the present application;
  • FIG. 5C is an example diagram of another scenario in which a first device extends a video playback interface to a second device according to an embodiment of the present application;
  • FIG. 6 is a flowchart of a distributed audio playback method provided by an embodiment of the present application.
  • FIG. 7 is a first logical block diagram of an internal implementation of a first device in a distributed audio playback process provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a method for determining a first part and a second part of an application interface provided by an embodiment of the present application
  • FIG. 9 is a second logical block diagram of an internal implementation of the first device in a distributed audio playback process provided by an embodiment of the present application;
  • FIG. 10 is a first logical block diagram of an internal implementation of an audio playback module provided by an embodiment of the present application;
  • FIG. 11 is a second logical block diagram of an internal implementation of an audio playback module provided by an embodiment of the present application.
  • "first" and "second" are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • plural means two or more.
  • the embodiment of the present application provides a distributed audio playback method, and the method is applied in the process of multi-device distributed display.
  • the method can be applied to the distributed extended display scenario of device A (ie, the first device) and device B (ie, the second device) shown in (a) or (b) of FIG. 1 .
  • a communication connection is established between the first device and the second device in this application.
  • the first device and the second device may implement information transmission between the first device and the second device by means of the established communication connection.
  • the information transmitted between the first device and the second device includes, but is not limited to, application interface configuration parameters, video data, audio data, and control instructions.
  • the first device and the second device can establish a wireless communication connection by "touching", "swiping" (such as scanning a QR code or barcode), "proximity automatic discovery" (such as by means of Bluetooth (BT) or wireless fidelity (WiFi)), and other methods.
  • the first device and the second device may follow a wireless transmission protocol, and transmit information through a wireless connection transceiver.
  • the wireless transmission protocol may include, but is not limited to, a Bluetooth transmission protocol or a WiFi transmission protocol.
  • the WiFi transport protocol may be the WiFi P2P transport protocol.
  • the wireless connection transceiver includes but is not limited to Bluetooth, WiFi and other transceivers. Information transmission is implemented between the first device and the second device through the established wireless communication connection.
  • a wired communication connection may be established between the first device and the second device.
  • the first device and the second device establish a wired communication connection through a video graphics array (VGA) interface, a digital visual interface (DVI), a high-definition multimedia interface (HDMI), a data transmission line, etc.
  • Information transmission is implemented between the first device and the second device through the established wired communication connection.
  • the present application does not limit the specific connection manner between the first device and the second device.
  • Electronic devices include one or more display screens.
  • the electronic device may be a smartphone, a netbook, a tablet computer, a smart camera, a palmtop computer, a personal digital assistant (PDA), a portable multimedia player (PMP), an augmented reality (AR)/virtual reality (VR) device, a laptop, a personal computer (PC), an ultra-mobile personal computer (UMPC), etc.
  • the electronic device may also be an electronic device of other types or structures including a display screen, which is not limited in this application.
  • FIG. 2 shows a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application by taking a smartphone as an example.
  • the electronic device may include a processor 210, a memory (including an external memory interface 220 and an internal memory 221), a universal serial bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, a headphone jack 270D, a sensor module 280, a button 290, a motor 291, an indicator 292, a camera 293, a display screen 294, a subscriber identification module (SIM) card interface 295, and so on.
  • the sensor module 280 may include a touch sensor 280A and a fingerprint sensor 280B. Further, in some embodiments, the sensor module 280 may further include one or more of a gyroscope sensor, an acceleration sensor, a magnetic sensor, a pressure sensor, an air pressure sensor, a distance sensor, a proximity light sensor, a temperature sensor, an ambient light sensor, a bone conduction sensor, and the like.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device.
  • the electronic device may include more or fewer components than shown, combine some components, split some components, or use a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • Processor 210 may include one or more processing units.
  • the processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the processor 210 may be the nerve center and command center of the electronic device.
  • the processor 210 may generate an operation control signal according to the instruction, so as to complete the control of fetching and executing instructions.
  • the processor 210 may be configured to control the audio module 270 to collect audio data corresponding to the dragged interface window, make audio switching decisions, and control the output of the audio data corresponding to the dragged interface window to the corresponding device, etc.
  • a memory may also be provided in the processor 210 for storing instructions and data.
  • the memory in processor 210 is cache memory.
  • the memory may hold instructions or data that have just been used or recycled by the processor 210 . If the processor 210 needs to use the instruction or data again, it can be called directly from memory. Repeated accesses are avoided, and the waiting time of the processor 210 is reduced, thereby improving the efficiency of the system.
  • the processor 210 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the charging management module 240 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 240 may receive charging input from the wired charger through the USB interface 230 .
  • the charging management module 240 may receive wireless charging input through a wireless charging coil of the electronic device. While the charging management module 240 charges the battery 242 , the power management module 241 can also supply power to the electronic device.
  • the power management module 241 is used to connect the battery 242 , the charging management module 240 and the processor 210 .
  • the power management module 241 receives input from the battery 242 and/or the charging management module 240, and supplies power to the processor 210, the internal memory 221, the display screen 294, the camera 293, and the wireless communication module 260.
  • the power management module 241 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 241 may also be provided in the processor 210 .
  • the power management module 241 and the charging management module 240 may also be provided in the same device.
  • the wireless communication function of the electronic device can be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in an electronic device can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 250 can provide a wireless communication solution including 2G/3G/4G/5G etc. applied on the electronic device.
  • the mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 250 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 250 can also amplify the signal modulated by the modulation and demodulation processor, and then convert it into electromagnetic waves for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 250 may be provided in the processor 210 .
  • at least part of the functional modules of the mobile communication module 250 may be provided in the same device as at least part of the modules of the processor 210 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 270A, the receiver 270B, etc.), or displays images or videos through the display screen 294 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 210, and may be provided in the same device as the mobile communication module 250 or other functional modules.
  • the wireless communication module 260 can provide wireless communication solutions applied on the electronic device, including wireless local area network (WLAN) (such as a WiFi network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other solutions.
  • the wireless communication module 260 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 260 receives electromagnetic waves via the antenna 2 , modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 210 .
  • the wireless communication module 260 can also receive the signal to be sent from the processor 210 , perform frequency modulation on the signal, amplify the signal, and then convert it into an electromagnetic wave for radiation through the antenna 2 .
  • the antenna 1 of the electronic device is coupled with the mobile communication module 250, and the antenna 2 is coupled with the wireless communication module 260, so that the electronic device can communicate with the network and other devices through wireless communication technology.
  • Wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the first device and the second device may implement information transmission between the first device and the second device based on a wireless communication technology by means of respective antennas and mobile communication modules.
  • the electronic device realizes the display function through the GPU, the display screen 294, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 294 is used to display images, videos, and the like.
  • Display screen 294 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), or the like.
  • the electronic device may include 1 or N display screens 294, where N is a positive integer greater than 1.
  • the GPU may be used to render the application interface
  • the display screen 294 may be used to display the application interface rendered by the GPU.
  • the GPU of the first device can be used to render the interface a before it is extended and displayed to the second device
  • the display screen of the first device can be used to display the GPU-rendered interface a.
  • when extending the display to the second device, the GPU of the first device can be used to render the interface a and split the interface a into the interface a1 and the interface a2, so that the interface a1 is displayed on the display screen of the first device and the interface a2 is displayed on the display screen of the second device.
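  • The split of interface a into a1 and a2 can be pictured with a small geometric sketch (reusing the hypothetical Rect type from the earlier sketch; again assuming device B extends the desktop to the right of device A):

```kotlin
// Returns the locally drawn part (a1) and the part sent to the second device (a2);
// either part is null when the window lies entirely on one side of the boundary.
fun splitAtBoundary(window: Rect, localDisplay: Rect): Pair<Rect?, Rect?> {
    val a1 = if (window.left < localDisplay.right)
        Rect(window.left, window.top, minOf(window.right, localDisplay.right), window.bottom)
    else null
    val a2 = if (window.right > localDisplay.right)
        Rect(maxOf(window.left, localDisplay.right), window.top, window.right, window.bottom)
    else null
    return a1 to a2
}
```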
  • the electronic device can realize the shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294 and the application processor.
  • the external memory interface 220 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device.
  • the external memory card communicates with the processor 210 through the external memory interface 220 to implement the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 221 may be used to store computer executable program code, which includes instructions.
  • the internal memory 221 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the electronic device.
  • the internal memory 221 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 210 executes various functional applications and data processing of the portable device by executing the instructions stored in the internal memory 221 and/or the instructions stored in the memory provided in the processor.
  • Touch sensor 280A may be referred to as a "touch panel.”
  • the touch sensor 280A may be disposed on the display screen 294, and the touch sensor 280A and the display screen 294 form a touch screen, also called a "touch screen”.
  • the touch sensor 280A is used to detect touch operations on or near it.
  • the touch sensor 280A may communicate the detected touch operation to the application processor to determine the type of touch event.
  • the electronic device may provide visual output, etc., related to touch operations through the display screen 294 .
  • the touch sensor 280A may also be disposed on the surface of the electronic device, at a location different from that of the display screen 294.
  • the touch sensor of the first device may be used to detect the user's drag operation on the interface window on the display screen of the first device.
  • the fingerprint sensor 280B is used to collect fingerprints. Electronic devices can use the collected fingerprint characteristics to unlock fingerprints, access application locks, take photos with fingerprints, and answer incoming calls with fingerprints.
  • the electronic device can implement audio functions, such as music playback and recording, through an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an application processor, and the like.
  • for the specific working principles and functions of the audio module 270, the speaker 270A, the receiver 270B, and the microphone 270C, reference may be made to the introduction in the conventional technology.
  • the audio module 270 may be used to collect audio data corresponding to the dragged window.
  • the speaker 270A can be used to output audio data corresponding to the dragged window.
  • the microphone 270C can be used to collect sounds in the environment, for example, to collect the user's voice during a video call.
  • the keys 290 include a power-on key, a volume key, and the like. The keys 290 may be mechanical keys or touch keys.
  • the electronic device may receive key input and generate key signal input related to user settings and function control of the electronic device.
  • the hardware modules included in the electronic device shown in FIG. 2 are only illustratively described, and do not limit the specific structure of the electronic device.
  • the electronic device may also include other functional modules.
  • the operating system of the electronic device may include, but is not limited to, Symbian, Android, Apple iOS, BlackBerry, Harmony, and other operating systems, which are not limited in this application.
  • FIG. 3 takes the Android operating system as an example to specifically introduce a schematic diagram of a software structure of an electronic device in an embodiment of the present application.
  • the Android operating system may include an application layer, an application framework layer (framework, FWK), a system library, an Android runtime and a kernel layer (kernel).
  • the application layer can provide some core applications.
  • the application program is simply referred to as the application below.
  • Applications in the application layer may include native applications (such as applications installed in the electronic device when the operating system is installed before the electronic device leaves the factory), such as the Camera, Maps, Music, Messages, Gallery, Email, Contacts, and Bluetooth applications shown in Figure 3.
  • Applications in the application layer can also include third-party applications (such as applications downloaded and installed by users through the application store), such as WeChat shown in Figure 3 and video applications.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions. As shown in Figure 3, the application framework layer may include a window manager service (WMS), an activity manager service (AMS), an input manager service (IMS), a resource manager, a notification manager, a view system, etc.
  • WMS is mainly used to manage window programs.
  • the window manager service can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • AMS is mainly used to manage activities, and is responsible for the startup, switching, scheduling of various components in the system, and the management and scheduling of applications.
  • IMS is mainly used to translate and encapsulate original input events, obtain input events containing more information, and send them to the WMS.
  • the WMS stores information such as the clickable areas (such as controls) of each application and the position of the focus window; therefore, the WMS can correctly dispatch input events to the specified control or focus window.
  • the WMS may be used to distribute the received window drag event to a specified control or focus window.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems are primarily used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the view system may be used to construct text controls, picture controls, etc. on the application interface displayed on the electronic device.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll-bar text, such as notifications of applications running in the background, or display notifications on the screen in the form of dialog windows; for example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, or the indicator light flashes.
  • the notification manager may be configured to notify the user that the display is being expanded when the first device detects an operation of the user dragging the interface window to the second device. Further, the notification manager can also be used to notify the user that the extended display has been completed when the dragged interface window is completely extended from the first device to the second device.
  • the resource manager is mainly used to provide various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the resource manager may be used to provide localized strings, icons, pictures, layout files or video files, etc. for the application interface displayed on the electronic device.
  • the system library and Android runtime include the functions that FWK needs to call, the Android core library, and the Android virtual machine.
  • a system library can include multiple functional modules. For example: surface manager, 3D graphics processing library, 2D graphics engine and media library, etc.
  • the kernel layer is the foundation of the Android operating system, and the final functions of the Android operating system are completed through the kernel layer.
  • the kernel layer can include display drivers, input/output device drivers (eg, keyboard, touch screen, headset, speaker, microphone, etc.), device nodes, Bluetooth drivers, camera drivers, audio drivers, and sensor drivers.
  • the user performs input operations through the input device, and the kernel layer can generate corresponding original input events according to the input operations and store them in the device node.
  • the sensor driver can detect the user's sliding operation on the display screen of the electronic device, and when the sliding operation is a drag operation on a certain interface window, the sensor driver reports the window drag event to the IMS of the application framework layer.
  • when the WMS receives the drag event reported by the IMS, it distributes the window drag event to the corresponding application, so as to trigger the application interface window to move on the interface of the electronic device following the user's drag.
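  • The event flow just described (kernel driver -> IMS -> WMS -> application) can be summarized with simplified stand-ins; these are not the real Android framework classes, only sketches of their roles:

```kotlin
data class RawInputEvent(val x: Int, val y: Int, val action: String)
data class WindowDragEvent(val focusWindow: String, val x: Int, val y: Int)

class Wms {
    // Dispatch the enriched event to the application owning the focus window,
    // which then moves its window to (x, y).
    fun dispatch(e: WindowDragEvent) { /* deliver to the focus window's app */ }
}

class Ims(private val wms: Wms) {
    // Translate/encapsulate the raw event read from the device node and hand it to WMS.
    fun onRawEvent(e: RawInputEvent, focusWindow: String) {
        if (e.action == "MOVE") wms.dispatch(WindowDragEvent(focusWindow, e.x, e.y))
    }
}
```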
  • FIG. 3 only takes the Android system with a layered architecture as an example to introduce a software structure of an electronic device.
  • the present application does not limit the specific architecture of the software system of the electronic device (including the first device and the second device), and for the specific introduction of the software system of other architectures, reference may be made to conventional technologies.
  • the distributed audio playback method provided by the embodiments of the present application is applied in the process of multi-device distributed display.
  • the distributed display of multiple devices in this embodiment of the present application may be implemented based on the user dragging the application window. For example, based on the user's operation of dragging the window of an interface displayed on the first device (such as the interface a displayed on device A shown in (b) of FIG. 1) to the second device (such as device B shown in (b) of FIG. 1), the dragged interface on the first device is extended and displayed on the second device.
  • the first part of the interface a (the interface a1 shown in (a) of FIG. 1) is displayed on the display screen of the first device (i.e., device A), and the second part of the interface a (the interface a2 shown in (a) of FIG. 1) is displayed on the display screen of the second device (i.e., device B).
  • the distributed audio playback method can implement adaptive switching of audio according to the distributed display situation of the interface. Specifically, when the user has not dragged the interface window, or has not yet dragged the interface window out of the display screen range of the first device, only the first device plays the audio corresponding to the interface.
  • when part of the interface window has been dragged onto the second device, the audio corresponding to the interface is switched to the second device, and the second device and the first device play the audio corresponding to the interface synchronously in a distributed manner.
  • when the interface window has been completely dragged onto the second device, the audio corresponding to the interface is switched from the first device to the second device, and the first device no longer plays the audio corresponding to the interface.
  • the first device can adaptively adjust the respective volumes at which the first device and the second device synchronously play the audio corresponding to the interface, according to the proportions in which the first part of the interface (the interface a1 shown in (a) of FIG. 1) and the second part (the interface a2 shown in (a) of FIG. 1) are displayed on the display screens of the first device and the second device respectively.
  • the distributed audio playback method provided by the embodiment of the present application may be implemented based on the framework shown in FIG. 4 .
  • the distributed audio playback framework 400 provided by this embodiment of the present application may include a monitoring module 410 , an audio output control module 420 , a virtual audio module 430 , and an audio playback module 440 .
  • the monitoring module 410 is mainly used to monitor the positions of all interface windows displayed on the first device, so as to detect in time a window drag event in which an application window is dragged by the user to the second device; and, when detecting such a window drag event, to send the window drag event to the audio output control module 420, so that the audio output control module 420 intercepts (e.g., hooks) the window drag event and triggers the virtual audio module 430 to collect audio data corresponding to the dragged interface window according to the window drag event.
  • the monitoring module 410 may only detect the position of the focus window, that is, the position of the window that the user operated last time, or the position of the foreground window.
  • interception refers to catching and monitoring an event before it is transmitted to its destination. For example, before the event reaches its end point, the module can hook the event, process it in time, and respond to it in time.
  • the audio output control module 420 may respond to the window drag event as soon as the window drag event is intercepted.
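  • A minimal sketch of this hook behaviour (reusing the hypothetical WindowDragEvent type from the earlier sketch; the module classes here are illustrative, not the patent's actual implementation):

```kotlin
fun interface DragEventHook { fun onDrag(e: WindowDragEvent) }

class MonitoringModule {
    private val hooks = mutableListOf<DragEventHook>()
    fun addHook(h: DragEventHook) { hooks += h }
    // Hooks run as soon as a drag event is observed, before normal delivery completes.
    fun report(e: WindowDragEvent) = hooks.forEach { it.onDrag(e) }
}

class AudioOutputControlModule(monitor: MonitoringModule) {
    init {
        monitor.addHook { e ->
            // Respond immediately: trigger virtual-audio capture for the dragged
            // window and start the audio switching decision for position (e.x, e.y).
        }
    }
}
```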
  • the monitoring module 410 is also used to monitor the audio states corresponding to all interfaces with accompanying audio displayed on the first device, such as playing, paused, muted, or exited, so that a specific strategy for distributed playback of the audio on the first device and the second device can be determined according to the specific audio state.
  • For example, in the scene shown in (b) of FIG. 1, when the audio state is playing, the first device switches the audio corresponding to the interface from the first device to the second device for playback; when the audio state is muted, paused, or exited, the first device gives up switching the audio corresponding to the interface from the first device to the second device.
  • a part of the monitoring module 410 may be located in the sensor module of the electronic device (the sensor module 280 shown in FIG. 2), and a part of the monitoring module 410 may be located in the audio module of the electronic device (the audio module 270 shown in FIG. 2).
  • the functions of the monitoring module 410 may be partially implemented by the sensor module of the electronic device (the sensor module 280 shown in FIG. 2 ) and partially implemented by the audio module of the electronic device (the audio module 270 shown in FIG. 2 ).
  • the sensor module of the electronic device is used to monitor the positions of all interface windows displayed on the first device and to send the window drag event to the audio output control module 420 when detecting a window drag event in which an application window is dragged by the user to the second device.
  • the audio module of the electronic device is used to monitor the audio states corresponding to all interfaces accompanied with audio information displayed on the first device, such as playing, pausing, muting, or exiting.
  • the monitoring module 410 may be located in the processor of the electronic device (the processor 210 shown in FIG. 2 ).
  • the monitoring module 410 of the first device may receive a window drag event, detected by the sensor module of the first device (the sensor module 280 shown in FIG. 2), in which an application window is dragged by the user to the second device, and send the window drag event to the audio output control module 420.
  • the monitoring module 410 of the first device may also receive audio states corresponding to all interfaces accompanied with audio information displayed on the first device and detected by the audio module (the audio module 270 shown in FIG. 2 ) of the first device.
  • the present application does not limit the specific setting manner of the monitoring module 410 in the electronic device (eg, the first device).
  • the audio output control module 420 is mainly used to intercept (e.g., hook) the window drag event from the monitoring module 410 when the monitoring module 410 detects that the user drags the window of an application interface on the first device to the second device.
  • the window drag event triggers the virtual audio module 430 to collect audio data corresponding to the dragged interface window.
  • the window drag event also carries information such as the position to which the window is dragged.
  • the audio output control module 420 is also used to make audio switching decisions according to the position to which the window is dragged, such as whether to switch the audio from the first device to the second device, and how to switch the audio (such as distributed synchronous playback or complete switching, etc.).
  • the audio output control module 420 is further configured to control the switching of the audio data corresponding to the dragged interface window between the first device and the second device after determining how to switch the audio, for example, to control the output of the audio data corresponding to the dragged interface window to the corresponding device.
  • the audio output control module 420 is further configured to control the volume of the audio corresponding to the dragged interface window played on the first device and/or the second device. For example, by sending a control instruction to the first device and/or the second device, the volume of the audio corresponding to the dragged interface window played on the first device and/or the second device is controlled.
  • the audio output control module 420 may be located in a processor of the electronic device (such as the processor 210 shown in FIG. 2 ).
  • the audio output control module 420 may be located in a controller in the processor 210.
  • the audio output control module 420 may be located in the processor 210 independently of the controller in the processor 210 shown in FIG. 2 . This application does not limit the specific setting manner of the audio output control module 420 in the electronic device (eg, the first device).
  • the virtual audio module 430 is mainly used to collect the audio data corresponding to the interface when the monitoring module 410 detects that the interface window on the first device is being dragged by the user to the second device, so that after the audio output control module 420 makes the audio switching decision according to the position to which the window is dragged, the audio data corresponding to the dragged interface window collected by the virtual audio module 430 can be output to the corresponding device under the control of the audio output control module 420.
  • the virtual audio module may be located in an audio module of the electronic device (the audio module 270 shown in FIG. 2 ).
  • the audio module 270 shown in FIG. 2 may include a first audio module and a second audio module.
  • the first audio module is a conventional audio module, and the second audio module is the virtual audio module 430 .
  • in another possible structure, the virtual audio module may also be independent of the audio module 270 shown in FIG. 2 .
  • in this case, a virtual audio module 430 needs to be added to the hardware structure diagram shown in FIG. 2 , and the audio module 270 shown in FIG. 2 is then the conventional audio module. This application does not limit the specific setting manner of the virtual audio module in the electronic device (eg, the first device).
  • the conventional audio module is mainly used to input the audio data corresponding to the dragged interface window to the audio playback module 440 of the first device for playback, and to adjust, according to instructions from the audio output control module 420, the volume at which the audio playback module 440 plays the audio data corresponding to the dragged interface window. Further, the conventional audio module can also be used to input the audio data corresponding to interface windows displayed on the first device that are not dragged to the audio playback module 440 of the first device for playback.
  • the audio playback module 440 is mainly used for receiving audio data from conventional audio modules and playing audio.
  • the audio data from the conventional audio module may be audio data corresponding to the dragged interface window, or may be audio data corresponding to the interface window displayed on the first device that is not dragged.
  • the audio playback module 440 is further configured to control the audio playback volume correspondingly according to the volume control instruction of the conventional audio module.
  • the audio playback module 440 may be the speaker 270A shown in FIG. 2 .
  • FIG. 4 is only an example of a distributed audio playback framework, in which the monitoring module 410, the audio output control module 420, the virtual audio module 430 and the audio playback module 440 are only used as a possible module division.
  • the distributed audio playback framework in the embodiments of this application may have more or fewer modules than those shown in FIG. 4 , may combine two or more modules, or may have a different module division.
  • the various modules shown in FIG. 4 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing or application-specific integrated circuits.
  • the following will take the first device being a notebook computer, the second device being a tablet computer, and the dragged interface window being a video application window, and use the extended scene that changes state from FIG. 5A → FIG. 5B → FIG. 5C as an example.
  • on this basis, a distributed audio playback method provided by the embodiments of this application is introduced in detail.
  • the first device displays a video application window 501 shown in FIG. 5A , wherein the video application window 501 includes a video playback interface 502 .
  • in response to the user's operation of dragging the video application window 501 from the video playback state shown in FIG. 5A to the extended display state shown in FIG. 5B , the first device and the second device display the video playback interface 502 across the devices in a spliced, distributed manner.
  • in the state shown in FIG. 5B , the audio corresponding to the video playback interface 502 is controlled by the first device to be played on the first device and the second device in a distributed and synchronous manner.
  • further, in response to the user's operation of dragging the video application window 501 from the extended display state shown in FIG. 5B to the extended display state shown in FIG. 5C , the first device transfers the video playback interface 502 to the second device.
  • in the extended display state shown in FIG. 5C , the audio corresponding to the video playback interface 502 is controlled by the first device to be completely switched from the first device to the second device.
  • in some embodiments, in response to the user's operation of dragging the video application window 501 from the video playing state shown in FIG. 5A to the extended display state shown in FIG. 5B , the first device can also control the respective volumes at which the first device and the second device each play the audio corresponding to the video playing interface 502 while the two devices play that audio in a distributed and synchronous manner.
  • a distributed audio playback method may include the following steps S601-S606:
  • S601: the first device displays the video playing interface 502 and synchronously plays the audio corresponding to the video playing interface 502 .
  • the video playing interface 502 is located in the video application window 501 .
  • here, that the first device displays the video playback interface 502 and synchronously plays the audio corresponding to the video playback interface 502 specifically means that the first device plays the audio corresponding to the video while playing the video in the video playback interface 502 .
  • the first device may use a conventional method to play the video and the audio corresponding to the video in the video playback interface 502 .
  • for example, the first device can obtain the video data corresponding to the video playback interface 502 from the video application through the video driver of the first device and send it to the display screen to play the video through the display screen; synchronously, the first device can acquire the audio data corresponding to the video playback interface 502 from the video application through the conventional audio driver of the first device (such as the audio driver corresponding to the conventional audio module described above) and play the audio through the speaker.
  • S602: the first device acquires real-time location information of the video application window 501 when detecting an operation of dragging the video application window 501 by the user.
  • the first device may detect a user's sliding operation on the display screen of the first device through the sensor driver.
  • when the sliding operation is a drag operation on an interface window (such as the video application window 501 shown in FIG. 5A ), the first device acquires the real-time location information of the video application window 501 .
  • the real-time position information of the video application window 501 is the real-time coordinate information of the video application window 501 in the preset coordinate system.
  • the preset coordinate system may be a preset coordinate system of the first device, a world coordinate system, or a ground coordinate system, etc., which is not limited in this application.
  • the preset coordinate system of the first device may be a two-dimensional coordinate system corresponding to the display screen of the first device.
  • taking the notebook computer shown in FIG. 5A as an example, the two-dimensional coordinate system may be the coordinate system in which, with the notebook computer in the state shown in FIG. 5A , the lower-left corner of the notebook computer is the coordinate origin O, the short lower edge is the x-axis, and the long left edge is the y-axis.
  • during the process of the video application window 501 being dragged by the user, the first device continuously acquires the real-time location information of the video application window 501 .
  • by doing so, the first device can accurately capture the timing at which to extend the display of the video playback interface 502 to the second device, so that it can respond in time.
  • when the first device detects the user's operation of dragging the video application window 501, the first device can monitor the position of the video application window 501 in real time through the monitoring module 410, so that the location information of the video application window 501 can be acquired in real time through the sensor driver.
  • the monitoring module 410 of the first device monitors the sensor driver in real time so as to intercept (eg hook) window drag events in time. As shown in FIG. 7 , after detecting a window drag event, the monitoring module 410 intercepts the window drag event. Exemplarily, during the process of the window being dragged, the monitoring module 410 acquires the window process information of the dragged window (eg, the video application window 501 ) in real time.
  • the window process information includes position information (position) of the window.
  • the monitoring module 410 can acquire the following position information of the dragged window (such as the video application window 501 ) in real time through the sensor driver: the coordinates of the upper-left corner of the dragged window in the preset coordinate system (eg left_up_x, left_up_y), and the coordinates of the lower-right corner of the dragged window in the preset coordinate system (eg right_down_x, right_down_y).
  • alternatively, the monitoring module 410 can acquire the following position information of the dragged window (such as the video application window 501 ) in real time through the sensor driver: the coordinates of the lower-left corner of the dragged window in the preset coordinate system (eg left_down_x, left_down_y), and the coordinates of the upper-right corner of the dragged window in the preset coordinate system (eg right_up_x, right_up_y).
  • further, as shown in FIG. 7 , the window process information of the dragged window may also include information such as a window identifier (eg, pid), a browser identifier (eg, parentId), and a path (eg, path).
  • as shown in FIG. 7 , during the process of the window being dragged, the monitoring module 410 can also obtain such window process information of the dragged window, including its window identifier (eg pid), browser identifier (eg parentId), and path (eg path).
  • when multiple tabs in a browser are playing audio, the browser identifier can be used to filter out, from the multiple tabs, the tab that is playing the audio. A sketch of the window process information follows.
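  • As an illustration only, the window process information and the over-the-edge test described above might be sketched as follows; the structure, names, and sample values here (WindowProcessInfo, crosses_right_edge, the pid 4242, the path, and the coordinates) are hypothetical assumptions of this sketch and are not part of this application:

```python
from dataclasses import dataclass

@dataclass
class WindowProcessInfo:
    pid: int             # window identifier of the dragged window
    parent_id: int       # browser identifier (0 if not a browser tab)
    path: str            # file path of the owning application
    left_up_x: float     # upper-left corner in the preset coordinate system
    left_up_y: float
    right_down_x: float  # lower-right corner in the preset coordinate system
    right_down_y: float

def crosses_right_edge(w: WindowProcessInfo, screen_width: float) -> bool:
    """True if part of the window lies beyond the right edge of the first
    device's display, i.e. inside the virtual display area that the first
    device draws and renders for the second device."""
    return w.right_down_x > screen_width and w.left_up_x < screen_width

# a window 800 px wide dragged so 500 px hang past a 1920 px wide screen
w = WindowProcessInfo(4242, 0, "C:/apps/video.exe",
                      1620.0, 200.0, 2420.0, 800.0)
print(crosses_right_edge(w, 1920.0))  # True
```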
  • S603: when detecting that the video application window 501 crosses the edge of the display screen of the first device, the first device acquires the audio state corresponding to the video playback interface 502 and determines the first part and the second part of the video playback interface 502 according to the real-time position of the video application window 501 .
  • that the first device detects that the video application window 501 crosses the edge of the display screen of the first device means that part of the video application window 501 crosses the edge of the display area corresponding to the display screen of the first device, and a part of the video application window 501 is displayed in the display area corresponding to the display screen of the second device.
  • drawing and rendering for the display area corresponding to the display screen of the second device are also performed on the first device; this area can be a virtual display area set by the first device, that is, an area not displayed on the first device but usable for application drawing, rendering, and the like.
  • the audio state corresponding to the video playback interface 502 may include, but is not limited to, any of the following: playing, paused, muted, or quit playing.
  • the first device may monitor the audio state corresponding to the video playback interface 502 in real time through the monitoring module 410, so as to update the state corresponding to the audio session in real time when the audio state changes.
  • the monitoring module 410 of the first device monitors the session notification unit in real time to obtain the audio state corresponding to the interface in the dragged window, such as playing (start), paused (pause/stop), muted (mute), and exited (exit) as shown in FIG. 7 .
  • exemplarily, the monitoring module 410 of the first device maintains an audio session list (audio session list).
  • the audio session list includes the session information of all audio sessions currently running on the first device, such as a session window identifier (eg, pid), a session browser identifier (eg, parentId), a session state (eg, state), and a session path (eg, path).
  • the session window identifier refers to the identifier of the window where the video playback interface 502 corresponding to the audio session is located.
  • the session browser identifier refers to the identifier of the browser where the video playback interface 502 corresponding to the audio session is located.
  • the session state includes an active state and an inactive state.
  • if the audio state is playing (start) or muted (mute), the state of the audio session is active; if the audio state is paused (pause/stop) or exited (exit), the state of the audio session is inactive.
  • the session path refers to the file path corresponding to the audio session, such as the application path.
  • the session browser identifier can be used to identify the audio corresponding to the tab page dragged by the user when there are multiple tab pages playing audio in the browser.
  • if multiple audio sessions run on the first device at the same time, the audio session list includes the session information of the multiple audio sessions.
  • in this case, the monitoring module 410 of the first device may associate the session information in the audio session list with the acquired window process information and determine the correspondence between the dragged window and an audio session. That is, it determines which audio session the dragged window corresponds to.
  • exemplarily, as shown in FIG. 7 , if the session window identifier of an audio session in the audio session list equals the window identifier of the dragged window in the window process information, or the browser identifier of an audio session equals the window identifier or the browser identifier of the dragged window, or the session path of an audio session equals the path of the dragged window in the window process information, it can be determined that the audio session corresponds to the dragged window. A sketch of this matching follows.
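  • A minimal sketch of this association, under the assumption of hypothetical field values (none of the identifiers or paths below come from this application): the window identifier is compared first, then the browser identifier, then the path.

```python
def find_session(window, session_list):
    """Return the audio session from the audio session list that matches
    the dragged window's process information, or None if no session
    corresponds to the window."""
    for session in session_list:
        if session["pid"] == window["pid"]:
            return session
        if window.get("parentId") and session.get("parentId") == window["parentId"]:
            return session
        if session["path"] == window["path"]:
            return session
    return None

session_list = [
    {"pid": 101,  "parentId": None, "state": "active", "path": "C:/apps/music.exe"},
    {"pid": 4242, "parentId": None, "state": "active", "path": "C:/apps/video.exe"},
]
window = {"pid": 4242, "parentId": None, "path": "C:/apps/video.exe"}
print(find_session(window, session_list)["pid"])  # 4242
```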
  • after determining the first part and the second part of the video playback interface 502, the monitoring module 410 of the first device may send this information to the audio output control module 420, for the audio output control module 420 to further determine the expansion mode and expansion ratio of the extended display from the first device to the second device. This part is introduced in detail below.
  • the first part of the video playing interface 502 is used for displaying on the display screen of the first device, and the second part of the video playing interface 502 is used for displaying on the display screen of the second device.
  • for the extended display state shown in FIG. 5B , the first part of the video playback interface 502 may be the video playback interface 502-1 in FIG. 5B , and the second part of the video playback interface 502 may be the video playback interface 502-2 in FIG. 5B .
  • the first device may calculate the specific sizes at which the video playback interface 502 is displayed across devices according to the specific real-time position of the video application window 501 on the display screen of the first device, and determine the first part and the second part of the video playback interface 502 according to the calculated sizes in combination with the configuration parameters of the video playback interface 502 .
  • the configuration parameters of the video playback interface 502 may include but are not limited to the controls displayed on the video playback interface 502 (icons, text, etc., and the specific display position and/or size of each icon, text, etc.); for the configuration parameters of an application interface, reference may be made to conventional technologies, which are not limited in this application.
  • as shown in FIG. 8 , assume the video application window 501 has width L and height H, and the distance between the left edge of the video application window 501 and the right edge of the display screen of the first device is x1; then the distance between the right edge of the video application window 501 and the left edge of the display screen of the second device is x2 = L - x1. After obtaining the specific cross-device display sizes x1 and x2, the first device can combine the display position and size of each icon, text, etc. on the video playback interface 502 to split the video playback interface 502 into the first part and the second part.
  • the first part includes the interface configuration parameters on the video playback interface 502 within the x1 × H range; the second part includes the interface configuration parameters on the video playback interface 502 within the x2 × H range. The geometric part of this split is sketched below.
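  • A simplified illustration with hypothetical numbers; only the geometric computation is shown, not the per-control interface configuration parameters:

```python
def split_widths(window_left_x: float, screen_width: float, window_width: float):
    """Return (x1, x2): x1 is the width of the first part that stays on the
    first device, and x2 = L - x1 is the width of the second part shown on
    the second device, following FIG. 8."""
    x1 = max(0.0, min(screen_width - window_left_x, window_width))
    x2 = window_width - x1
    return x1, x2

# a window of width L = 800 whose left edge sits at 1620 on a 1920 px screen
x1, x2 = split_widths(1620.0, 1920.0, 800.0)
print(x1, x2)  # 300.0 500.0 -> first part covers x1 x H, second part x2 x H
```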
  • S604: the first device collects audio data corresponding to the video playback interface 502 .
  • the first device may collect audio data corresponding to the video playback interface 502 through the virtual audio module 430 shown in FIG. 4 .
  • the first device may control the virtual audio module 430 through the audio output control module 420 to collect the audio data corresponding to the interface (that is, the video playback interface 502 ) in the interface window (that is, the video application window 501 ) dragged by the user.
  • for example, the audio output control module 420 may send a data collection instruction to the virtual audio module 430 that carries the window identifier corresponding to the video application window 501, the interface identifier corresponding to the video playback interface 502, or the application identifier corresponding to the video application, etc., for the virtual audio module 430 to collect the corresponding audio data according to the identifier.
  • it should be noted that the examples shown in FIG. 5A, FIG. 5B, and FIG. 5C involve one audio session of one application interface (that is, the audio session corresponding to the video playback interface 502 ) running on the first device; therefore, in step S604, the first device may collect only the audio data corresponding to the video playback interface 502 through the virtual audio module 430. In the case where multiple audio sessions run on the first device, in step S604 the first device needs to collect, through the virtual audio module 430, the audio data corresponding to all application interfaces playing on the first device.
  • assume audio sessions corresponding to application 1, application 2, ..., application N run on the first device (where N is a positive integer, N ≥ 3); in this case, as shown in FIG. 9 , the audio data collected by the virtual audio module 430 is a mixed stream of audio stream 1 (eg, session1), audio stream 2 (eg, session2), ..., and audio stream N (eg, sessionN).
  • audio stream 1 is the audio data corresponding to application 1, audio stream 2 is the audio data corresponding to application 2, ..., and audio stream N is the audio data corresponding to application N.
  • the virtual audio module 430 can collect the above-mentioned audio data from the endpoint buffer.
  • the first device may control the virtual audio module 430 through the audio output control module 420 to filter the audio data corresponding to the dragged window (such as audio stream I) out of the mixed stream and transmit it separately to the audio output control module 420 .
  • the audio output control module 420 can instruct the virtual audio module 430 to filter out the audio data corresponding to the session information from the mixed stream by sending a filtering instruction carrying the session information of the audio session corresponding to the dragged window to the virtual audio module 430 .
  • as shown in FIG. 9 , the virtual audio module 430 transmits audio stream I to the audio output control module 420 through one signal path, and transmits audio streams 1 to (I-1) and audio streams (I+1) to N to the audio output control module 420 through another signal path. A sketch of this filtering step follows.
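  • A sketch of the filtering, assuming hypothetical per-session streams keyed by pid (the data layout is illustrative, not the application's actual format):

```python
def filter_dragged_stream(mixed, dragged_pid):
    """Split the collected mix into audio stream I (the dragged window's
    session) and the remaining streams 1..(I-1), (I+1)..N, mirroring the
    two signal paths handed to the audio output control module."""
    stream_i = [s for s in mixed if s["pid"] == dragged_pid]
    others = [s for s in mixed if s["pid"] != dragged_pid]
    return stream_i, others

mixed = [{"pid": pid, "samples": [0.0]} for pid in (101, 4242, 307)]
stream_i, others = filter_dragged_stream(mixed, 4242)
print(len(stream_i), len(others))  # 1 2
```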
  • S605: the first device sends the first extended data to the second device according to the real-time position of the video application window 501 .
  • the first extended data includes first video data and first audio data.
  • the first video data is the video data corresponding to the second part of the video playing interface 502 , and the first audio data includes the audio stream corresponding to the video playing interface 502 .
  • further, the first audio data in the first extended data may also include the volume (eg, volume 2) at which the second device plays the audio stream corresponding to the video playback interface 502, which is used to instruct the second device to play the second part of the video playback interface 502 at volume 2.
  • the audio output control module 420 of the first device may determine an extended mode of extended display from the first device to the second device according to the first part and the second part of the video playback interface 502 sent by the monitoring module 410 .
  • the expansion mode may include left-right expansion and up-down expansion.
  • left-right expansion refers to expansion in a lateral movement manner, that is, the expansion shown in FIG. 8 .
  • up-down expansion refers to expansion in a vertical movement manner.
  • it should be noted that, if the expansion mode of the extended display from the first device to the second device is up-down expansion, the monitoring module 410 needs to obtain the distance y1 between the upper edge of the video application window 501 and the lower edge of the display screen of the first device, and the distance y2 between the lower edge of the video application window 501 and the upper edge of the display screen of the second device, where y1 + y2 equals the height H of the video application window 501 .
  • the embodiments of the present application take the left-right expansion as an example; for the specific implementation process of the up-down expansion, reference may be made to the specific implementation process of the left-right expansion.
  • exemplarily, when one part of the video playback interface 502 is on the first device and the other part is on the second device, for example when x1 > 0 and x2 > 0 as shown in FIG. 9 , the first video data in the first extended data sent by the first device to the second device may specifically include the interface configuration parameters corresponding to the second part of the video playback interface 502; the first audio data in the first extended data may specifically include the audio stream corresponding to the video playback interface 502 (audio stream I shown in FIG. 9 ) and the volume (volume 2 shown in FIG. 9 ).
  • the first extended data is used for the second device and the first device to display the video playback interface 502 in a distributed manner across the devices in a splicing manner, and to synchronously play the audio corresponding to the video playback interface 502 .
  • the first device may send the first extended data to the audio channel of the second device to instruct the second device and the first device to display the video playback interface 502 in a distributed manner across the devices in a splicing manner, and The audio corresponding to the video playback interface 502 is played synchronously.
  • the aforementioned volume 2 may be determined by the first device according to the volume (eg, volume 1) at which the first device currently plays the audio corresponding to the video playback interface 502 .
  • for example, volume 2 = volume 1.
  • in other embodiments, the first device may further control, based on the expansion ratio of the extended display from the first device to the second device, volume 1 and volume 2 at which the first device and the second device respectively play the audio corresponding to the video playback interface 502 .
  • the audio output control module 420 of the first device may determine the expansion ratio of the first device to the second device according to the first part and the second part of the video playback interface 502 sent by the monitoring module 410 .
  • for the left-right expansion mode shown in FIG. 8 , the expansion ratio refers to x1/x2.
  • for the up-down expansion mode, the expansion ratio refers to y1/y2.
  • as shown in FIG. 9 , the audio output control module 420 of the first device can calculate, according to the expansion ratio of the extended display from the first device to the second device, volume 1 at which the first device plays audio stream I and volume 2 at which the second device plays audio stream I.
  • here, volume 1 / volume 2 = x1/x2.
  • exemplarily, assuming that in the state shown in FIG. 5A the first device plays the video playback interface 502 at an initial volume (eg, volume 0), then volume 1 = volume 0 × x1/L and volume 2 = volume 0 × x2/L, where L is the width of the video application window 501 . This proportional rule is sketched below.
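  • In code, the proportional rule amounts to the following; the values are illustrative, and expressing volume 0 as a percentage is an assumption of this sketch:

```python
def split_volumes(volume0: float, x1: float, window_width: float):
    """volume1 = volume0 * x1/L and volume2 = volume0 * x2/L, so that
    volume1 / volume2 = x1 / x2."""
    x2 = window_width - x1
    return volume0 * x1 / window_width, volume0 * x2 / window_width

# initial volume 80 (percent); 300 px of an 800 px wide window stay on device 1
volume1, volume2 = split_volumes(80.0, 300.0, 800.0)
print(volume1, volume2)  # 30.0 50.0 (volume1/volume2 = 300/500 = x1/x2)
```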
  • S606: the first device displays the first part of the video playback interface 502, and the second device displays the second part of the video playback interface 502; the first device and the second device play the audio corresponding to the video playback interface 502 in a distributed and synchronous manner.
  • the first device and the second device play the audio corresponding to the video playback interface 502 in a distributed and synchronous manner, as shown in FIG. 5B , the first device plays the audio corresponding to the video playback interface 502 with volume 1, and the second device plays the audio corresponding to the video playback interface 502 with volume 2 Audio corresponding to the video playback interface 502 .
  • in some embodiments, the audio playback module 440 of the first device and the audio playback module of the second device can synchronously play the audio corresponding to the video playback interface 502 at the same volume, that is, volume 1 = volume 2.
  • in other embodiments, as shown in FIG. 9 , when x1 > 0 and x2 > 0, the audio playback module 440 of the first device and the audio playback module of the second device can synchronously play the audio corresponding to the video playback interface 502 at volume 1 and volume 2, respectively.
  • here, volume 1 / volume 2 = x1/x2; exemplarily, volume 1 = volume 0 × x1/L and volume 2 = volume 0 × x2/L, where L is the width of the video application window 501 and volume 0 is the initial volume at which the first device plays the video playback interface 502 in the state shown in FIG. 5A .
  • it should be noted that, in the case where multiple audio sessions run on the first device, the audio playback module 440 of the first device needs to play the audio corresponding to the video playback interface 502 at volume 1 while simultaneously playing the audio corresponding to the other audio sessions (that is, the audio corresponding to audio streams 1 to (I-1) and audio streams (I+1) to N shown in FIG. 9 ).
  • for this case, as shown in FIG. 10 , the audio channel of the second device sets volume 2 for playing audio stream I through a player (eg, player) and then plays the audio through the speaker.
  • the audio playback module 440 of the first device sets volume 1 for playing audio stream I through a player (eg, player), mixes the volume-adjusted audio stream I with audio streams 1 to (I-1) and audio streams (I+1) to N, and then plays the mixed audio stream through the speaker.
  • in the case where a single audio session (such as the audio session corresponding to the video playback interface 502 ) runs on the first device, after determining the audio stream corresponding to the audio session filtered out by the virtual audio module 430 (audio stream I shown in FIG. 11 ), the audio output control module 420 of the first device instructs the audio playback module 440 of the first device and the audio playback module of the second device to play the audio corresponding to the video playback interface 502 at volume 1 and volume 2, respectively.
  • for this case, as shown in FIG. 11 , the audio channel of the second device sets volume 2 for playing audio stream I through a player (eg, player) and then plays the audio through the speaker.
  • the audio playback module 440 of the first device sets volume 1 for playing audio stream I through a player (eg, player) and then plays the audio directly through the speaker. A sketch of the two playback paths follows.
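  • The difference between FIG. 10 and FIG. 11 on the first device amounts to whether a mixing step precedes the speaker. A rough sketch, with hypothetical sample data and a gain on a 0-1 scale:

```python
def apply_gain(stream, gain):
    """Set the per-stream playback volume (volume 1 for audio stream I)."""
    return [sample * gain for sample in stream]

def mix(streams):
    """Mix equal-length streams sample by sample (clipping omitted)."""
    return [sum(samples) for samples in zip(*streams)]

stream_i = [0.5, -0.5, 0.25]                   # dragged window's stream
others = [[0.1, 0.1, 0.1], [0.0, 0.2, -0.1]]   # streams 1..(I-1), (I+1)..N

# multiple sessions (FIG. 10): gain on stream I, then mix before the speaker
print(mix([apply_gain(stream_i, 0.4)] + others))
# single session (FIG. 11): gain on stream I, played directly
print(apply_gain(stream_i, 0.4))
```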
  • further, as the user continues to drag the video application window 501, when the video application window 501 changes from the state shown in FIG. 5B to the state shown in FIG. 5C , the distributed audio playback method may further include the following steps S607-S608:
  • S607: when detecting that the video application window 501 is dragged out of the edge of the display screen of the first device, the first device sends second extended data to the second device. The second extended data includes second video data and second audio data.
  • when the video application window 501 changes from the state shown in FIG. 5B to the state shown in FIG. 5C , the first device completely transfers the video application window 501 and the audio corresponding to the video playback interface 502 to the second device.
  • the second video data is the video data corresponding to the video playing interface 502 , and the second audio data includes the audio stream corresponding to the video playing interface 502 .
  • further, the second audio data in the second extended data may also include the volume (eg, volume 3) at which the second device plays the audio stream corresponding to the video playback interface 502, which is used to instruct the second device to play the video playback interface 502 at volume 3.
  • the first device may send the second extended data to the audio channel of the second device to instruct the second device to display the video playback interface 502 and synchronously play the audio corresponding to the video playback interface 502 .
  • S608: the second device displays the video application window 501 and plays the audio corresponding to the video playback interface 502 displayed in the video application window 501 .
  • the second device may play audio corresponding to the video playback interface 502 at a default volume.
  • the default volume is the volume currently set by the audio playback module of the second device.
  • the second device may play the audio corresponding to the video playback interface 502 at volume 3.
  • here, volume 3 = volume 0, where volume 0 is the initial volume at which the first device plays the video playback interface 502 in the state shown in FIG. 5A .
  • exemplarily, the audio channel of the second device may set volume 3 for playing audio stream I through a player (eg, player) and then play the audio through the speaker.
  • with the distributed audio playback method provided by the embodiments of this application, in the process of extending the display of an interface including audio to the second device, the first device can switch the audio automatically and adaptively along with the distributed display of the interface. For example, when the user has not yet dragged the interface window, or has not yet dragged the interface window out of the display screen range of the first device, only the first device plays the audio corresponding to the interface. When the user drags the interface window to a state where a part of the interface window is displayed on the display screen of the first device and the other part on the display screen of the second device, the audio corresponding to the interface is played in a distributed and synchronous manner. When the user drags the interface window completely into the display screen range of the second device, only the second device plays the audio corresponding to the interface. This reduces the user's audio setting operations during extended display and improves user experience.
  • further, in some embodiments, when the user drags the interface window to a state in which the first part is displayed on the display screen of the first device and the second part is displayed on the display screen of the second device, the first device may also control the respective volumes at which the first device and the second device synchronously play the audio corresponding to the interface.
  • for example, according to the specific proportions of the first part and the second part, the first device and the second device are controlled to gradually change their volumes while synchronously playing the audio corresponding to the interface as the interface window moves.
  • for example, as the interface window extends from the first device to the second device, the volume of the first device becomes smaller and smaller, and the volume of the second device becomes larger and larger. In this way, the change of volume automatically adapts to the change of the interface extension, providing a better user experience.
  • in some embodiments of this application, the channel between the first device and the second device may be a single channel used to transmit audio, video, control signaling, parameters, and the like; that is, all interaction data is exchanged over this channel and must conform to its data format.
  • the channel between the first device and the second device may also be multiple channels, each used for one or more of audio transmission, video transmission, control signaling transmission, parameter transmission, and the like. An illustrative message layout follows.
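  • For illustration, an extended-data message of the kind described in S605/S607 could be serialized as below; the JSON layout and every field name here are assumptions of this sketch, not the application's actual wire format:

```python
import json

def make_extended_data(interface_params, session_id, volume):
    """Bundle the video data (interface configuration parameters of the
    part to display remotely) with the audio stream reference and the
    playback volume for the receiving device."""
    return json.dumps({
        "video": interface_params,
        "audio": {"session": session_id, "volume": volume},
    })

msg = make_extended_data({"region": [500, 450], "controls": ["play", "seek"]},
                         session_id=4242, volume=50.0)
print(msg)
```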

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of this application provide a distributed audio playback method and an electronic device, applicable to a first device and a second device. The audio corresponding to an application window on the first device can be adaptively allocated between, and played on, the first device and the second device according to the position of the application window, improving the user's audio experience when using multiple devices.

Description

Distributed audio playback method and electronic device
This application claims priority to Chinese Patent Application No. 202110222215.8, filed with the China National Intellectual Property Administration on February 28, 2021 and entitled "Distributed audio playback method and electronic device", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of electronic technologies, and in particular, to a distributed audio playback method and an electronic device.
Background
With the development of terminal and display technologies, multi-device distributed display brings more and more convenience to daily life. Multi-device distributed display means presenting an interface through multiple electronic devices; for example, an application interface can be displayed in an extended manner across multiple electronic devices.
Exemplarily, (a) in FIG. 1 shows a scene in which device B and device A display one interface across the devices in a spliced, distributed manner. As shown in (a) in FIG. 1, interface a1 is displayed on the display screen of device A and interface a2 on the display screen of device B, where interface a1 and interface a2 are, geometrically, the two parts that make up interface a. As shown in (b) in FIG. 1, interface a displayed on device A is extended to device B; after the extended display, interface a is transferred from device A for display on device B. In some scenes, for example extending the interface on device A to device B by dragging the interface, the display process on device A and device B usually proceeds from the scene shown in (a) in FIG. 1 to the scene shown in (b) in FIG. 1. In the scenes shown in (a) or (b) in FIG. 1, device A is usually called the primary device, and device B the extension device or secondary device.
When an interface is displayed across multiple devices in a distributed manner, the user usually has to switch the audio playback device manually to move the audio to the extension device. This is cumbersome for the user and gives a poor user experience. In addition, such a method cannot adapt the audio switching to how the interface is displayed across devices; for example, in the extended display scene shown in (a) in FIG. 1, the audio cannot be switched reasonably by conventional methods.
Summary
Embodiments of this application provide a distributed audio playback method and an electronic device, applicable to a first device and a second device, in which the audio corresponding to a window on the first device is adaptively allocated between, and played on, the first device and the second device according to the position of the window, improving the user's audio experience when using multiple devices.
According to a first aspect, this application provides a distributed audio playback method. A first device displays a first application window and plays first audio corresponding to the first application window. In response to a first user operation, the first device displays a first part of the first application window, and a second device receives video data sent by the first device and displays a second part of the first application window according to the video data; at this time, the first device plays the first audio and the second device plays the first audio. In response to a second user operation, the second device receives the video data sent by the first device and displays the first application window according to the video data; at this time, the second device plays the first audio.
In a possible design, the first device monitors the position of the first application window in the display area corresponding to the first device and determines, according to the position of the first application window, whether to send the video data to the second device. When the first application window is in the display area corresponding to the first device, the video data is not sent to the second device. When the first part of the first application window is in the display area corresponding to the first device and the second part of the first application window is in the display area corresponding to the second device, the video data is sent to the second device. When the first application window is in the display area corresponding to the second device, the video data is sent to the second device.
In a possible design, when the audio state of the first application window is the playing state, the first device sends the first audio to the second device; when the audio state of the first application window is at least one of paused, muted, or exited, the first audio is not sent to the second device.
In a possible design, the first device and the second device play the first audio at the same volume.
In a possible design, the first device plays the first audio at a first volume and the second device plays the first audio at a second volume, where the first volume and the second volume are obtained by the first device according to the position of the first application window.
In a possible design, the first device obtains, according to an identifier of the first application window, an audio stream associated with the first application window from an audio session list, and generates the first audio according to the audio stream associated with the first application window.
In a possible design, the identifier of the first application window includes at least one of a window identifier, a browser identifier, and a path.
In a possible design, a first application corresponding to the first application window runs on the first device.
In a possible design, the first user operation and the second user operation are operations of dragging the first application window.
In a possible design, the first device is a notebook computer and the second device is a tablet computer.
According to a second aspect, this application provides an electronic device, including one or more processors and one or more memories, where the one or more memories are coupled to the one or more processors and are configured to store computer program code including computer instructions that, when executed by the one or more processors, cause the terminal device to perform a distributed audio playback method.
According to a third aspect, this application provides a computer storage medium storing a computer program that includes program instructions which, when executed by a processor, perform a distributed audio playback method.
With the distributed audio playback method provided by the embodiments of this application, when a user uses multiple electronic devices, the audio corresponding to a window is managed automatically according to the position of the window, without manual switching by the user, improving the audio experience in multi-device scenarios.
Brief Description of the Drawings
FIG. 1 shows two examples of extended display scenes according to an embodiment of this application;
FIG. 2 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of this application;
FIG. 3 is a schematic diagram of the software structure of an electronic device according to an embodiment of this application;
FIG. 4 shows an example of a distributed audio playback framework according to an embodiment of this application;
FIG. 5A shows an example scene in which a first device plays a video according to an embodiment of this application;
FIG. 5B shows an example scene in which the first device extends the display of a video playback interface to a second device according to an embodiment of this application;
FIG. 5C shows another example scene in which the first device extends the display of a video playback interface to the second device according to an embodiment of this application;
FIG. 6 is a flowchart of a distributed audio playback method according to an embodiment of this application;
FIG. 7 is a first logic block diagram of the internal implementation of the first device during distributed audio playback according to an embodiment of this application;
FIG. 8 is a schematic diagram of a method for determining the first part and the second part of an application interface according to an embodiment of this application;
FIG. 9 is a second logic block diagram of the internal implementation of the first device during distributed audio playback according to an embodiment of this application;
FIG. 10 is a first logic block diagram of the internal implementation of an audio playback module according to an embodiment of this application;
FIG. 11 is a second logic block diagram of the internal implementation of an audio playback module according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings. In the descriptions of the embodiments of this application, unless otherwise stated, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: only A exists, both A and B exist, or only B exists. In addition, in the descriptions of the embodiments of this application, "multiple" means two or more.
The terms "first" and "second" below are merely for the purpose of description and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, a feature limited by "first" or "second" may explicitly or implicitly include one or more of the features. In the descriptions of the embodiments, unless otherwise stated, "multiple" means two or more.
An embodiment of this application provides a distributed audio playback method, applied in the process of multi-device distributed display. For example, the method may be applied to the distributed extended display scenes of device A (that is, the first device) and device B (that is, the second device) shown in (a) or (b) in FIG. 1.
A communication connection is established between the first device and the second device in this application, by means of which information can be transmitted between the first device and the second device. The information transmitted between the first device and the second device includes, but is not limited to, application interface configuration parameters, video data, audio data, and control instructions.
For example, the first device and the second device may establish a wireless communication connection by "touch", "scan" (such as scanning a QR code or barcode), "automatic discovery in proximity" (such as via Bluetooth (BT) or wireless fidelity (WiFi)), and the like. The first device and the second device may follow a wireless transmission protocol and transmit information through wireless connection transceivers. The wireless transmission protocol may include, but is not limited to, a Bluetooth transmission protocol or a WiFi transmission protocol; for example, the WiFi transmission protocol may be the WiFi P2P transmission protocol. The wireless connection transceivers include, but are not limited to, Bluetooth and WiFi transceivers. Information is transmitted between the first device and the second device over the established wireless communication connection.
As another example, a wired communication connection may be established between the first device and the second device, for example through a video graphics array (VGA), a digital visual interface (DVI), a high definition multimedia interface (HDMI), or a data transmission line. Information is transmitted between the first device and the second device over the established wired communication connection. This application does not limit the specific connection manner between the first device and the second device.
The electronic devices in this application (such as the first device and the second device) include one or more display screens. For example, the electronic device may be a smartphone, a netbook, a tablet computer, a smart camera, a palmtop computer, a personal digital assistant (PDA), a portable multimedia player (PMP), an augmented reality (AR)/virtual reality (VR) device, a notebook computer, a personal computer (PC), an ultra-mobile personal computer (UMPC), or the like. Alternatively, the electronic device may be an electronic device of another type or structure that includes a display screen, which is not limited in this application.
Referring to FIG. 2, which takes a smartphone as an example, FIG. 2 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of this application. As shown in FIG. 2, the electronic device may include a processor 210, a memory (including an external memory interface 220 and an internal memory 221), a universal serial bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, a headset jack 270D, a sensor module 280, a button 290, a motor 291, an indicator 292, a camera 293, a display screen 294, a subscriber identification module (SIM) card interface 295, and the like. The sensor module 280 may include a touch sensor 280A and a fingerprint sensor 280B. Further, in some embodiments, the sensor module 280 may also include one or more of a gyroscope sensor, an acceleration sensor, a magnetic sensor, a pressure sensor, a barometric pressure sensor, a distance sensor, a proximity light sensor, a temperature sensor, an ambient light sensor, a bone conduction sensor, and the like.
It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device. In other embodiments of this application, the electronic device may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 210 may include one or more processing units. For example, the processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or may be integrated into one or more processors.
The processor 210 may be the nerve center and command center of the electronic device. The processor 210 can fetch instructions, generate operation control signals, and thereby control the execution of the instructions.
In some embodiments of this application, the processor 210 (for example, the controller in the processor 210) may be used to control the audio module 270 to collect the audio data corresponding to the dragged interface window, make audio switching decisions, and control the output of the audio data corresponding to the dragged interface window to the corresponding device.
A memory may also be provided in the processor 210 to store instructions and data. In some embodiments, the memory in the processor 210 is a cache, which may store instructions or data that the processor 210 has just used or uses cyclically. If the processor 210 needs to use the instructions or data again, they can be called directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 210, and thus improves system efficiency.
In some embodiments, the processor 210 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
The charging management module 240 is configured to receive charging input from a charger, which may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 240 may receive the charging input of a wired charger through the USB interface 230. In some wireless charging embodiments, the charging management module 240 may receive wireless charging input through the wireless charging coil of the electronic device. While charging the battery 242, the charging management module 240 may also supply power to the electronic device through the power management module 241.
The power management module 241 is configured to connect the battery 242 and the charging management module 240 to the processor 210. The power management module 241 receives input from the battery 242 and/or the charging management module 240 and supplies power to the processor 210, the internal memory 221, the display screen 294, the camera 293, the wireless communication module 260, and the like. The power management module 241 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance). In some embodiments, the power management module 241 may be provided in the processor 210; in other embodiments, the power management module 241 and the charging management module 240 may be provided in the same device.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device may be used to cover a single or multiple communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization; for example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antennas may be used in combination with tuning switches.
The mobile communication module 250 may provide wireless communication solutions applied to the electronic device, including 2G/3G/4G/5G. The mobile communication module 250 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 250 may receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 250 may also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 250 may be provided in the processor 210; in some embodiments, at least some functional modules of the mobile communication module 250 and at least some modules of the processor 210 may be provided in the same device.
The modem processor may include a modulator and a demodulator. The modulator modulates the low-frequency baseband signal to be sent into a medium-high-frequency signal; the demodulator demodulates the received electromagnetic wave signal into a low-frequency baseband signal and transmits it to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs a sound signal through an audio device (not limited to the speaker 270A and the receiver 270B) or displays an image or video through the display screen 294. In some embodiments, the modem processor may be an independent device; in other embodiments, the modem processor may be independent of the processor 210 and provided in the same device as the mobile communication module 250 or other functional modules.
The wireless communication module 260 may provide wireless communication solutions applied to the electronic device, including wireless local area networks (WLAN) (such as WiFi networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR). The wireless communication module 260 may be one or more devices integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 210. The wireless communication module 260 may also receive signals to be sent from the processor 210, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation through the antenna 2.
In some embodiments, the antenna 1 of the electronic device is coupled to the mobile communication module 250 and the antenna 2 to the wireless communication module 260, so that the electronic device can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS). In some embodiments of this application, the first device and the second device may transmit information between each other by means of their respective antennas and mobile communication modules based on wireless communication technology.
The electronic device implements the display function through the GPU, the display screen 294, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display screen 294 and the application processor. The GPU performs mathematical and geometric calculations for graphics rendering. The processor 210 may include one or more GPUs that execute program instructions to generate or change display information. In the embodiments of this application, the display screen 294 is used to display images, videos, and the like. The display screen 294 includes a display panel, which may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flex light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light emitting diodes (QLED), or the like. In some embodiments, the electronic device may include 1 or N display screens 294, where N is a positive integer greater than 1.
In the embodiments of this application, the GPU may be used to render application interfaces, and correspondingly the display screen 294 may be used to display the application interfaces rendered by the GPU. For example, in the scene shown in (a) in FIG. 1, the GPU of the first device may render interface a before the display is extended to the second device, and correspondingly the display screen of the first device may display interface a rendered by the GPU. After the first device extends the display to the second device, the GPU of the first device may render interface a, split interface a into interface a1 and interface a2, display interface a1 on the display screen of the first device, and display interface a2 on the display screen of the second device.
The electronic device may implement the shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294, the application processor, and the like.
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capability of the electronic device. The external memory card communicates with the processor 210 through the external memory interface 220 to implement a data storage function, for example, saving files such as music and videos in the external memory card.
The internal memory 221 may be used to store computer-executable program code, where the executable program code includes instructions. The internal memory 221 may include a program storage area and a data storage area. The program storage area may store the operating system, applications required by at least one function (such as a sound playback function and an image playback function), and the like. The data storage area may store data created during use of the electronic device (such as audio data and a phone book). In addition, the internal memory 221 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 210 executes various functional applications and data processing of the portable device by running instructions stored in the internal memory 221 and/or instructions stored in the memory provided in the processor.
The touch sensor 280A may be called a "touch panel". The touch sensor 280A may be provided on the display screen 294, and the touch sensor 280A and the display screen 294 form a touchscreen, also called a "touch screen". The touch sensor 280A is used to detect touch operations acting on or near it and may pass the detected touch operations to the application processor to determine the type of touch event. The electronic device may provide visual output related to the touch operation through the display screen 294. In other embodiments, the touch sensor 280A may be provided on the surface of the electronic device at a position different from that of the display screen 294. In the embodiments of this application, the touch sensor of the first device may be used to detect the user's drag operation on an interface window on the display screen of the first device.
The fingerprint sensor 280B is used to collect fingerprints. The electronic device may use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint call answering, and the like.
The electronic device may implement audio functions, such as music playback and recording, through the audio module 270, the speaker 270A, the receiver 270B, the microphone 270C, the application processor, and the like. For the specific working principles and functions of the audio module 270, the speaker 270A, the receiver 270B, and the microphone 270C, reference may be made to the descriptions in conventional technologies.
In some embodiments of this application, the audio module 270 may be used to collect the audio data corresponding to the dragged window. The speaker 270A may be used to output the audio data corresponding to the dragged window. The microphone 270C may be used to collect sound in the environment, for example, the user's voice during a video call.
The button 290 includes a power button, a volume button, and the like. The button 290 may be a mechanical button or a touch button. The electronic device may receive button input and generate key signal input related to user settings and function control of the electronic device.
It should be noted that the hardware modules included in the electronic device shown in FIG. 2 are described merely as examples and do not limit the specific structure of the electronic device. For example, the electronic device may also include other functional modules.
In this application, the operating system of the electronic device may include, but is not limited to, Symbian, Android, iOS, BlackBerry, Harmony, and other operating systems, which is not limited in this application.
Referring to FIG. 3, which takes the Android operating system as an example, FIG. 3 specifically introduces a schematic diagram of the software structure of the electronic device in the embodiments of this application.
As shown in FIG. 3, the Android operating system may include an application layer, an application framework layer (FWK), system libraries, an Android runtime, and a kernel layer (kernel).
The application layer may provide some core applications; for ease of description, applications are referred to below as apps. Apps in the application layer may include native apps (such as apps installed in the electronic device when the operating system is installed before the electronic device leaves the factory), for example Camera, Maps, Music, Messages, Gallery, Email, Contacts, and Bluetooth shown in FIG. 3. Apps in the application layer may also include third-party apps (such as apps downloaded and installed by the user through an app store), for example WeChat and video applications shown in FIG. 3.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer and includes some predefined functions. As shown in FIG. 3, the application framework layer may include a window manager service (WMS), an activity manager service (AMS), an input manager service (IMS), a resource manager, a notification manager, a view system, and the like.
The WMS is mainly used to manage window programs. The window manager service can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, and so on.
The AMS is mainly responsible for managing activities, for starting, switching, and scheduling the components in the system, and for managing and scheduling applications.
The IMS is mainly used to translate and encapsulate raw input events to obtain input events containing more information and send them to the WMS. The WMS stores the clickable areas (such as controls) of each application, the position information of the focus window, and the like; therefore, the WMS can correctly distribute input events to the specified control or focus window. For example, in the embodiments of this application, the WMS may be used to distribute a received window drag event to the specified control or focus window.
The view system includes visual controls, such as controls that display text and controls that display pictures, and is mainly used to build applications. A display interface may consist of one or more views; for example, a display interface including an SMS notification icon may include a view displaying text and a view displaying a picture. In the embodiments of this application, the view system may be used to build the text controls, picture controls, and the like on the application interfaces displayed on the electronic device.
The notification manager enables applications to display notification information in the status bar and may be used to convey notification-type messages, which may disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, and the like. The notification manager may also present notifications in the status bar at the top of the system in the form of a chart or scroll bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window, for example, prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, or flashing an indicator light. In the embodiments of this application, the notification manager may be used to notify the user that extended display is in progress when the first device detects the user's operation of dragging an interface window to the second device. Further, the notification manager may also be used to notify the user that the extended display is completed when the dragged interface window has been completely extended from the first device to the second device.
The resource manager mainly provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files. In the embodiments of this application, the resource manager may be used to provide localized strings, icons, pictures, layout files, video files, and the like for the application interfaces displayed on the electronic device.
The system libraries and the Android runtime contain the functional functions that the FWK needs to call, the core libraries of Android, and the Android virtual machine. The system libraries may include multiple functional modules, such as a surface manager, a three-dimensional graphics processing library, a two-dimensional graphics engine, and a media library.
The kernel layer is the foundation of the Android operating system, and the final functions of the Android operating system are all completed through the kernel layer. The kernel layer may include a display driver, input/output device drivers (for example, for a keyboard, a touchscreen, a headset, a speaker, and a microphone), device nodes, a Bluetooth driver, a camera driver, an audio driver, a sensor driver, and the like. The user performs input operations through an input device, and the kernel layer can generate corresponding raw input events according to the input operations and store them in the device nodes.
In the embodiments of this application, the sensor driver can detect the user's sliding operation on the display screen of the electronic device. When the sliding operation is a drag operation on an interface window, the sensor driver reports the window drag event to the IMS of the application framework layer. The WMS receives the drag event reported by the IMS and distributes the window drag event to the corresponding application, so as to trigger the application interface window to move on the electronic device's interface along with the user's drag.
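As a minimal illustration of this reporting chain only (the class names, event fields, and sample values below are assumptions of this sketch, not Android's actual interfaces):

```python
class WMS:
    """Distributes input events to the control or focus window they target."""
    def __init__(self):
        self.windows = {}  # window id -> application callback

    def register(self, window_id, callback):
        self.windows[window_id] = callback

    def dispatch(self, event):
        self.windows[event["window"]](event)

class IMS:
    """Translates raw input events and forwards them to the WMS."""
    def __init__(self, wms):
        self.wms = wms

    def report(self, raw_event):
        event = {"type": raw_event["type"], "window": raw_event["pid"],
                 "pos": raw_event["pos"]}
        self.wms.dispatch(event)

wms = WMS()
ims = IMS(wms)
wms.register(4242, lambda e: print("app moves its window to", e["pos"]))
# the sensor driver reports a drag on window 4242
ims.report({"type": "drag", "pid": 4242, "pos": (1620, 200)})
```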
It should be noted that FIG. 3 only uses the layered-architecture Android system as an example to introduce a software structure of an electronic device. This application does not limit the specific architecture of the software systems of the electronic devices (including the first device and the second device); for details about software systems of other architectures, reference may be made to conventional technologies.
As described above, the distributed audio playback method provided by the embodiments of this application is applied in the process of multi-device distributed display. As one implementation, the multi-device distributed display in the embodiments of this application may be implemented based on the user dragging an application window. For example, based on the user's operation of dragging the window of an interface (such as interface a shown in (b) in FIG. 1) displayed on the first device (device A shown in (b) in FIG. 1) to the extension device (that is, the second device, device B shown in (b) in FIG. 1), the dragged interface on the first device is extended and displayed on the second device. While the user drags the window of the interface displayed on the first device (device A) toward the second device (device B), the first part of the interface (interface a1 shown in (a) in FIG. 1) is displayed on the display screen of the first device, and the second part of the interface (interface a2 shown in (a) in FIG. 1) is displayed on the display screen of the second device.
In scenes like the above, where the user drags an interface window displayed on the first device to the second device, the distributed audio playback method provided by the embodiments of this application can switch the audio adaptively along with the distributed display of the interface. Specifically, when the user has not yet dragged the interface window, or has not yet dragged it out of the display screen range of the first device, only the first device plays the audio corresponding to the interface. When the user drags the interface window to the cross-device distributed display state shown in (a) in FIG. 1, the audio corresponding to the interface is switched to the second device, and the second device and the first device play that audio in a distributed and synchronous manner. When the user drags the interface window to the transferred display state shown in (b) in FIG. 1, the audio corresponding to the interface is switched from the first device to the second device, and the first device no longer plays it.
Further, in some embodiments, while the interface is being dragged by the user from the first device to the second device, the first device may adaptively adjust, according to the proportions of the first part of the interface (interface a1 shown in (a) in FIG. 1) and the second part (interface a2 shown in (a) in FIG. 1) on the display screens of the first device and the second device respectively, the respective volumes at which the first device and the second device play the audio corresponding to the interface in a distributed and synchronous manner.
The distributed audio playback method provided by the embodiments of this application may be implemented based on the framework shown in FIG. 4. As shown in FIG. 4, a distributed audio playback framework 400 provided by an embodiment of this application may include a monitoring module 410, an audio output control module 420, a virtual audio module 430, and an audio playback module 440.
The monitoring module 410 is mainly used to monitor the positions of all interface windows displayed on the first device, so as to detect in time a window drag event in which an application window is dragged by the user to the second device. Upon detecting such an event, it sends the window drag event to the audio output control module 420, so that the audio output control module 420 intercepts (for example, hooks) the event and, according to the event, triggers the virtual audio module 430 to collect the audio data corresponding to the dragged interface window. In some embodiments, the monitoring module 410 may detect only the position of the focus window, that is, the window last operated by the user, or the position of the foreground window.
Here, intercepting means capturing and monitoring the transmission of an event before it reaches its destination. For example, with a technique such as hooking, an event can be hooked before it is delivered to its destination, so that it can be handled and responded to in time. In the embodiments of this application, the audio output control module 420 can respond to a window drag event the moment it intercepts it.
Further, the monitoring module 410 is also used to monitor the audio states corresponding to all interfaces with accompanying audio displayed on the first device, such as playing, paused, muted, or exited, so that the specific strategy for distributed playback of the audio on the first device and the second device can be decided according to the specific audio state. For example, in the scene shown in (b) in FIG. 1, when the audio state is playing, the first device switches the audio corresponding to the interface from the first device to the second device; when the audio state is muted, paused, or exited, the first device abandons switching the audio corresponding to the interface from the first device to the second device.
In one possible structure, part of the monitoring module 410 may be located in the sensor module of the electronic device (the sensor module 280 shown in FIG. 2) and part in the audio module of the electronic device (the audio module 270 shown in FIG. 2). Part of the functions of the monitoring module 410 may be implemented by the sensor module and part by the audio module. For example, the sensor module of the electronic device monitors the positions of all interface windows displayed on the first device and, upon detecting a window drag event in which an application window is dragged by the user to the second device, sends the window drag event to the audio output control module 420; the audio module of the electronic device monitors the audio states corresponding to all interfaces with accompanying audio displayed on the first device, such as playing, paused, muted, or exited.
In another possible structure, the monitoring module 410 may be located in the processor of the electronic device (the processor 210 shown in FIG. 2). For example, the monitoring module 410 of the first device may receive a window drag event, detected by the sensor module of the first device (the sensor module 280 shown in FIG. 2), in which the application window is dragged by the user to the second device, and send the window drag event to the audio output control module 420. The monitoring module 410 of the first device may also receive the audio states, detected by the audio module of the first device (the audio module 270 shown in FIG. 2), corresponding to all interfaces with accompanying audio displayed on the first device. This application does not limit how the monitoring module 410 is arranged in the electronic device (such as the first device).
The audio output control module 420 is mainly used to intercept (for example, hook) the window drag event from the monitoring module 410 when the monitoring module 410 detects that the user is dragging the window of an application interface on the first device to the second device, and to trigger the virtual audio module 430 to collect the audio data corresponding to the dragged interface window according to the window drag event.
The window drag event also carries information such as the position to which the window is dragged. Further, the audio output control module 420 is also used to make audio switching decisions according to the position to which the window is dragged, such as whether to switch the audio from the first device to the second device and how to switch it (for example, distributed synchronous playback or a complete switch). After determining how to switch the audio, the audio output control module 420 controls the switching of the audio data corresponding to the dragged interface window between the first device and the second device, for example, controls the output of that audio data to the corresponding device.
Further, the audio output control module 420 is also used to control the volume at which the audio corresponding to the dragged interface window is played on the first device and/or the second device, for example, by sending control instructions to the first device and/or the second device.
In one possible structure, the audio output control module 420 may be located in the processor of the electronic device (the processor 210 shown in FIG. 2). For example, the audio output control module 420 may be located in the controller in the processor 210; as another example, it may be located in the processor 210 independently of the controller in the processor 210 shown in FIG. 2. This application does not limit how the audio output control module 420 is arranged in the electronic device (such as the first device).
The virtual audio module 430 is mainly used to collect the audio data corresponding to the interface while the monitoring module 410 detects that an interface window on the first device is being dragged by the user to the second device, so that, after making an audio switching decision according to the position to which the window is dragged, the audio output control module 420 can control the output of the audio data collected by the virtual audio module 430 to the corresponding device.
In one possible structure, the virtual audio module may be located in the audio module of the electronic device (the audio module 270 shown in FIG. 2). Specifically, the audio module 270 shown in FIG. 2 may include a first audio module and a second audio module, where the first audio module is a conventional audio module and the second audio module is the virtual audio module 430.
In another possible structure, the virtual audio module may be independent of the audio module 270 shown in FIG. 2. In that case, a virtual audio module 430 needs to be added to the hardware structure shown in FIG. 2, and the audio module 270 shown in FIG. 2 is then the conventional audio module. This application does not limit how the virtual audio module is arranged in the electronic device (such as the first device).
The conventional audio module is mainly used to input the audio data corresponding to the dragged interface window to the audio playback module 440 of the first device for playback, and to adjust, according to instructions from the audio output control module 420, the volume at which the audio playback module 440 plays that audio data. Further, the conventional audio module may also input the audio data corresponding to interface windows displayed on the first device that are not dragged to the audio playback module 440 of the first device for playback.
The audio playback module 440 is mainly used to receive audio data from the conventional audio module and play the audio. The audio data from the conventional audio module may be the audio data corresponding to the dragged interface window or the audio data corresponding to interface windows displayed on the first device that are not dragged. When the audio data from the conventional audio module corresponds to the dragged interface window, the audio playback module 440 also controls the playback volume according to the conventional audio module's volume control instructions. In one possible structure, the audio playback module 440 may be the speaker 270A shown in FIG. 2.
It should be noted that FIG. 4 is only an example of a distributed audio playback framework, in which the monitoring module 410, the audio output control module 420, the virtual audio module 430, and the audio playback module 440 represent only one possible module division. The distributed audio playback framework in the embodiments of this application may have more or fewer modules than those shown in FIG. 4, may combine two or more modules, or may have a different module division. In addition, the modules shown in FIG. 4 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing or application-specific integrated circuits.
The following takes the first device being a notebook computer, the second device being a tablet computer, and the dragged interface window being a video application window, and uses the extended scene that changes state from FIG. 5A → FIG. 5B → FIG. 5C as an example to describe in detail, with reference to the accompanying drawings, a distributed audio playback method provided by the embodiments of this application.
Assume the first device is a notebook computer and the second device is a tablet computer, and the first device displays the video application window 501 shown in FIG. 5A, where the video application window 501 includes a video playback interface 502. In response to the user's operation of dragging the video application window 501 from the video playback state shown in FIG. 5A to the extended display state shown in FIG. 5B, the first device and the second device display the video playback interface 502 across the devices in a spliced, distributed manner. In the state shown in FIG. 5B, the first device controls the audio corresponding to the video playback interface 502 to be played on the first device and the second device in a distributed and synchronous manner.
Further, in response to the user's operation of dragging the video application window 501 from the extended display state shown in FIG. 5B to the extended display state shown in FIG. 5C, the first device transfers the video playback interface 502 to the second device. In the extended display state shown in FIG. 5C, the first device controls the audio corresponding to the video playback interface 502 to be completely switched from the first device to the second device.
In some embodiments, in response to the user's operation of dragging the video application window 501 from the video playback state shown in FIG. 5A to the extended display state shown in FIG. 5B, the first device may also control the respective volumes at which the first device and the second device each play the audio corresponding to the video playback interface 502 while the two devices play that audio in a distributed and synchronous manner.
The distributed audio playback method in the extended scene that changes state from FIG. 5A → FIG. 5B → FIG. 5C is described in detail below.
As shown in FIG. 6, a distributed audio playback method provided by an embodiment of this application may include the following steps S601-S606:
S601: The first device displays the video playback interface 502 and synchronously plays the audio corresponding to the video playback interface 502, where the video playback interface 502 is located in the video application window 501.
That the first device displays the video playback interface 502 and synchronously plays the corresponding audio specifically means that the first device plays the audio corresponding to the video while playing the video in the video playback interface 502.
Exemplarily, the first device may play the video in the video playback interface 502 and its corresponding audio by conventional methods. For example, the first device may obtain the video data corresponding to the video playback interface 502 from the video application through its video driver and send it to the display screen to play the video through the display screen; synchronously, the first device may obtain the audio data corresponding to the video playback interface 502 from the video application through its conventional audio driver (such as the audio driver corresponding to the conventional audio module described above) and play the audio through the speaker. For the specific implementation of synchronously playing the video and the corresponding audio, reference may be made to the descriptions in conventional technologies, and details are not repeated here.
S602: When detecting the user's operation of dragging the video application window 501, the first device acquires the real-time position information of the video application window 501.
Exemplarily, in this embodiment, the first device may detect the user's sliding operation on the display screen of the first device through the sensor driver. When the sliding operation is a drag operation on an interface window (such as the video application window 501 shown in FIG. 5A), the first device acquires the real-time position information of the video application window 501.
The real-time position information of the video application window 501 is its real-time coordinate information in a preset coordinate system.
Exemplarily, the preset coordinate system may be a preset coordinate system of the first device, a world coordinate system, a ground coordinate system, or the like, which is not limited in this application. The preset coordinate system of the first device may be a two-dimensional coordinate system corresponding to the display screen of the first device. Taking the notebook computer shown in FIG. 5A as the first device, the two-dimensional coordinate system may be the coordinate system in which, with the notebook computer in the state shown in FIG. 5A, the lower-left corner of the notebook computer is the coordinate origin O, the short lower edge is the x-axis, and the long left edge is the y-axis.
It should be noted that, in this embodiment, while the video application window 501 is being dragged by the user, the first device continuously acquires its real-time position information. By continuously acquiring the real-time position information of the video application window 501, the first device can accurately capture the timing at which to extend the display of the video playback interface 502 to the second device, so as to respond in time.
In this embodiment, when the first device detects the user's operation of dragging the video application window 501, it can monitor the position of the video application window 501 in real time through the monitoring module 410, so that the position information of the video application window 501 can be acquired in real time through the sensor driver.
For example, the monitoring module 410 of the first device monitors the sensor driver in real time so as to intercept (for example, hook) window drag events in time. As shown in FIG. 7, after detecting a window drag event, the monitoring module 410 intercepts it; and, exemplarily, while the window is being dragged, the monitoring module 410 acquires in real time the window process information of the dragged window (such as the video application window 501), which includes the position information (position) of the window.
For example, the monitoring module 410 may acquire in real time, through the sensor driver, the following position information of the dragged window (such as the video application window 501): the coordinates of its upper-left corner in the preset coordinate system (such as left_up_x, left_up_y) and the coordinates of its lower-right corner in the preset coordinate system (such as right_down_x, right_down_y). As another example, the monitoring module 410 may acquire the coordinates of its lower-left corner (such as left_down_x, left_down_y) and of its upper-right corner (such as right_up_x, right_up_y) in the preset coordinate system.
Further, as shown in FIG. 7, the window process information of the dragged window may also include information such as the window identifier (such as pid), the browser identifier (such as parentId), and the path (such as path). As shown in FIG. 7, while the window is being dragged, the monitoring module 410 may also acquire such window process information of the dragged window. When multiple tabs in a browser are playing audio, the browser identifier can be used to filter out, from the multiple tabs, the tab that is playing the audio.
S603: When detecting that the video application window 501 crosses the edge of the display screen of the first device, the first device acquires the audio state corresponding to the video playback interface 502 and determines the first part and the second part of the video playback interface 502 according to the real-time position of the video application window 501.
In some embodiments, detecting that the video application window 501 crosses the edge of the display screen of the first device means that part of the video application window 501 crosses the edge of the display area corresponding to the display screen of the first device and a part of the video application window 501 is displayed in the display area corresponding to the display screen of the second device. Drawing and rendering for the display area corresponding to the display screen of the second device are also performed on the first device; this area may be a virtual display area set by the first device, that is, an area not displayed on the first device but usable for application drawing, rendering, and the like.
In this embodiment, the audio state corresponding to the video playback interface 502 may include, but is not limited to, any of the following: playing, paused, muted, or exited.
For scenes in which video and audio play synchronously, such as the scene shown in FIG. 5A: if the video is playing but the audio is muted, the audio state is muted; if both the video and the audio are playing, the audio state is playing; if both are paused, the audio state is paused; if both have exited playback, the audio state is exited. For pure audio playback scenes, such as playing music through a music application: muted means muted during playback, playing means the music is playing, paused means the music playback is paused, and exited means the music playback has exited.
In this embodiment, the first device may monitor the audio state corresponding to the video playback interface 502 in real time through the monitoring module 410, so that the state corresponding to the audio session is updated in real time when the audio state changes.
Referring to FIG. 7, the monitoring module 410 of the first device monitors the session notification unit in real time to obtain the audio state corresponding to the interface in the dragged window, such as playing (start), paused (pause/stop), muted (mute), and exited (exit) shown in FIG. 7.
Exemplarily, in this embodiment, the monitoring module 410 maintains an audio session list that includes the session information of all audio sessions currently running on the first device.
As shown in FIG. 7, the monitoring module 410 of the first device maintains an audio session list (audio session list), which includes session information such as the session window identifier (such as pid), the session browser identifier (such as parentId), the session state (such as state), and the session path (such as path). The session window identifier is the identifier of the window where the video playback interface 502 corresponding to the audio session is located. The session browser identifier is the identifier of the browser where the video playback interface 502 corresponding to the audio session is located. The session state includes an active state and an inactive state: if the audio state is playing (start) or muted (mute), the state of the audio session is active; if the audio state is paused (pause/stop) or exited (exit), the state of the audio session is inactive. The session path is the file path corresponding to the audio session, such as the application path. When multiple tabs in a browser are playing audio, the session browser identifier can be used to identify the audio corresponding to the tab dragged by the user.
In this embodiment, if multiple audio sessions run on the first device at the same time, the audio session list includes the session information of the multiple audio sessions. In this case, the monitoring module 410 of the first device may associate the session information in the audio session list with the acquired window process information and determine the correspondence between the dragged window and an audio session, that is, determine which audio session the dragged window corresponds to.
Exemplarily, as shown in FIG. 7, if the session window identifier of an audio session in the audio session list equals the window identifier of the dragged window in the window process information, or the browser identifier of an audio session equals the window identifier of the dragged window, or the browser identifier of an audio session equals the browser identifier of the dragged window, or the session path of an audio session equals the path of the dragged window in the window process information, it can be determined that the audio session corresponds to the dragged window.
In this embodiment, after determining the first part and the second part of the video playback interface 502, the monitoring module 410 of the first device may send this information to the audio output control module 420, for the audio output control module 420 to further determine the expansion mode and expansion ratio of the extended display from the first device to the second device. This content is described in detail below.
The first part of the video playback interface 502 is displayed on the display screen of the first device and the second part on the display screen of the second device. For example, for the extended display state shown in FIG. 5B, the first part of the video playback interface 502 may be the video playback interface 502-1 in FIG. 5B, and the second part may be the video playback interface 502-2 in FIG. 5B.
Exemplarily, the first device may calculate, according to the specific real-time position of the video application window 501 on the display screen of the first device, the specific sizes at which the video playback interface 502 is displayed across devices, and determine the first part and the second part of the video playback interface 502 according to the calculated sizes in combination with the configuration parameters of the video playback interface 502. The configuration parameters may include, but are not limited to, the controls displayed on the video playback interface 502 (icons, text, and the like, and the specific display position and/or size of each icon, text, and the like); for the configuration parameters of an application interface, reference may be made to conventional technologies, which are not limited in this application.
As shown in FIG. 8, assume the video application window 501 has width L and height H, and the distance between the left edge of the video application window 501 and the right edge of the display screen of the first device is x1; then the distance between the right edge of the video application window 501 and the left edge of the display screen of the second device is x2 = L - x1. After obtaining the specific cross-device display sizes x1 and x2 of the video playback interface 502, the first device may, in combination with the display position and size of each icon, text, and the like on the video playback interface 502, split the video playback interface 502 into the first part and the second part. The first part includes the interface configuration parameters within the x1 × H range of the video playback interface 502; the second part includes the interface configuration parameters within the x2 × H range.
S604: The first device collects the audio data corresponding to the video playback interface 502.
In this embodiment, the first device may collect the audio data corresponding to the video playback interface 502 through the virtual audio module 430 shown in FIG. 4.
Exemplarily, the first device may control, through the audio output control module 420, the virtual audio module 430 to collect the audio data corresponding to the interface (that is, the video playback interface 502) in the interface window (that is, the video application window 501) dragged by the user. For example, the audio output control module 420 may send a data collection instruction to the virtual audio module 430 that carries the window identifier corresponding to the video application window 501, the interface identifier corresponding to the video playback interface 502, or the application identifier corresponding to the video application, for the virtual audio module 430 to collect the corresponding audio data according to the identifier.
It should be noted that the examples shown in FIG. 5A, FIG. 5B, and FIG. 5C involve one audio session of one application interface (that is, the audio session corresponding to the video playback interface 502) running on the first device; therefore, in step S604, the first device may collect only the audio data corresponding to the video playback interface 502 through the virtual audio module 430. In the case where multiple audio sessions run on the first device, in step S604 the first device needs to collect, through the virtual audio module 430, the audio data corresponding to all application interfaces playing on the first device. Assume audio sessions corresponding to application 1, application 2, ..., application N run on the first device (where N is a positive integer, N ≥ 3); in this case, as shown in FIG. 9, the audio data collected by the virtual audio module 430 of the first device is a mixed stream of audio stream 1 (such as session1), audio stream 2 (such as session2), ..., and audio stream N (such as sessionN), where audio stream 1 is the audio data corresponding to application 1, audio stream 2 corresponds to application 2, ..., and audio stream N corresponds to application N. As shown in FIG. 9, the virtual audio module 430 may collect the above audio data from the endpoint buffer.
Exemplarily, as shown in FIG. 9, in this embodiment the first device may control, through the audio output control module 420, the virtual audio module 430 to filter the audio data corresponding to the dragged window (such as audio stream I) out of the mixed stream and transmit it separately to the audio output control module 420. For example, the audio output control module 420 may send the virtual audio module 430 a filtering instruction carrying the session information of the audio session corresponding to the dragged window, instructing the virtual audio module 430 to filter the audio data corresponding to that session information out of the mixed stream. As shown in FIG. 9, the virtual audio module 430 transmits audio stream I to the audio output control module 420 through one signal path, and transmits audio streams 1 to (I-1) and audio streams (I+1) to N to the audio output control module 420 through another signal path.
S605: The first device sends first extended data to the second device according to the real-time position of the video application window 501. The first extended data includes first video data and first audio data.
The first video data is the video data corresponding to the second part of the video playback interface 502, and the first audio data includes the audio stream corresponding to the video playback interface 502.
Further, in this embodiment, the first audio data in the first extended data may also include the volume (such as volume 2) at which the second device plays the audio stream corresponding to the video playback interface 502, instructing the second device to play the second part of the video playback interface 502 at volume 2.
In some embodiments of this application, the audio output control module 420 of the first device may determine, according to the first part and the second part of the video playback interface 502 sent by the monitoring module 410, the expansion mode of the extended display from the first device to the second device. The expansion mode may include left-right expansion and up-down expansion. Left-right expansion means expanding in a lateral movement manner, that is, the expansion shown in FIG. 8; up-down expansion means expanding in a vertical movement manner.
It should be noted that, if the expansion mode of the extended display from the first device to the second device is up-down expansion, the monitoring module 410 needs to acquire the distance y1 between the upper edge of the video application window 501 and the lower edge of the display screen of the first device, and the distance y2 between the lower edge of the video application window 501 and the upper edge of the display screen of the second device, where y1 + y2 equals the height H of the video application window 501. The embodiments of this application take left-right expansion as the example; for the specific implementation of up-down expansion, reference may be made to that of left-right expansion.
Exemplarily, when one part of the video playback interface 502 is on the first device and the other part on the second device, for example when x1 > 0 and x2 > 0 as shown in FIG. 9, the first video data in the first extended data sent by the first device to the second device may specifically include the interface configuration parameters corresponding to the second part of the video playback interface 502, and the first audio data may specifically include the audio stream corresponding to the video playback interface 502 (audio stream I shown in FIG. 9) and the volume (volume 2 shown in FIG. 9). The first extended data is used by the second device and the first device to display the video playback interface 502 across the devices in a spliced, distributed manner and to synchronously play the audio corresponding to the video playback interface 502.
In this embodiment, the first device may send the first extended data to the audio channel of the second device to instruct the second device and the first device to display the video playback interface 502 across the devices in a spliced, distributed manner and to synchronously play the corresponding audio.
In some embodiments of this application, volume 2 may be determined by the first device according to the volume (such as volume 1) at which the first device currently plays the audio corresponding to the video playback interface 502; for example, volume 2 = volume 1.
In other embodiments of this application, the first device may also control, based on the expansion ratio of the extended display from the first device to the second device, volume 1 and volume 2 at which the first device and the second device respectively play the audio corresponding to the video playback interface 502.
The audio output control module 420 of the first device may determine, according to the first part and the second part of the video playback interface 502 sent by the monitoring module 410, the expansion ratio of the extended display from the first device to the second device. In the embodiments of this application, for the left-right expansion mode shown in FIG. 8, the expansion ratio is x1/x2; for the up-down expansion mode, the expansion ratio is y1/y2.
As shown in FIG. 9, the audio output control module 420 of the first device may calculate, according to the expansion ratio of the extended display from the first device to the second device, volume 1 at which the first device plays audio stream I and volume 2 at which the second device plays audio stream I, where volume 1 / volume 2 = x1/x2. Exemplarily, assuming that in the state shown in FIG. 5A the first device plays the video playback interface 502 at an initial volume (such as volume 0), then volume 1 = volume 0 × x1/L and volume 2 = volume 0 × x2/L, where L is the width of the video application window 501.
S606: The first device displays the first part of the video playback interface 502 and the second device displays the second part of the video playback interface 502; the first device and the second device play the audio corresponding to the video playback interface 502 in a distributed and synchronous manner.
When the first device and the second device play the audio corresponding to the video playback interface 502 in a distributed and synchronous manner, as shown in FIG. 5B, the first device plays the audio corresponding to the video playback interface 502 at volume 1 and the second device plays it at volume 2.
In some embodiments, the audio playback module 440 of the first device and the audio playback module of the second device may synchronously play the audio corresponding to the video playback interface 502 at the same volume, that is, volume 1 = volume 2.
In other embodiments, as shown in FIG. 9, when x1 > 0 and x2 > 0, the audio playback module 440 of the first device and the audio playback module of the second device may synchronously play the audio corresponding to the video playback interface 502 at volume 1 and volume 2, respectively, where volume 1 / volume 2 = x1/x2. Exemplarily, volume 1 = volume 0 × x1/L and volume 2 = volume 0 × x2/L, where L is the width of the video application window 501 and volume 0 is the initial volume at which the first device plays the video playback interface 502 in the state shown in FIG. 5A.
It should be noted that, in the case where multiple audio sessions run on the first device, for example audio sessions corresponding to application 1, application 2, ..., application N (where N is a positive integer, N ≥ 3) as shown in FIG. 9, the audio playback module 440 of the first device needs to play the audio corresponding to the video playback interface 502 at volume 1 while simultaneously playing the audio corresponding to the other audio sessions (that is, the audio corresponding to audio streams 1 to (I-1) and audio streams (I+1) to N shown in FIG. 9). In this case, as shown in FIG. 10, the audio channel of the second device sets volume 2 for playing audio stream I through a player (such as player) and then plays the audio through the speaker; the audio playback module 440 of the first device sets volume 1 for playing audio stream I through a player (such as player), mixes the volume-adjusted audio stream I with audio streams 1 to (I-1) and audio streams (I+1) to N, and plays the mixed audio stream through the speaker.
In the case where a single audio session runs on the first device (such as the audio session corresponding to the video playback interface 502), after determining the audio stream corresponding to the audio session filtered out by the virtual audio module 430 (audio stream I shown in FIG. 11), the audio output control module 420 of the first device instructs the audio playback module 440 of the first device and the audio playback module of the second device to play the audio corresponding to the video playback interface 502 at volume 1 and volume 2, respectively. In this case, as shown in FIG. 11, the audio channel of the second device sets volume 2 for playing audio stream I through a player (such as player) and then plays the audio through the speaker; the audio playback module 440 of the first device sets volume 1 for playing audio stream I through a player (such as player) and then plays the audio directly through the speaker.
Further, as the user continues to drag the video application window 501, when the video application window 501 changes from the state shown in FIG. 5B to the state shown in FIG. 5C, as shown in FIG. 6, the distributed audio playback method provided by this embodiment may further include the following steps S607-S608:
S607: When detecting that the video application window 501 is dragged out of the edge of the display screen of the first device, the first device sends second extended data to the second device. The second extended data includes second video data and second audio data.
When the video application window 501 changes from the state shown in FIG. 5B to the state shown in FIG. 5C, the first device completely transfers the video application window 501 and the audio corresponding to the video playback interface 502 to the second device. The second video data is the video data corresponding to the video playback interface 502, and the second audio data includes the audio stream corresponding to the video playback interface 502.
Further, in this embodiment, the second audio data in the second extended data may also include the volume (such as volume 3) at which the second device plays the audio stream corresponding to the video playback interface 502, instructing the second device to play the video playback interface 502 at volume 3.
In this embodiment, the first device may send the second extended data to the audio channel of the second device to instruct the second device to display the video playback interface 502 and synchronously play the corresponding audio.
S608: The second device displays the video application window 501 and plays the audio corresponding to the video playback interface 502 displayed in the video application window 501.
In some embodiments, the second device may play the audio corresponding to the video playback interface 502 at a default volume, for example the volume currently set by the audio playback module of the second device.
In other embodiments, as shown in FIG. 5C and FIG. 6, the second device may play the audio corresponding to the video playback interface 502 at volume 3, where volume 3 = volume 0 and volume 0 is the initial volume at which the first device plays the video playback interface 502 in the state shown in FIG. 5A.
Exemplarily, the audio channel of the second device may set volume 3 for playing audio stream I through a player (such as player) and then play the audio through the speaker.
With the distributed audio playback method provided by the embodiments of this application, in the process of extending the display of an interface including audio to the second device, the first device can switch the audio automatically and adaptively along with the distributed display of the interface. For example, when the user has not yet dragged the interface window, or has not yet dragged it out of the display screen range of the first device, only the first device plays the audio corresponding to the interface. When the user drags the interface window to a state in which one part is displayed within the display screen range of the first device and the other part within that of the second device, the audio corresponding to the interface is played in a distributed and synchronous manner. When the user drags the interface window completely into the display screen range of the second device, only the second device plays the audio corresponding to the interface. This approach reduces the user's audio setting operations during extended display and improves user experience.
Further, in some embodiments, when the user drags the interface window to a state in which the first part is displayed within the display screen range of the first device and the second part within that of the second device, the first device may also control the respective volumes at which the first device and the second device synchronously play the audio corresponding to the interface. For example, according to the specific proportions of the first part and the second part, the volumes at which the two devices synchronously play the audio are changed gradually as the interface window moves: as the interface window extends from the first device to the second device, the volume of the first device becomes smaller and smaller while the volume of the second device becomes larger and larger. In this way, the volume change automatically adapts to the change of the interface extension, providing a better user experience.
In some embodiments of this application, the channel between the first device and the second device may be a single channel used to transmit audio, video, control signaling, parameters, and so on; that is, all interaction data is exchanged over this channel and must conform to its data format. The channel between the first device and the second device may also be multiple channels, each used for one or more of audio transmission, video transmission, control signaling transmission, parameter transmission, and the like.
The foregoing embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of their technical features, and that such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of this application.

Claims (12)

  1. A distributed audio playback method, wherein the method comprises:
    a first device displays a first application window, and the first device plays first audio corresponding to the first application window;
    in response to a first user operation, the first device displays a first part of the first application window, and a second device receives video data sent by the first device and displays a second part of the first application window according to the video data; at this time, the first device plays the first audio and the second device plays the first audio;
    in response to a second user operation, the second device receives the video data sent by the first device and displays the first application window according to the video data; at this time, the second device plays the first audio.
  2. The method according to claim 1, wherein the method further comprises:
    the first device monitors the position of the first application window in the display area corresponding to the first device and determines, according to the position of the first application window, whether to send the video data to the second device;
    when the first application window is in the display area corresponding to the first device, the video data is not sent to the second device;
    when the first part of the first application window is in the display area corresponding to the first device and the second part of the first application window is in the display area corresponding to the second device, the video data is sent to the second device;
    when the first application window is in the display area corresponding to the second device, the video data is sent to the second device.
  3. The method according to claim 1 or 2, wherein the method further comprises:
    when the audio state of the first application window is the playing state, the first device sends the first audio to the second device;
    when the audio state of the first application window is at least one of paused, muted, or exited, the first audio is not sent to the second device.
  4. The method according to any one of claims 1 to 3, wherein the method further comprises:
    the first device and the second device play the first audio at the same volume.
  5. The method according to any one of claims 1 to 3, wherein the method further comprises:
    the first device plays the first audio at a first volume and the second device plays the first audio at a second volume, wherein the first volume and the second volume are obtained by the first device according to the position of the first application window.
  6. The method according to any one of claims 1 to 5, wherein the method further comprises:
    the first device obtains, according to an identifier of the first application window, an audio stream associated with the first application window from an audio session list, and generates the first audio according to the audio stream associated with the first application window.
  7. The method according to claim 6, wherein the identifier of the first application window comprises at least one of a window identifier, a browser identifier, and a path.
  8. The method according to any one of claims 1 to 7, wherein a first application corresponding to the first application window runs on the first device.
  9. The method according to any one of claims 1 to 8, wherein the first user operation and the second user operation are operations of dragging the first application window.
  10. The method according to any one of claims 1 to 9, wherein the first device is a notebook computer and the second device is a tablet computer.
  11. An electronic device, comprising: one or more processors and one or more memories, wherein the one or more memories are coupled to the one or more processors, the one or more memories are configured to store computer program code, the computer program code comprises computer instructions, and when the one or more processors execute the computer instructions, the electronic device is caused to perform the method according to any one of claims 1 to 10.
  12. A computer storage medium, wherein the computer storage medium stores a computer program, the computer program comprises program instructions, and the program instructions, when executed by a processor, perform the method according to any one of claims 1 to 10.
PCT/CN2021/140181 2021-02-28 2021-12-21 Distributed audio playing method and electronic device WO2022179273A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/547,985 US20240126505A1 (en) 2021-02-28 2021-12-21 Distributed Audio Playing Method and Electronic Device
EP21927697.9A EP4280042A1 (en) 2021-02-28 2021-12-21 Distributed audio playing method, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110222215.8 2021-02-28
CN202110222215.8A CN114968165A (zh) 2021-02-28 2021-02-28 Distributed audio playing method and electronic device

Publications (1)

Publication Number Publication Date
WO2022179273A1 true WO2022179273A1 (zh) 2022-09-01

Family

ID=82974044

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/140181 WO2022179273A1 (zh) 2021-02-28 2021-12-21 Distributed audio playing method and electronic device

Country Status (4)

Country Link
US (1) US20240126505A1 (zh)
EP (1) EP4280042A1 (zh)
CN (1) CN114968165A (zh)
WO (1) WO2022179273A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070124503A1 (en) * 2005-10-31 2007-05-31 Microsoft Corporation Distributed sensing techniques for mobile devices
CN104137048A (zh) * 2011-12-28 2014-11-05 Nokia Corporation Providing an open instance of an application
CN110383234A (zh) * 2019-02-20 2019-10-25 Shenzhen Eaglesoul Technology Co., Ltd. Screen projection method, apparatus and system, intelligent terminal, and storage medium
CN111913628A (zh) * 2020-06-22 2020-11-10 Vivo Mobile Communication Co., Ltd. Sharing method and apparatus, and electronic device
CN112083867A (zh) * 2020-07-29 2020-12-15 Huawei Technologies Co., Ltd. Cross-device object dragging method and device

Also Published As

Publication number Publication date
CN114968165A (zh) 2022-08-30
EP4280042A1 (en) 2023-11-22
US20240126505A1 (en) 2024-04-18

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21927697
    Country of ref document: EP
    Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: 18547985
    Country of ref document: US
ENP Entry into the national phase
    Ref document number: 2021927697
    Country of ref document: EP
    Effective date: 20230816
NENP Non-entry into the national phase
    Ref country code: DE