WO2020034227A1 - Multimedia content synchronization method and electronic device


Info

Publication number
WO2020034227A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
clock synchronization
text information
time
interface
Application number
PCT/CN2018/101202
Other languages
English (en)
Chinese (zh)
Inventor
赵朋
张志民
印凤行
马亮亮
王斌
邹晓磊
徐永攀
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to PCT/CN2018/101202 priority Critical patent/WO2020034227A1/fr
Priority to CN201880072195.7A priority patent/CN111345010B/zh
Publication of WO2020034227A1 publication Critical patent/WO2020034227A1/fr

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 — Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 — Support for services or applications

Definitions

  • the present application relates to the technical field of terminals, and in particular, to a multimedia content synchronization method and an electronic device.
  • Karaoke (K song) is gradually becoming part of people's daily life, and application software with a karaoke function is increasingly abundant.
  • the application software with the karaoke function is a combination of a music player and recording software: it can play the original vocal track, record the user's singing, and mix the recorded song with the accompaniment to obtain a music file.
  • the user can also upload the resulting music files to the Internet so that more people can hear his or her songs.
  • the present application provides a multimedia content synchronization method and an electronic device, so that the lyrics displayed by the first electronic device can be synchronized with the music accompaniment played by the second electronic device, thereby providing users with a better singing experience.
  • an embodiment of the present application provides a multimedia content synchronization method.
  • the method is applicable to a first electronic device.
  • the method includes: the first electronic device establishes a network connection with the second electronic device; when the first electronic device detects a first operation of the user, in response to the first operation, the first electronic device displays a first interface and sends an instruction to play an audio file to the second electronic device, where the first interface includes text information and the text information is associated with the audio file; the first electronic device then adjusts the text information according to a network delay between the first electronic device and the second electronic device, so that the text information is played in synchronization with the audio file.
  • the first electronic device adjusts the text information according to the network delay, and finally realizes synchronous playback of the text information and the audio file, thereby providing a better singing experience to the user.
  • before detecting the first operation of the user, the first electronic device sends a first clock synchronization request to the second electronic device, and then receives a first clock synchronization response sent by the second electronic device, where the first clock synchronization response includes the time at which the second electronic device received the first clock synchronization request and the time at which the second electronic device sent the first clock synchronization response;
  • the first electronic device then determines the network delay between the first electronic device and the second electronic device according to the sending time at which it sent the first clock synchronization request, the receiving time at which it received the first clock synchronization response, the receiving time at which the second electronic device received the first clock synchronization request, and the sending time at which the second electronic device sent the first clock synchronization response.
  • in addition, the first electronic device may determine the clock difference between the first electronic device and the second electronic device from the same four timestamps: the sending time of the first clock synchronization request, the receiving time of the first clock synchronization response, the time at which the second electronic device received the request, and the time at which it sent the response. If the clock difference is not zero, the text information is adjusted according to both the network delay and the clock difference between the first electronic device and the second electronic device.
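  • The delay and clock-difference computation described in the preceding two paragraphs matches a classic NTP-style four-timestamp exchange. The following is a minimal sketch, assuming the four timestamps t1–t4 as named in the comments (the function and variable names are illustrative, not from the patent):

    def estimate_delay_and_offset(t1, t2, t3, t4):
        """NTP-style estimates from one clock synchronization exchange.

        t1: time the first device sends the sync request (its own clock)
        t2: time the second device receives the request (its own clock)
        t3: time the second device sends the response (its own clock)
        t4: time the first device receives the response (its own clock)
        """
        # One-way network delay: the round trip minus the second device's
        # processing time, halved (assumes a roughly symmetric path).
        delay = ((t4 - t1) - (t3 - t2)) / 2
        # Clock difference of the second device relative to the first.
        offset = ((t2 - t1) + (t3 - t4)) / 2
        return delay, offset

    # Example: the second device's clock runs ~5 ms ahead and the
    # one-way delay is ~10 ms.
    delay, offset = estimate_delay_and_offset(0.000, 0.015, 0.016, 0.021)
    print(delay, offset)  # -> ~0.010, ~0.005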
  • the first electronic device may receive a scheduled playback instruction sent by the second electronic device, where the scheduled playback instruction indicates that the second electronic device will start to play the audio file a preset waiting time after a first moment.
  • the first electronic device may then determine a first duration according to the network delay between the first electronic device and the second electronic device, the first moment, and the preset waiting time, and adjust the text information according to the second moment at which the scheduled playback instruction was received and the first duration.
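  • Read concretely, the scheduled-playback design lets the first device compute how much of the announced wait remains when the reservation arrives. A hedged sketch under the assumption of a roughly symmetric network path (all names invented for illustration):

    def remaining_lyrics_wait(preset_wait, network_delay):
        """Seconds the first device should still wait, measured from the
        moment it receives the scheduled playback instruction, before it
        starts scrolling the lyrics.

        preset_wait:   wait the second device announced before playing
        network_delay: estimated one-way delay between the two devices
        """
        # The instruction was in flight for ~network_delay, so that much
        # of the announced wait has already elapsed on arrival. If the
        # computation instead uses the absolute first moment, a nonzero
        # measured clock difference must be applied as well.
        return preset_wait - network_delay

    # e.g. the speaker will start in 500 ms and the one-way delay is 20 ms:
    print(remaining_lyrics_wait(0.500, 0.020))  # -> 0.48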
  • the first electronic device receives audio file playback progress information sent by the second electronic device, and then adjusts the text information according to the network delay and the playback progress information.
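  • For this progress-report variant, the first device can simply seek the lyric display to the reported position plus the time the report spent in transit. A minimal sketch (the names are hypothetical):

    def target_lyric_position(reported_progress, network_delay):
        """Lyric position (seconds into the song) to jump to when a
        playback progress report arrives: the reported position is
        already ~network_delay old by the time it is received."""
        return reported_progress + network_delay

    # e.g. the speaker reports 12.0 s into the accompaniment and the
    # one-way delay is 30 ms: scroll the lyrics to ~12.03 s.
    print(target_lyric_position(12.0, 0.030))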
  • the first electronic device may also establish a network connection with the third electronic device.
  • the first electronic device receives first sound information recorded by the third electronic device, and sends it, together with second sound information recorded by the first electronic device, to the second electronic device, so that the second electronic device plays the second sound information and the first sound information.
  • an embodiment of the present application provides a multimedia content synchronization method.
  • the method is applicable to a third electronic device.
  • the method includes: the third electronic device establishes a network connection with the first electronic device; the third electronic device then receives an instruction for playing text information sent by the first electronic device, the instruction including progress information of the text information currently being played by the first electronic device; finally, the third electronic device adjusts its own text information according to the network delay between the first electronic device and the third electronic device and the progress information, so that the text information of the third electronic device is played in synchronization with the text information of the first electronic device.
  • the third electronic device adjusts the text information according to the network delay, and finally realizes synchronous playback of the text information between the third electronic device and the first electronic device, thereby providing a better singing experience to the user.
  • the third electronic device may further send a second clock synchronization request to the first electronic device, and then receive a second clock synchronization response sent by the first electronic device, where the second clock synchronization response includes the time at which the first electronic device received the second clock synchronization request and the time at which the first electronic device sent the second clock synchronization response;
  • the third electronic device then determines the network delay between the first electronic device and the third electronic device according to the sending time at which the third electronic device sent the second clock synchronization request, the receiving time at which the third electronic device received the second clock synchronization response, the receiving time at which the first electronic device received the second clock synchronization request, and the sending time at which the first electronic device sent the second clock synchronization response.
  • the third electronic device may also determine the clock difference between the first electronic device and the third electronic device from the same four timestamps: the sending time of the second clock synchronization request, the receiving time of the second clock synchronization response, the time at which the first electronic device received the request, and the time at which it sent the response. If the clock difference is not zero, the third electronic device adjusts the text information according to the network delay between the first electronic device and the third electronic device, the clock difference between the two devices, and the text information currently played by the first electronic device.
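  • The same idea drives the lyric-to-lyric synchronization between the third device and the first device, with the measured clock difference correcting any timestamp carried in the report. A sketch under the assumption that the progress report carries the sender's timestamp (all names illustrative):

    import time

    def follower_lyric_position(reported_progress, send_ts, clock_difference):
        """Lyric position the third (follower) device should display.

        reported_progress: lyric progress (seconds) when the first device
                           sent the report
        send_ts:           the first device's timestamp on the report
        clock_difference:  first device's clock minus the third device's
                           clock, from the second clock sync exchange
        """
        # Translate the sender's timestamp onto the local timeline, then
        # advance the reported progress by the report's age. Without a
        # timestamp, reported_progress + network_delay is the fallback.
        report_age = time.time() - (send_ts - clock_difference)
        return reported_progress + report_age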
  • an embodiment of the present application provides a method for synchronizing multimedia content.
  • the method is applicable to a second electronic device.
  • the method includes: the second electronic device establishes a network connection with the first electronic device; the second electronic device then receives an instruction to play an audio file sent by the first electronic device, where the audio file is associated with text information; the second electronic device then sends a scheduled playback instruction to the first electronic device, where the scheduled playback instruction indicates that the second electronic device will start playing the audio file a preset waiting time after a fourth moment, so that the first electronic device adjusts the text information according to the network delay between the first electronic device and the second electronic device, the fourth moment, and the preset waiting time.
  • the second electronic device can implement playing the text information and the audio file in synchronization with the first electronic device, thereby improving the user experience.
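  • On the second device's side, the reservation amounts to noting the current moment, announcing it together with a fixed wait, and starting playback when the wait expires. A hedged sketch; send_reservation and play_audio_file are placeholder callables standing in for the device's networking and audio stacks:

    import threading
    import time

    PRESET_WAIT = 0.5  # seconds; an illustrative value, not from the patent

    def reserve_and_play(send_reservation, play_audio_file):
        fourth_moment = time.time()
        # Tell the first device when playback will start, so it can
        # line up the lyrics on its side.
        send_reservation(fourth_moment, PRESET_WAIT)
        # Start the audio once the announced wait has elapsed.
        threading.Timer(PRESET_WAIT, play_audio_file).start()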
  • an embodiment of the present application provides a multimedia content synchronization method.
  • the method is applicable to a second electronic device.
  • the method includes: the second electronic device establishes a network connection with the first electronic device; the second electronic device then receives an instruction to play an audio file sent by the first electronic device, where the audio file is associated with text information; the second electronic device then sends playback progress information of the audio file to the first electronic device, so that the first electronic device adjusts the text information according to the network delay between the first electronic device and the second electronic device and the playback progress information.
  • the second electronic device can implement playing the text information and the audio file in synchronization with the first electronic device, thereby improving the user experience.
  • an embodiment of the present application provides an electronic device including a processor and a memory.
  • the memory is used to store one or more computer programs; when the one or more computer programs stored in the memory are executed by the processor, the electronic device can implement any one of the possible design methods of the foregoing aspects.
  • an embodiment of the present application further provides an apparatus, which includes a module / unit that executes any one of the possible design methods of the foregoing aspects.
  • modules / units can be implemented by hardware, and can also be implemented by hardware executing corresponding software.
  • an embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium includes a computer program, and when the computer program is run on an electronic device, the electronic device is caused to execute any of the foregoing aspects Any of the possible design methods.
  • an embodiment of the present application further provides a computer program product, which, when run on a terminal, causes the electronic device to execute any one of the possible design methods of the foregoing aspects.
  • FIG. 1 is a schematic diagram of an interconnection scenario according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a mobile phone according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an Android operating system according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a K song scene provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a group of interfaces according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another K song scene provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another group of interfaces according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another group of interfaces according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another group of interfaces according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another group of interfaces according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another group of interfaces according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another group of interfaces according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of another group of interfaces according to an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a smart speaker according to an embodiment of the present application.
  • FIG. 15 is a schematic diagram of another group of interfaces according to an embodiment of the present application.
  • FIG. 16-1 is a schematic diagram of another group of interfaces according to an embodiment of the present application.
  • FIG. 16-2 is a schematic diagram of another group of interfaces provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of another K song scene provided by an embodiment of the present application.
  • FIG. 18 is a schematic flowchart of a multimedia content synchronization method according to an embodiment of the present application.
  • FIG. 21 is a flowchart of another synchronization process according to an embodiment of the present application.
  • FIG. 22 is a schematic diagram of another group of interfaces according to an embodiment of the present application.
  • FIG. 23 is a schematic structural diagram of a mobile phone and a smart speaker according to an embodiment of the present application.
  • FIG. 24 is a schematic diagram of another K song implementation process provided by an embodiment of the present application.
  • FIG. 25 is a schematic diagram of another K song implementation process provided by an embodiment of the present application.
  • FIG. 26a is a schematic flowchart of another multimedia content synchronization method according to an embodiment of the present application.
  • FIG. 26b is a schematic diagram of another K song implementation process provided by an embodiment of the present application.
  • FIG. 27 is a schematic diagram of another K song implementation process provided by an embodiment of the present application.
  • FIG. 28 is a schematic diagram of another K song implementation process provided by an embodiment of the present application.
  • FIG. 29 is a schematic diagram of another K song scene provided by an embodiment of the present application.
  • FIG. 30 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 31 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
  • FIG. 32 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 33 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
  • Multimedia is the integration of multiple media, generally including text, sound, and images.
  • K song, also known as karaoke, lets singers sing along with pre-recorded music accompaniment.
  • Sound processing can beautify and retouch the singer's voice; when the voice is organically combined with the music accompaniment, it becomes an integrated stereo song. This style of accompaniment brings great convenience and joy to singing lovers and is a popular form of entertainment.
  • the multimedia content synchronization method provided in the embodiment of the present application can be applied to a scenario where multiple electronic devices 100 shown in FIG. 1 are interconnected based on a communication network.
  • the communication network may be a local area network or a wide area network transferred through a relay device.
  • the communication network may be a short distance communication network such as a wifi hotspot network, a wifi P2P network, a Bluetooth network, a zigbee network, or a near field communication (NFC) network.
  • the communication network may also be a 3rd-generation mobile communication technology (3G) network, a 4th-generation mobile communication technology (4G) network, a 5th-generation mobile communication technology (5G) network, a future-evolved public land mobile network (PLMN), or the Internet.
  • different electronic devices can exchange data through the communication network, for example exchanging pictures, text, and videos, or exchanging the results obtained after the electronic devices process objects such as pictures, text, or videos.
  • the electronic device 100 shown in FIG. 1 may be a portable electronic device that also includes other functions such as a personal digital assistant function and/or a music player function, for example a mobile phone, a tablet computer, or a wearable device with a wireless communication function (such as a smart watch).
  • such portable electronic devices include, but are not limited to, portable electronic devices running various operating systems.
  • the aforementioned portable electronic device may also be other portable electronic devices, such as a laptop computer having a touch-sensitive surface (eg, a touch panel), or the like.
  • the electronic device 100 may not be a portable electronic device, but a desktop computer with a touch-sensitive surface (such as a touch panel).
  • the embodiment is specifically described below by taking the electronic device 100 as an example.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, a SIM card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer parts than shown, or some parts may be combined, or some parts may be split, or different parts may be arranged.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals, and complete the control of fetching and executing instructions.
  • the processor 110 may further include a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory. This memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from this memory, which avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
  • the processor 110 may include one or more interfaces.
  • the interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be respectively coupled to a touch sensor 180K, a charger, a flash, a camera 193, and the like through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to implement a function of receiving a call through a Bluetooth headset.
  • the PCM interface can also be used for audio communication, to sample, quantize, and encode an analog signal.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement the function of receiving calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus for asynchronous communication.
  • the bus may be a two-way communication bus. It converts the data to be transferred between serial and parallel communications.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with a Bluetooth module in the wireless communication module 160 through a UART interface to implement a Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to implement a function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display 194, the camera 193, and the like.
  • the MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement a shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to implement a display function of the electronic device 100.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and a peripheral device. It can also be used to connect headphones and play audio through headphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present invention is only a schematic description, and does not constitute a limitation on the structure of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes or a combination of multiple interface connection modes in the above embodiments.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive a charging input of a wired charger through a USB interface.
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. While the charge management module 140 is charging the battery 142, the power management module 141 can also provide power to the electronic device.
  • the power management module 141 is used to connect the battery 142, the charge management module 140 and the processor 110.
  • the power management module 141 receives inputs from the battery 142 and / or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, number of battery cycles, battery health (leakage, impedance) and other parameters.
  • the power management module 141 may also be disposed in the processor 110.
  • the power management module 141 and the charge management module 140 may be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
  • the antenna 1 and the antenna 2 are used for transmitting and receiving electromagnetic wave signals.
  • Each antenna in the electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization. For example, a cellular network antenna can be multiplexed into a wireless LAN diversity antenna. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G / 3G / 4G / 5G and the like applied on the electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like.
  • the mobile communication module 150 may receive the electromagnetic wave by the antenna 1, and perform filtering, amplification, and other processing on the received electromagnetic wave, and transmit it to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic wave radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is configured to modulate a low-frequency baseband signal to be transmitted into a high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be a separate device.
  • the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other technologies.
  • the wireless communication module 160 may be one or more devices that integrate at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it into electromagnetic wave radiation through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing and is connected to the display 194 and an application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device 100 may include one or N display screens, where N is a positive integer greater than 1.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP processes the data fed back by the camera 193. For example, when taking a picture, the shutter is opened and light is transmitted through the lens to the camera's photosensitive element; the photosensitive element converts the light signal into an electrical signal and passes it to the ISP, which processes it and converts it into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin tone of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • An object generates an optical image through a lens and projects it onto a photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs digital image signals to the DSP for processing.
  • DSP converts digital image signals into image signals in standard RGB, YUV and other formats.
  • the electronic device 100 may include one or N cameras, where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals. In addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy and the like.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: MPEG1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent recognition of the electronic device 100, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, save music, videos and other files on an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121.
  • the memory 121 may include a storage program area and a storage data area.
  • the storage program area may store an operating system, at least one application required by a function (such as a sound playback function, an image playback function, etc.) and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the memory 121 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. Such as music playback, recording, etc.
  • the audio module 170 is configured to convert digital audio information into an analog audio signal and output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • the speaker 170A also called a "horn" is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B also referred to as the "handset" is used to convert audio electrical signals into sound signals.
  • the electronic device 100 answers a call or a voice message, it can answer the voice by holding the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mic", is used to convert a sound signal into an electrical signal.
  • the user can make a sound through the mouth near the microphone 170C, and input a sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C.
  • the electronic device 100 may be provided with two microphones, which, in addition to collecting sound signals, may also implement a noise reduction function.
  • the electronic device 100 may also be provided with three, four or more microphones to achieve sound signal collection, noise reduction, identification of sound sources, and directional recording.
  • the headset interface 170D is used to connect a wired headset.
  • the earphone interface may be a USB interface, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense a pressure signal, and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be disposed on the display screen 194.
  • the capacitive pressure sensor may be at least two parallel plates having a conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with a touch operation intensity lower than the first pressure threshold is applied to the short message application icon, an instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold is applied to the short message application icon, an instruction for creating a short message is executed.
  • the gyro sensor 180B may be used to determine a movement posture of the electronic device 100.
  • the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the angle at which the electronic device 100 shakes, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shake through reverse motion, achieving image stabilization.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the barometric pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C, and assists in positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip leather case by using the magnetic sensor 180D.
  • the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D, and further set characteristics such as automatic unlocking of the flip cover according to the detected open or closed state of the leather case or flip cover.
  • the acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor can also be used to recognize the posture of the electronic device, for applications such as switching between landscape and portrait screens and pedometers.
  • the distance sensor 180F is used to measure distance; the electronic device 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light through a light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared light reflected from a nearby object. When sufficient reflected light is detected, the electronic device 100 can determine that there is an object nearby; when insufficient reflected light is detected, it can determine that there is no object nearby.
  • the electronic device 100 may use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • Ambient light sensor 180L can also be used to automatically adjust white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 may use the collected fingerprint characteristics to realize fingerprint unlocking, access application lock, fingerprint photographing, fingerprint answering an incoming call, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by low temperature.
  • when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
  • the touch sensor 180K is also called a "touch panel". It can be disposed on the display 194 and is used to detect touch operations on or near it. The detected touch operation may be passed to the application processor to determine the type of touch event, and a corresponding visual output is provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100, at a position different from that of the display 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire a vibration signal of a human voice oscillating bone mass.
  • the bone conduction sensor 180M can also contact the pulse of a human body and receive a blood pressure beating signal.
  • the bone conduction sensor 180M may also be provided in the headset.
  • the audio module 170 may analyze a voice signal based on the vibration signal of the oscillating bone mass of the vocal part obtained by the bone conduction sensor 180M to implement a voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M to implement a heart rate detection function.
  • the keys 190 include a power-on key, a volume key, and the like.
  • the keys can be mechanical keys or touch keys.
  • the electronic device 100 may receive a key input, and generate a key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 may generate a vibration alert.
  • the motor 191 can be used for vibration alert for incoming calls, and can also be used for touch vibration feedback.
  • the touch operation applied to different applications can correspond to different vibration feedback effects.
  • touch operations acting on different areas of the display screen 194 can also correspond to different vibration feedback effects of the motor 191.
  • Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • Touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging status, power change, and can also be used to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to a subscriber identity module (SIM).
  • a SIM card can be brought into contact with and separated from the electronic device 100 by inserting it into or removing it from the SIM card interface 195.
  • the electronic device 100 may support one or N SIM card interfaces, and N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card, etc. Multiple SIM cards can be inserted into the same SIM card interface at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 may also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through a SIM card to implement functions such as calling and data communication.
  • the electronic device 100 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present invention takes the layered architecture Android system as an example, and exemplifies the software structure of the electronic device 100.
  • FIG. 3 is a software structural block diagram of an electronic device 100 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, each of which has a clear role and division of labor.
  • the layers communicate with each other through a software interface.
  • the Android system is divided into four layers, which are an application layer, an application framework layer, an Android runtime and system libraries, and a kernel layer from top to bottom.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, SMS, etc.
  • the application framework layer provides an application programming interface (API) and a programming framework for applications at the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
  • the window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and so on.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can consist of one or more views.
  • the display interface including the SMS notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide a communication function of the electronic device 100. For example, management of call status (including connection, hang up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay without user interaction.
  • the notification manager is used to inform download completion, message reminders, etc.
  • the notification manager can also present notifications in the status bar at the top of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or present a notification on the screen in the form of a dialog window.
  • for example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, or the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: function libraries that the Java language needs to call, and the Android core libraries.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • Virtual machines are used to perform object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules, for example: a surface manager, media libraries, a three-dimensional graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports a variety of commonly used audio and video formats for playback and recording, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • the 2D graphics engine is a graphics engine for 2D graphics.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
  • The following describes the workflow of the software and hardware of the electronic device 100 by way of example, in conjunction with a photographing scene.
  • When a touch operation is received, a corresponding hardware interrupt is sent to the kernel layer.
  • The kernel layer processes the touch operation into a raw input event (including touch coordinates, a time stamp of the touch operation, and other information). The raw input event is stored at the kernel layer.
  • The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a click operation and the control corresponding to the click operation is the camera application icon, the camera application calls the interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and the camera captures still images or videos.
  • the following embodiments can all be implemented in the electronic device 100 having the above-mentioned hardware structure.
  • the following embodiment will take the electronic device 100 as an example to describe the multimedia content synchronization method provided by the embodiment of the present application.
  • the multimedia content synchronization method provided in the embodiment of the present application can enable multiple electronic devices connected to each other to display the lyrics synchronized with the accompaniment music when the electronic device plays the accompaniment music.
  • the electronic device may be a smart device such as a mobile phone, a smart speaker, or a smart TV.
  • the smart speaker is an upgrade of the traditional speaker.
  • the user can control the smart speaker through voice commands to implement functions such as on-demand songs and weather forecasting.
  • Mainly taking the interconnection between the electronic device 1 and a smart speaker as an example, the electronic device 1 establishes a network connection with the smart speaker in the room.
  • When the electronic device 1 detects the user's K song request operation, the electronic device 1 sends an instruction to play the audio file to the smart speaker; the instruction may include the audio file, or may include access link information of the audio file.
  • the electronic device 1 also stores text information related to the audio file, such as lyrics.
  • The electronic device 1 adjusts the text information according to the network delay with the smart speaker, so that the text information displayed in the electronic device 1 is synchronized with the audio file played in the smart speaker.
  • the electronic device 1 displays an interface as shown in FIG. 5b, which includes two options of a single mode and a chorus mode.
  • When the electronic device 1 detects an operation (for example, clicking) performed by the user 1 on the single-player mode 502 in FIG. 5b, the electronic device 1 displays an interface as shown in FIG. 5c.
  • The interface includes two favorite song entries of the user 1: Kiss Farewell and Red Beans.
  • When the electronic device 1 detects an operation (for example, clicking) performed by the user 1 on the Kiss Farewell option 503 in FIG. 5c, the electronic device 1 displays the interface shown in FIG. 5d, which scrolls the lyrics; the lyrics reminder progress is synchronized with the playback progress of the accompaniment audio file in the smart speaker.
  • the electronic device 1 and the electronic device 2 establish a network connection with the smart speaker in the room.
  • When the electronic device 1 detects a K song request operation input by the user 1, the electronic device 1 sends an instruction to the smart speaker to play the audio file.
  • The instruction may include the audio file, or may include access link information of the audio file.
  • Clock synchronization is performed between the electronic device 1, the electronic device 2 and the smart speaker.
  • When the smart speaker plays the audio file, the electronic device 1 and the electronic device 2 adjust the lyrics according to the network delay, so that the lyrics reminder progress displayed in the electronic device 1 and the electronic device 2 is synchronized with the playback progress of the accompaniment audio file in the smart speaker.
  • When the electronic device 1 detects an operation (for example, clicking) performed by the user 1 on the chorus mode 701 in FIG. 7a, the electronic device 1 displays an interface as shown in FIG. 7b.
  • The interface includes two options: create and join.
  • When the electronic device 1 detects an operation (for example, clicking) performed by the user 1 on the create control 702 in FIG. 7b, the electronic device 1 displays an interface as shown in FIG. 7c, which shows that the device name of the electronic device 1 is Huawei P20, and displays the QR code information of the K song group created by the electronic device 1.
  • The electronic device 1 then displays an interface as shown in FIG. 7d, which shows that it is waiting for friends to join the K song group; the device name corresponding to the electronic device 2 is Apple 6S.
  • the electronic device 1 displays an interface as shown in FIG. 7e.
  • The interface in FIG. 7e shows that there are two members in the K song group, with the device names Huawei P20 and Apple 6S, and that the favorite songs include two song entries: Kiss Farewell and Red Beans.
  • the embodiment of the present application also provides a way for the electronic device 1 to invite the electronic device 2 to join the K song group.
  • Referring to FIG. 8a, when the electronic device 1 detects an operation (such as clicking) performed by the user 1 on the create control 801 in FIG. 8a, the electronic device 1 displays an interface as shown in FIG. 8b.
  • The interface shows that the electronic device 1 is searching for nearby electronic devices, and the electronic device 2 has been found; the device name of the electronic device 2 is Apple 6S.
  • When the electronic device 1 detects that the user 1 acts on the avatar 802 corresponding to the Apple 6S, the electronic device 1 sends a request message to the electronic device 2 inviting it to join the K song group.
  • the manner in which the electronic device 1 searches for nearby electronic devices may be, but is not limited to, network connection methods such as Bluetooth, wifi hotspot, wifi direct connection, or NFC.
  • the user 1 may enter a setting menu to open a network type of wifi hotspot, and search for nearby electronic devices through the wifi hotspot.
  • user 1 can also choose to open the wifi hotspot network type through the shortcut function control in the notification bar.
  • While waiting for a friend to join the K song group, the electronic device 1 displays an interface as shown in FIG. 8c, which shows that the device name corresponding to the electronic device 2 is Apple 6S.
  • the electronic device 1 displays an interface as shown in FIG. 8d.
  • The interface in FIG. 8d shows that there are two members in the K song group, with the device names Huawei P20 and Apple 6S, and that the favorite songs include two song entries: Kiss Farewell and Red Beans.
  • When the electronic device 1 detects the operation performed by the user 1 on the K song 901 control in the Kiss Farewell option in the interface shown in FIG. 7e or FIG. 8d, the electronic device 1 displays the interface shown in FIG. 9b, and the electronic device 2 displays the interface shown in FIG. 9c.
  • The content shown in FIG. 9b and FIG. 9c is the same: both include the two members in the K song group, whose device names are Huawei P20 and Apple 6S respectively, and both include the song title "Kiss Farewell" as well as the playback progress and lyrics.
  • The lyrics reminders in FIG. 9b and FIG. 9c have the same progress, and both have advanced to "me and you".
  • FIG. 9b and FIG. 9c further include a repeat play control 902, a microphone control 903, and an end control 904.
  • When the electronic device 1 detects the sound of the user 1, the microphone control 903 shows a sound wave change to remind the user 1 that the user's audio is being collected; likewise, when the electronic device 2 detects the sound of the user 2, its microphone control 903 shows a sound wave change to remind the user 2 that the user's audio is being collected.
  • When the electronic device 1 detects that the user acts on the end control 904, it ends the K song and saves the audio file of the song.
  • this application provides another embodiment.
  • the electronic device 2 responds to the user 2's operation (for example, clicking) on the K song application 1001 shown in FIG. 10a, and displays an interface as shown in FIG. 10b.
  • the interface includes two options, single mode and chorus mode.
  • When the electronic device 2 detects an operation (for example, clicking) performed by the user 2 on the chorus mode 1002 in FIG. 10b, the electronic device 2 displays an interface as shown in FIG. 10c.
  • The interface includes two options: create and join.
  • When the electronic device 2 detects that the user 2 acts on (for example, clicks) the join control 1003 in FIG. 10c, the electronic device 2 displays an interface as shown in FIG. 10d, which shows that the electronic device 2 is scanning the QR code displayed by the electronic device 1.
  • the electronic device 2 displays an interface as shown in FIG. 10e.
  • The interface includes the two members of the K song group, whose device names are Huawei P20 and Apple 6S; "waiting for the main mobile phone to start playing" displayed on the interface indicates that the electronic device 2 is waiting for the instruction of the electronic device 1 to start playing.
  • both electronic device 2 and electronic device 1 display the interface shown in FIG. 10f.
  • The interface includes the two members of the K song group, whose device names are Huawei P20 and Apple 6S, and also includes the song title "Kiss Farewell" as well as the playback progress and lyrics. The electronic device 1 and the electronic device 2 have the same lyrics reminder progress, and both have advanced to "me and you".
  • When the electronic device 2 detects that the user 2 acts on (such as by clicking) the join control 1101 in FIG. 11a, the electronic device 2 displays an interface as shown in FIG. 11b.
  • The interface shows that the electronic device 2 is searching for nearby electronic devices, and the electronic device 1 has been found.
  • The device name of the electronic device 1 is Huawei P20.
  • When the electronic device 2 detects that the user 2 acts on the avatar 1102 corresponding to the Huawei P20, the electronic device 2 sends a request message to the electronic device 1 requesting to join the K song group.
  • the manner in which the electronic device 2 searches for nearby electronic devices may be, but is not limited to, network connection methods such as Bluetooth, wifi hotspot, wifi direct connection, or NFC.
  • the user 2 may enter the setting menu of the electronic device 2 to open the network type of wifi hotspot, and search for nearby electronic devices through the wifi hotspot.
  • user 2 can also choose to open the wifi hotspot network type through the shortcut function control in the notification bar.
  • the electronic device 2 displays an interface as shown in FIG. 11c, which includes two members of the K song group, and the device names are Huawei P20 and Apple 6S, where "waiting for the main mobile phone to start playing" displayed on the interface indicates that the electronic device 2 is waiting for the instruction of the electronic device 1 to start playing.
  • the electronic device 1 detects the K song command from user 1
  • both electronic device 2 and electronic device 1 display the interface shown in Figure 11d.
  • This interface includes two members of the K song group.
  • the device names are Huawei P20 and Apple. 6S also includes the song title "Kiss Farewell", as well as the playback progress and lyrics. Among them, the electronic device 1 and electronic device 2 have the same lyrics reminder progress, and both remind me to "me and you”.
  • the electronic device 1 obtains the voice of the user 1, generates audio information of the user 1, and sends the audio information to the smart speaker for playback.
  • the electronic device 2 acquires the voice of the user 2, generates audio information of the user 2, and sends the audio information to the smart speaker for playback.
  • When the electronic device 1 detects that the user 1 acts on the microphone control 1201 in the interface shown in FIG. 12a, the electronic device 1 sends a playback pause instruction to the smart speaker and displays the interface shown in FIG. 12b, in which the microphone control is switched to the pause control 1202 and the lyrics stop scrolling.
  • The electronic device 2 displays the interface shown in FIG. 12c; the display content of FIG. 12c is the same as that of FIG. 12b, that is, the electronic device 2 also stops scrolling the lyrics.
  • When the electronic device 1 and the electronic device 2 both display the interface shown in FIG. 11d, and the electronic device 2 detects that the user 2 acts on the microphone control in the interface shown in FIG. 13a, the electronic device 2 displays an interface as shown in FIG. 13b, in which the microphone control is switched to the pause control 1302 and the lyrics stop scrolling.
  • Meanwhile, the electronic device 1 still displays the interface shown in FIG. 13c, that is, the electronic device 1 continues to display the lyrics reminder normally.
  • Optionally, the electronic device 2 may also send a stop playback instruction to the smart speaker; in that case, both the electronic device 1 and the electronic device 2 display the interface as shown in FIG. 13b.
  • When the smart speaker detects that the user 1 or the user 2 acts on the pause control 1401 on the smart speaker control panel, the smart speaker sends a playback pause instruction to the electronic device 1 and the electronic device 2. After receiving the playback pause instruction, the electronic device 1 and the electronic device 2 both stop scrolling the lyrics and display the interface as shown in FIG. 13b.
  • The electronic device 1 mixes the sounds of the user 1 and the user 2 with the accompaniment music, and saves the recorded audio.
  • The saved audio file includes the audio information of the user 1, or the audio information of the chorus of the user 1 and the user 2.
  • the electronic device 1 detects that the user acts on the sharing control 1501 on the interface shown in FIG. 15a, the electronic device 1 displays the interface shown in FIG. 15b, which includes multiple sharing methods, such as WeChat sharing, QQ sharing, Bluetooth sharing, WiFi hotspot sharing, or saving access links to the cloud.
  • When the electronic device 1 detects the operation of the user 1 acting on the WeChat share 1502 control in the interface shown in FIG. 15b, the electronic device 1 displays the interface shown in FIG. 15c. When the user 1 operates the interface to select the controls of friends, the electronic device 1 displays the interface shown in FIG. 15d.
  • The interface shows that the user 1 has selected three persons: brother, mother, and father.
  • The electronic device 1 then sends the above audio file to the electronic devices corresponding to brother, mother, and father.
  • The electronic device 1 may share an audio file obtained by mixing the voices of the user 1 and the user 2 with the accompaniment music, or an audio file obtained by mixing the voice of the user 1 with the accompaniment music.
  • When the electronic device 1 detects that the user acts on the control 1601 in the interface shown in FIG. 16a, the electronic device 1 displays the interface shown in FIG. 16b; this interface includes two modes: personal MV and video chorus. If the electronic device 1 is currently in the single-person mode, the user 1 can select the personal MV mode. When the electronic device 1 responds to the operation of the user 1 on the personal MV control 1602, the interface shown in FIG. 16c is displayed, which shows the recorded video with the lyrics scrolled over it, so that the electronic device 1 can record and generate a music video of the song Kiss Farewell. After the recording is finished, the electronic device 1 can distribute the video to other people in the manner shown in FIG. 15.
  • A short-distance connection or a data network connection may be used between the electronic device 1 and the smart speaker in the above scenario 1, and between the electronic device 1, the electronic device 2, and the smart speaker in scenario 2.
  • the electronic device 1 and the electronic device 2 and the smart speaker can establish a connection based on a short-range connection method such as a wifi hotspot, a wifi direct connection, Bluetooth, zigbee, or NFC.
  • For example, the electronic device 1 may establish a wifi hotspot, and the electronic device 2 and the smart speaker may access the wifi hotspot, thereby establishing a short-distance connection among the electronic device 1, the electronic device 2, and the smart speaker.
  • the electronic device 1 and the electronic device 2 and the smart speaker can also be connected to the same wifi network, that is, connected to the same router, so that the electronic device 1 and the electronic device 2 and the smart speaker can also achieve interconnection and interoperability.
  • the above scenarios require strong data transmission capabilities and fast transmission speeds, so efficient transmission methods such as wifi hotspot or wifi direct connection can be used.
  • The electronic device 1 and the electronic device 2 may simultaneously display the interface shown in FIG. 16-2, that is, the progress of scrolling the lyrics on the electronic device 1 is the same as that on the electronic device 2, and the movement position of the cursor is the same.
  • the embodiment of the present application also provides some K song scenes.
  • The electronic device 1 may establish a network connection with the smart speaker 1702 in the car; after the electronic device 1 receives the K song request input by the user, the electronic device 1 sends the accompaniment audio file and the original vocal audio file of the song to the smart speaker 1702 according to the K song request.
  • the electronic device 1 also stores a lyrics file of the song.
  • When the smart speaker 1702 plays the accompaniment audio file of the song, the progress of the lyrics reminder displayed in the electronic device 1 is synchronized with the playback progress of the accompaniment audio file in the smart speaker 1702.
  • the smart speaker in the foregoing embodiment may also be replaced with an electronic device such as a television, a computer, or a mobile phone.
  • The electronic device 1 is a mobile phone that functions as both a smart speaker and a microphone, collecting the audio information of the user 1 while playing the backing music.
  • The electronic device 2 is another mobile phone, which joins in the chorus mode; after its lyrics are synchronized with those of the electronic device 1, it collects the audio information of the user 2 and sends it to the electronic device 1 for playing.
  • An embodiment of the present application further provides a method for synchronizing multimedia content, which can implement synchronous playback of text information and audio files between electronic devices. As shown in FIG. 18, the method includes:
  • a second electronic device establishes a network connection with the first electronic device.
  • the first electronic device and the second electronic device may be connected at a short distance based on a communication network such as a wifi hotspot, wifi direct connection, Bluetooth, zigbee, or NFC.
  • a communication network such as a wifi hotspot, wifi direct connection, Bluetooth, zigbee, or NFC.
  • the first operation is an operation that the user 1 acts on the K song 901 control.
  • the first operation includes a voice operation or a gesture operation.
  • the gesture operation may include a touch gesture or a hover gesture.
  • Touch gestures can include, but are not limited to, clicking, double clicking, long pressing, pressing or dragging, and the like.
  • the first electronic device displays a first interface, and sends an instruction to play an audio file to the second electronic device.
  • the first interface is shown in FIG. 9b.
  • the first interface includes text information, and the text information is associated with the audio file.
  • the first electronic device adjusts the text information according to a network delay between the first electronic device and the second electronic device, so that the text information and the audio file are played synchronously.
  • Here, playing the text information of the first electronic device in synchronization with the audio file can mean that when the smart speaker plays the accompaniment corresponding to "me and you", the electronic device 1 advances the display of the text information by 0.5 seconds, for example to "I kiss you goodbye".
  • The audio file played by the second electronic device may be carried in the instruction to play the audio file sent by the first electronic device, or may be obtained from a cloud server according to the link information in the instruction to play the audio file, for example, obtained from a music cloud server.
  • The network delay and clock difference between the first electronic device and the second electronic device can be calculated in advance in the following manner, as shown in FIG. 19.
  • Step a the first electronic device sends a clock synchronization request to the second electronic device, and the first electronic device records that the sending time of the clock synchronization request is T1;
  • Step b after the second electronic device receives the clock synchronization request, it records that the reception time of the clock synchronization request is T2;
  • Step c the second electronic device sends a clock synchronization request response to the first electronic device, and the second electronic device records that the sending time of the clock synchronization request response is T3;
  • Step d after receiving the clock synchronization request response, the first electronic device records that the reception time of the clock synchronization request response is T4;
  • Step e The second electronic device sends T2 and T3 to the first electronic device, and the first electronic device calculates the network delay and clock difference according to T1, T2, T3, and T4.
  • Assume that the network delay from the first electronic device to the second electronic device is the same as the network delay from the second electronic device to the first electronic device, labeled T_delay, and that the clock difference between the first electronic device and the second electronic device is labeled T_offset.
  • Then the following equations hold: T2 = T1 + T_delay + T_offset and T4 = T3 + T_delay - T_offset, from which T_delay = ((T4 - T1) - (T3 - T2)) / 2 and T_offset = ((T2 - T1) - (T4 - T3)) / 2.
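For illustration, the following is a minimal sketch of the calculation in steps a through e, assuming the standard symmetric round-trip formulas above and defining T_offset as the second device's clock minus the first device's; the function name and example values are illustrative, not part of this application.

```python
def compute_delay_and_offset(t1: float, t2: float, t3: float, t4: float):
    """One round of clock synchronization (steps a-e above).

    t1: time the first device sent the clock synchronization request (its clock)
    t2: time the second device received the request (its clock)
    t3: time the second device sent the response (its clock)
    t4: time the first device received the response (its clock)
    Assumes the forward and return network delays are equal.
    """
    t_delay = ((t4 - t1) - (t3 - t2)) / 2   # one-way network delay
    t_offset = ((t2 - t1) - (t4 - t3)) / 2  # second device's clock minus first's
    return t_delay, t_offset

# Example: the second device's clock runs 100 ms ahead; one-way delay is 20 ms.
delay, offset = compute_delay_and_offset(t1=0.000, t2=0.120, t3=0.150, t4=0.070)
print(delay, offset)  # 0.02 0.1
```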
  • The embodiment of the present application also provides a network delay update method: the network delay calculation shown in FIG. 19 is initiated every set time period (for example, 300 ms), and when the network delay is stable across multiple calculations, the delay is updated to the latest calculated network delay value. For details, see FIG. 20.
  • step 2001 the first electronic device calculates a network delay value according to the network delay calculation method shown in FIG. 19 for each set duration.
  • step 2002 the first electronic device saves the calculated network delay value to a network delay list.
  • In step 2003, when the first electronic device determines that the number of network delay values in the network delay list reaches a preset value (for example, 5 delay values in the network delay list), the first electronic device proceeds to step 2004; otherwise, it returns to step 2001.
  • step 2004 the first electronic device performs variance calculation on all delay values in the network delay list.
  • step 2005 when the first electronic device determines that the calculated variance is not less than a set threshold, it continues to perform step 2006, otherwise it executes step 2007.
  • step 2006 the first electronic device does not update the current delay value.
  • In step 2007, the first electronic device updates the current delay value to the latest calculated delay value.
  • In other words, the first electronic device calculates the variance for every 5 delay values; if the variance is less than the preset threshold, the current delay is updated to the latest calculated delay value, and if the variance is not less than the preset threshold, the delay value is not updated.
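The update loop of steps 2001 through 2007 can be sketched as follows. This is a minimal sketch in which the 300 ms period, the list size of 5, and the variance threshold are the example values mentioned above, and `measure_delay_once` is an assumed stand-in for one round of the FIG. 19 calculation.

```python
import statistics
import time

SAMPLE_PERIOD_S = 0.3      # initiate a measurement every 300 ms (example value)
LIST_SIZE = 5              # preset number of delay values (example value)
VARIANCE_THRESHOLD = 1e-4  # set threshold for stability (assumed value)

def track_network_delay(measure_delay_once, current_delay: float) -> float:
    """Collect LIST_SIZE delay samples, then update only if the delay is stable."""
    delay_list = []                                  # step 2002: the delay list
    while len(delay_list) < LIST_SIZE:               # steps 2001-2003
        delay_list.append(measure_delay_once())
        time.sleep(SAMPLE_PERIOD_S)
    variance = statistics.pvariance(delay_list)      # step 2004
    if variance < VARIANCE_THRESHOLD:                # steps 2005 and 2007
        return delay_list[-1]                        # adopt the latest value
    return current_delay                             # step 2006: keep old value
```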
  • Manner 1: When the second electronic device sends the playback progress information of the audio file to the first electronic device, the first electronic device may calculate the playback progress of the text information according to the network delay between the second electronic device and the first electronic device and the playback progress information of the audio file, and then adjust the text information according to the calculated playback progress. It should be noted that the network delay between the second electronic device and the first electronic device may be a positive value or a negative value.
  • Manner 2: When the second electronic device sends the playback progress information of the audio file to the first electronic device, the first electronic device may calculate the playback progress of the text information according to the network delay between the second electronic device and the first electronic device, the clock difference, and the playback progress information of the audio file, and then adjust the text information according to the calculated playback progress.
  • Manner 3: When the second electronic device sends a scheduled playback instruction to the first electronic device, because the scheduled playback instruction indicates that the second electronic device will start playing the audio file after a preset waiting duration following a first moment, the first electronic device may determine a first duration according to the first moment, the network delay, and the preset waiting duration. Once the first duration is determined, the first electronic device starts displaying the text information after the first duration has elapsed since receiving the scheduled playback instruction.
  • The specific implementation process of Manner 3 includes the following steps, as shown in FIG. 21.
  • Step 2101 The first electronic device sends an instruction to play an audio file to the second electronic device.
  • Step 2102 The second electronic device sends a scheduled playback instruction to the first electronic device.
  • The scheduled playback instruction includes a preset waiting time period Twait and also carries the current time T1.
  • The scheduled playback instruction indicates that the second electronic device will start playing the audio file after the Twait period has elapsed following the current time T1.
  • Step 2103 The first electronic device receives the scheduled playback instruction, and records a reception time T2 of the scheduled playback instruction.
  • Step 2104 The first electronic device calculates a waiting time period Tn of the first electronic device according to T1, Twait, and T2, and a known network delay value.
  • the first electronic device starts displaying text information after a time period Tn after the current time T2.
  • In step 2104, since the first electronic device and the second electronic device must start playing at the same time, the following condition is satisfied: Tstart1 = Tstart2, where Tstart1 is the playback start time of the first electronic device, Tstart2 is the playback start time of the second electronic device, Twait is the waiting duration of the second electronic device, and T_delay is the network delay between the first electronic device and the second electronic device.
  • Solving for the waiting duration gives Tn = Twait - T_delay + T_offset, where T_offset is the clock difference between the first electronic device and the second electronic device.
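Using the formula above, the waiting duration on the first electronic device can be sketched as follows. In this minimal sketch, T1 and T2 enter only through the previously measured T_delay and T_offset, and the function and variable names are illustrative.

```python
def first_device_wait(t_wait: float, t_delay: float, t_offset: float) -> float:
    """Waiting duration Tn of the first device (Tn = Twait - T_delay + T_offset).

    t_wait:   preset waiting duration Twait carried in the scheduled playback
              instruction
    t_delay:  one-way network delay, measured as in FIG. 19
    t_offset: clock difference between the two devices, measured as in FIG. 19
    """
    return t_wait - t_delay + t_offset

# Example: Twait = 2 s, delay = 20 ms, clocks aligned -> wait roughly 1.98 s
# after receiving the instruction, then start scrolling the lyrics.
tn = first_device_wait(t_wait=2.0, t_delay=0.02, t_offset=0.0)
```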
  • the embodiment of the present application also provides an implementation manner of the interactive interface shown in FIG. 22.
  • the electronic device 1 displays a countdown status control 2201 shown in FIG. 22a to FIG. 22c.
  • When the countdown status control 2201 disappears, the electronic device 1 starts to scroll the lyrics and the smart speaker starts to play the accompaniment simultaneously.
  • The calculation processes shown in FIG. 19 and FIG. 21 can also be performed by a third electronic device, which may be interconnected with the first electronic device and the second electronic device.
  • For example, a router obtains information such as T1, T2, T3, and T4 from the first electronic device, calculates the network delay and clock difference, determines the lyrics playback progress of the first electronic device, and sends the lyrics playback progress to the first electronic device.
  • the third electronic device may be a mobile phone, a router, a washing machine, a refrigerator, a television, or the like that is interconnected with the first electronic device and the second electronic device.
  • In the following, taking the case where the first electronic device is a mobile phone and the second electronic device is a smart speaker as an example, the specific implementation of the above-mentioned K song process will be described.
  • On the mobile phone side, in addition to the APP control module, the communication module, and the wifi/Bluetooth module, there are mainly a sound effect processing module, a microphone (MIC) recording module, a synchronization module, a lyrics display module, and a multi-phone/smart speaker management module.
  • The sound effect processing module mainly completes processing such as voice enhancement, echo, and howling suppression after the mobile phone records the human voice.
  • The MIC recording module mainly completes the sound recording and VAD processing.
  • The synchronization module uses the network synchronization algorithm to synchronize the accompaniment playback and lyrics between the devices.
  • The lyrics display module adjusts and displays the lyrics according to the results obtained by the synchronization algorithm.
  • The multi-phone/smart speaker management module manages the chorus and access of multiple phones and smart speakers.
  • The smart speaker side mainly includes an accompaniment processing module, a real-time sound processing module, a synchronization module, a sound fusion module, a playback module, and a multi-phone management module.
  • The accompaniment processing module mainly completes the transmission and synchronization of the accompaniment.
  • The real-time sound processing module uses delay algorithms to smooth the sound from each mobile phone, including algorithms such as active packet loss and message delay, to ensure that the voice from the mobile phone side can be received and processed on the smart speaker side in real time.
  • The synchronization module uses the network synchronization algorithm to synchronize accompaniment playback and lyrics between multiple devices.
  • The sound fusion module performs fusion processing on the sounds from the various mobile phones.
  • The playback module mixes the vocals and the accompaniment for playback.
  • The multi-phone management module manages multi-phone chorus and access.
  • the embodiment of the present application further provides a schematic diagram of the interaction between the mobile phone and the smart speaker shown in FIG. 24, and illustrates the specific implementation of the single-person mode in the above embodiment.
  • Step 2401a to step 2401e The APP control module of the mobile phone issues a connection request, the communication module of the mobile phone actually initiates a connection to the smart speaker, and the mobile phone and the smart speaker respectively establish a control channel and a data channel.
  • the communication module on the mobile phone side notifies the APP control module of the mobile phone that a communication connection has been established; the communication module on the smart speaker side notifies the APP control module of the smart speaker that a communication connection has been established.
  • Step 2402 The APP control module on the mobile phone side notifies the synchronization module on the mobile phone side to start synchronization.
  • Step 2403 The synchronization module on the mobile phone side sends a synchronization request message to the synchronization module on the smart speaker side.
  • Step 2404 The synchronization module on the smart speaker side sends a synchronization response message to the synchronization module on the mobile phone side.
  • Step 2405 The synchronization module on the mobile phone side performs synchronization calculation, and calculates the delay value and the clock difference.
  • Step 2406 The synchronization module on the smart speaker side performs synchronization calculation, and the delay value and the clock difference are calculated.
  • Step 2407 The APP control module on the smart speaker side notifies the accompaniment processing module to start playing the accompaniment.
  • Step 2408 The accompaniment processing module on the smart speaker side sends a scheduled playback instruction to the synchronization module on the smart speaker side.
  • Step 2409 The synchronization module on the smart speaker side issues an instruction to schedule playback to the synchronization module on the mobile phone side.
  • Step 2410 After receiving the scheduled playback instruction, the synchronization module on the mobile phone side notifies the lyrics display module to scroll and display the lyrics.
  • Step 2411 The accompaniment processing module on the smart speaker side sends the accompaniment audio file to the playback module.
  • Step 2412 both sides of the mobile phone and the smart speaker perform lyrics processing and accompaniment processing according to the scheduled playback time.
  • Step 2413 The MIC recording module on the mobile phone side starts recording.
  • Step 2414: The MIC recording module on the mobile phone side can perform voice activity detection (Voice Activity Detection, VAD) on the sound.
  • The MIC recording module on the mobile phone side also performs sound effect processing, such as howling suppression, sound filtering, and sound coordination processing on the sound itself.
  • Step 2415 After the MIC recording module on the mobile phone processes the sound data, it sends the sound data to the smart speaker.
  • Step 2416 After the smart speaker receives the sound, the real-time sound processing module processes the sound.
  • For example, the real-time sound processing module performs repair processing on the sound, performs delay judgment on the sound, and discards sounds whose delay exceeds a certain interval, or processes them at reduced volume (because human ears can distinguish two sounds that are sufficiently separated in time).
  • Step 2417: After the real-time sound processing module processes the sound, it sends the sound data to the sound fusion module on the smart speaker side.
  • Step 2418 The sound fusion module on the smart speaker side mixes the accompaniment and the human voice.
  • Step 2419 The playback module on the smart speaker side finally plays the sound.
  • The steps in FIG. 24 described above do not have a strict sequence; moreover, not all of them necessarily need to be performed, and they can be adjusted as needed in actual implementation.
  • the mobile phone and the smart speaker can establish a communication connection in the same space, and can also establish a cloud connection through the network in different spaces.
  • The mobile phone and the smart speaker can start playing simultaneously at the scheduled time, or, during the playback of the smart speaker, the mobile phone can use the above synchronization mechanism to align the lyrics in real time. For example, the smart speaker actively reports the playback progress Ta, and the mobile phone side calculates T_delay according to the synchronization mechanism and determines the current scrolling progress of the lyrics based on Ta and T_delay.
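A minimal sketch of this real-time alignment follows, assuming the speaker periodically reports its playback progress Ta over the established channel; the function name is illustrative, and the optional offset term corresponds to Manner 2 above.

```python
def lyric_scroll_position(ta: float, t_delay: float, t_offset: float = 0.0) -> float:
    """Estimate the lyric position when a progress report arrives at the phone.

    ta:       playback progress (seconds into the accompaniment) reported by
              the smart speaker
    t_delay:  one-way network delay of the report
    t_offset: optional clock-difference correction (Manner 2 above); zero when
              the progress is interpreted purely relatively
    By the time the report reaches the phone, the accompaniment has advanced
    by roughly t_delay, so the lyrics scroll to ta + t_delay (+ t_offset).
    """
    return ta + t_delay + t_offset

# Example: the speaker reports 12.500 s and the measured delay is 40 ms,
# so the phone scrolls the lyrics to 12.540 s.
position = lyric_scroll_position(ta=12.5, t_delay=0.04)
```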
  • the embodiment of the present application also provides an embodiment, which discloses that the mobile phone can control the running state of the smart speaker, so that the smart speaker pauses, resumes, and stops playing the accompaniment, as shown in FIG. 25.
  • step 2501 during the K song process, when the mobile phone detects that the user acts on the pause touch control, the mobile phone stops scrolling the lyrics.
  • Step 2502 The mobile phone sends a pause instruction to the smart speaker.
  • step 2503 the smart speaker stops playing the accompaniment after receiving the pause playback instruction.
  • the operating state of the mobile phone may also be controlled by a smart speaker, and the principle is similar to the process shown in FIG. 25, and details are not described herein again.
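As a sketch of this control flow, assuming a simple JSON message on the control channel (the message fields and the player object are illustrative assumptions, not part of this application):

```python
import json
import socket

def send_pause(control_sock: socket.socket) -> None:
    """Step 2502: send a pause-playback instruction over the control channel."""
    msg = {"type": "control", "action": "pause"}     # illustrative message format
    control_sock.sendall(json.dumps(msg).encode("utf-8"))

def handle_control(raw: bytes, player) -> None:
    """Step 2503 on the speaker side: pause the accompaniment on request."""
    msg = json.loads(raw.decode("utf-8"))
    if msg.get("action") == "pause":
        player.pause()   # 'player' is an assumed accompaniment-playback object
```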
  • An embodiment of the present application further provides a method for synchronizing multimedia content.
  • the method is implemented by a third electronic device, corresponding to the scenario of the chorus mode in the foregoing embodiment.
  • This method can implement synchronization of text information between multiple electronic devices that have joined the chorus mode.
  • The method includes the following steps.
  • Step 2601a: A network connection is established between the first electronic device and the third electronic device; the specific connection method is the same as described above.
  • Step 2602a: The first electronic device sends an instruction to play text information to the third electronic device.
  • the instruction includes progress information of text information currently played by the first electronic device.
  • Step 2603a: The third electronic device adjusts the text information of the third electronic device according to the network delay between the first electronic device and the third electronic device and the progress information, so that the text information of the third electronic device is played in synchronization with the text information of the first electronic device.
  • For example, after the electronic device 2 joins the K song group, and after the electronic device 1 detects that the user acts on the K song control, the electronic device 1 sends an instruction to the electronic device 2 to play text information. After receiving the instruction, the electronic device 2 adjusts the text information according to the network delay. Eventually, the text information displayed by the electronic device 2 is consistent with the text information displayed by the electronic device 1, as shown in FIG. 9b and FIG. 9c.
  • Optionally, the third electronic device may also adjust its text information according to the network delay, the clock difference, and the progress information.
  • Manner 1: The third electronic device records the first sound information of the user and sends the first sound information to the first electronic device; the first electronic device then sends the second sound information recorded by the first electronic device, together with the first sound information, to the second electronic device, so that the second electronic device plays the second sound information and the first sound information.
  • Manner 2: The third electronic device records the first sound information of the user and sends the first sound information to the second electronic device, and the first electronic device sends the second sound information recorded by the first electronic device to the second electronic device, so that the second electronic device plays the second sound information and the first sound information.
  • The embodiment of the present application also provides a schematic diagram of the interaction between the mobile phones and the smart speaker shown in FIG. 26b, illustrating the specific implementation of the chorus mode in the above embodiment. Since the specific implementation of the single-player mode has been described in detail in the embodiment shown in FIG. 24, this embodiment focuses on the process of a secondary mobile phone joining the K song.
  • In step 2601b, the main mobile phone generates a two-dimensional code before or during the K song (a sketch of a possible code payload follows these steps).
  • the two-dimensional code may include information such as the IP address of the main phone and the IP address of the smart speaker.
  • Step 2602b the secondary mobile phone scans the two-dimensional code to obtain the IP address of the main mobile phone and the IP address of the smart speaker.
  • Step 2603b the secondary mobile phone initiates a connection to the primary mobile phone, and establishes a control channel for lyrics transmission, synchronization processing, and playback control.
  • step 2604 the secondary mobile phone initiates a connection to the smart speaker, and establishes a data channel for singing voice transmission.
  • Step 2605 The primary mobile phone sends the lyrics file to the secondary mobile phone.
  • step 2606 synchronization processing is performed between the secondary mobile phone and the primary mobile phone.
  • the synchronization processing method is similar to the synchronization processing mechanism between the primary mobile phone and the smart speaker in the foregoing embodiment, and details are not described herein again.
  • In step 2607, the secondary phone starts scrolling and displaying the lyrics according to the synchronization processing result, and the progress of the lyrics reminder on the secondary phone is consistent with that on the primary phone.
  • For example, if the lyrics of the primary phone have scrolled to "me and you", the secondary phone also scrolls to this position accordingly.
  • step 2608 the secondary mobile phone performs recording and sends the sound to the sound processing module of the smart speaker.
  • the processing method of the sound processing module is similar to the foregoing embodiment, and is not described repeatedly.
  • Step 2609 The sound processing module of the smart speaker sends the processed sound to the sound fusion module.
  • Step 2610 The sound fusion module of the smart speaker fuses the sound of the primary mobile phone, the sound of the secondary mobile phone, and the accompaniment.
  • Step 2611 The fused sound of the sound fusion module of the smart speaker is sent to the playback module.
  • step 2612 the playback module of the smart speaker finally plays and processes the sound of the primary mobile phone, the sound of the secondary mobile phone, and the accompaniment.
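Below is a minimal sketch of the two-dimensional-code payload and join handshake in steps 2601b through 2604, assuming the code simply encodes the two IP addresses as JSON; the payload fields, port numbers, and function names are illustrative assumptions.

```python
import json
import socket

def make_qr_payload(main_phone_ip: str, speaker_ip: str) -> str:
    """Step 2601b: payload the main phone encodes into the two-dimensional code."""
    return json.dumps({"main_phone_ip": main_phone_ip, "speaker_ip": speaker_ip})

def join_k_song_group(qr_payload: str,
                      control_port: int = 9000,   # assumed port numbers
                      data_port: int = 9001):
    """Steps 2602b-2604: the secondary phone parses the code and connects."""
    info = json.loads(qr_payload)
    # Control channel to the main phone: lyrics transmission, synchronization
    # processing, and playback control.
    control = socket.create_connection((info["main_phone_ip"], control_port))
    # Data channel to the smart speaker: singing-voice transmission.
    data = socket.create_connection((info["speaker_ip"], data_port))
    return control, data
```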
  • the embodiment of the present application also provides an embodiment.
  • This embodiment discloses that the main mobile phone can control the running states of the smart speaker and the secondary mobile phone, so that the smart speaker and the secondary mobile phone pause, resume, or stop playing the accompaniment; for the specific implementation, see FIG. 27.
  • Step 2701 During the K song process, when the main mobile phone detects that the user acts on the touch control for pausing, the main mobile phone stops scrolling the lyrics.
  • Step 2702 The main mobile phone sends a control instruction to pause the K song to the smart speaker.
  • Step 2703 After receiving the control instruction, the smart speaker executes the action of pausing K song.
  • step 2704 the primary mobile phone sends a control instruction to pause the K song to the secondary mobile phone.
  • Step 2705 After receiving the control instruction, the secondary mobile phone executes the action of pausing K song.
  • In general, when the main phone detects that the user acts on controls such as pause, resume, and stop, it sends control instructions to the smart speaker and the secondary phone; after receiving the control instructions, the smart speaker and the secondary phone execute the corresponding pause, resume, or stop actions.
  • the embodiment of the present application also provides an embodiment.
  • This embodiment discloses that the secondary mobile phone can control the running states of itself and the smart speaker, but cannot control the running state of the main mobile phone; for the specific implementation, see FIG. 28.
  • Step 2801 During the K song process, when the secondary mobile phone detects that the user acts on the touch control for pausing, the secondary mobile phone stops scrolling the lyrics.
  • Step 2802 the secondary mobile phone sends a control instruction to pause the K song to the smart speaker.
  • Step 2803 After receiving the control instruction, the smart speaker executes the action of pausing the K song of the secondary mobile phone.
  • this embodiment of the present application is directed to scenario 1, and a possible implementation process of the scenario is systematically described with reference to FIG. 29.
  • Step 2901 The mobile phone side establishes a network connection with the smart speaker side through a router, and the mobile phone side and the smart speaker side establish a control channel and a data channel.
  • step 2902 the APP control module on the mobile phone side receives the user's touch, starts the K song process, and triggers the acquisition of music resources from the cloud.
  • Step 2903 The processor on the smart speaker side also obtains music resources from the cloud, and then notifies the playback module to play the accompaniment.
  • step 2904 the MIC recording module on the mobile phone side records the voice of the mobile phone user, and then the mobile phone sends the user's voice to the processor on the smart speaker side through the data channel.
  • step 2905 the processor on the smart speaker side processes the user's voice and sends it to the playback module for playback.
  • The embodiment of the present application utilizes the interconnection between multiple electronic devices and applies it to the K song scene, so that the user can sing K song anytime and anywhere, reducing the time cost of K song and enriching the entertainment experience.
  • The multimedia content synchronization method provided in the embodiment of the present application can also be applied to synchronization between audio and video; for example, the electronic device 1 plays the MV while the smart speaker plays the music accompaniment. The specific synchronization implementation process is consistent with the above method and will not be repeated here.
  • an embodiment of the present application discloses a first electronic device.
  • The first electronic device is used to implement the methods described in the foregoing method embodiments, and includes: a detection unit 3001, a display unit 3002, a sending unit 3003, and a processing unit 3004.
  • the detection unit 3001 is configured to support the first electronic device to execute step 1802 in FIG. 18
  • the display unit 3002 is configured to support the first electronic device to execute step 1802 in FIG. 18
  • the sending unit 3003 is configured to support the electronic device to execute step 1802 in FIG. 18
  • the processing unit 3004 is configured to support the electronic device to perform step 1804 in FIG. 18.
  • all relevant content of each step involved in the above method embodiment can be referred to the functional description of the corresponding functional module, which will not be repeated here.
  • an embodiment of the present application discloses a first electronic device.
  • The electronic device may include a touch screen 3101, where the touch screen 3101 includes a touch-sensitive surface 3106 and a display screen 3107; one or more processors 3102; a memory 3103; one or more application programs (not shown); and one or more computer programs 3104, where the above components can be connected through one or more communication buses 3105.
  • the one or more computer programs 3104 are stored in the memory 3103 and configured to be executed by the one or more processors 3102.
  • The one or more computer programs 3104 include instructions, and the instructions can be used to execute the steps in FIG. 18 and the corresponding embodiments.
  • an embodiment of the present application discloses a third electronic device.
  • The third electronic device is used to implement the methods described in the foregoing method embodiments, and includes: a connection unit 3201, a receiving unit 3202, and a processing unit 3203.
  • the connection unit 3201 is configured to support the third electronic device to execute step 2601a in FIG. 26a;
  • the receiving unit 3202 is configured to support the third electronic device to execute step 2602a in FIG. 26a;
  • the processing unit 3203 is configured to support the third electronic device to execute step 2603a in FIG. 26a.
  • all relevant content of each step involved in the above method embodiment can be referred to the functional description of the corresponding functional module, which will not be repeated here.
  • an embodiment of the present application discloses a third electronic device.
  • The electronic device may include a touch screen 3301, where the touch screen 3301 includes a touch-sensitive surface 3306 and a display screen 3307; one or more processors 3302; a memory 3303; one or more application programs (not shown); and one or more computer programs 3304, where the above components can be connected through one or more communication buses 3305.
  • the one or more computer programs 3304 are stored in the memory 3303 and configured to be executed by the one or more processors 3302.
  • The one or more computer programs 3304 include instructions, and the instructions may be used to execute the steps in FIG. 26a and the corresponding embodiment.
  • An embodiment of the present application further provides a computer storage medium.
  • the computer storage medium stores computer instructions.
  • When the computer instructions are run on an electronic device, the electronic device is caused to execute the foregoing related method steps to implement the multimedia content synchronization method in the foregoing embodiments.
  • the embodiment of the present application further provides a computer program product, and when the computer program product runs on a computer, the computer is caused to execute the foregoing related steps to implement the multimedia content synchronization method in the foregoing embodiment.
  • an embodiment of the present application further provides a device.
  • the device may specifically be a chip, a component, or a module.
  • The device may include a processor and a memory that are connected, where the memory is used to store computer-executable instructions.
  • When the device runs, the processor may execute the computer-executable instructions stored in the memory, so that the chip executes the multimedia content synchronization method in the foregoing method embodiments.
  • The electronic devices, computer storage media, computer program products, or chips provided in the embodiments of the present application are all used to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which are not repeated here.
  • the disclosed apparatus and method may be implemented in other ways.
  • The device embodiments described above are only schematic; the division into modules or units is only a division by logical function, and in actual implementation there may be other division manners.
  • For example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • The displayed or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may be one physical unit or multiple physical units, that is, they may be located in one place, or may be distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • Based on this understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the existing technology, or all or part of the technical solutions, may be embodied in the form of a software product that is stored in a storage medium.
  • The software product includes a number of instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods in the embodiments of the present application.
  • The foregoing storage medium includes: a U disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)

Abstract

The present invention relates to a multimedia content synchronization method and an electronic device. The method comprises the following steps: a first electronic device establishes a network connection with a second electronic device; when the first electronic device detects a first operation of a user, in response to the first operation, the first electronic device displays a first interface, the first interface comprising text information associated with an audio file; in response to the first operation, the first electronic device further transmits an instruction to play the audio file to the second electronic device; and the first electronic device adjusts the text information according to a network delay between the first electronic device and the second electronic device, such that the text information and the audio file are played synchronously. By means of the method, the lyrics played by the first electronic device can be played synchronously with the accompaniment played by the second electronic device, thereby providing an improved user experience.
PCT/CN2018/101202 2018-08-17 2018-08-17 Procédé de synchronisation de contenu multimédia et dispositif électronique WO2020034227A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/101202 WO2020034227A1 (fr) 2018-08-17 2018-08-17 Multimedia content synchronization method and electronic device
CN201880072195.7A CN111345010B (zh) 2018-08-17 2018-08-17 Multimedia content synchronization method, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/101202 WO2020034227A1 (fr) 2018-08-17 2018-08-17 Multimedia content synchronization method and electronic device

Publications (1)

Publication Number Publication Date
WO2020034227A1 (fr) 2020-02-20

Family

ID=69524555

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/101202 WO2020034227A1 (fr) 2018-08-17 2018-08-17 Multimedia content synchronization method and electronic device

Country Status (2)

Country Link
CN (1) CN111345010B (fr)
WO (1) WO2020034227A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114546820B (zh) * 2020-11-24 2022-12-30 华为技术有限公司 Application debugging method and electronic device
CN114827687B (zh) * 2021-01-29 2024-04-09 华为技术有限公司 Communication method, mobile device, electronic device, and computer-readable storage medium
CN113596547A (zh) * 2021-07-29 2021-11-02 海信电子科技(武汉)有限公司 Display device, synchronized playback method, and system
CN116541589A (zh) * 2022-01-26 2023-08-04 花瓣云科技有限公司 Playback record display method and related device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101132222B (zh) * 2006-08-22 2011-02-16 上海贝尔阿尔卡特股份有限公司 Gateway device, communication network, and synchronization method
JP5454802B2 (ja) * 2011-07-29 2014-03-26 ブラザー工業株式会社 Karaoke apparatus
CN102982832B (zh) * 2012-11-24 2015-05-27 安徽科大讯飞信息科技股份有限公司 Method for synchronizing online karaoke accompaniment, vocals, and subtitles
CN103718528B (zh) * 2013-08-30 2016-09-28 华为技术有限公司 Method for cooperative playback of multimedia files by multiple terminals, and related apparatus and system
CN104091423B (zh) * 2014-03-12 2019-02-12 腾讯科技(深圳)有限公司 Signal transmission method and home song-on-demand system
CN104184894A (zh) * 2014-08-21 2014-12-03 深圳市比巴科技有限公司 Karaoke implementation method and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050076298A (ko) * 2004-01-20 2005-07-26 엘지전자 주식회사 Apparatus for synchronizing video and audio of a karaoke accompaniment machine
CN101174409A (zh) * 2006-10-24 2008-05-07 诺基亚公司 System, method, and device for providing a karaoke system with multiple lyrics
CN101237473A (zh) * 2008-02-27 2008-08-06 中兴通讯股份有限公司 Method for dynamically playing lyrics, and mobile terminal and device implementing the method
CN103337240A (zh) * 2013-06-24 2013-10-02 华为技术有限公司 Method, terminal, server, and system for processing voice data
CN103886881A (zh) * 2014-04-14 2014-06-25 福建星网视易信息系统有限公司 Method and system for extending a song-request library
CN105808710A (zh) * 2016-03-05 2016-07-27 上海斐讯数据通信技术有限公司 Remote karaoke terminal, remote karaoke system, and remote karaoke method

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111294626A (zh) * 2020-01-21 2020-06-16 腾讯音乐娱乐科技(深圳)有限公司 Lyrics display method and apparatus
CN113345478B (zh) * 2020-03-02 2022-07-05 海信视像科技股份有限公司 Player time acquisition method, device, storage medium, and player
CN113345478A (zh) * 2020-03-02 2021-09-03 海信视像科技股份有限公司 Player time acquisition method, device, storage medium, and player
CN112037741A (zh) * 2020-08-27 2020-12-04 深圳创维-Rgb电子有限公司 Karaoke method, apparatus, device, and storage medium
CN114531413B (zh) * 2020-10-30 2023-10-10 华为技术有限公司 Electronic device, mail synchronization method therefor, and readable medium
CN114531413A (zh) * 2020-10-30 2022-05-24 华为技术有限公司 Electronic device, mail synchronization method therefor, and readable medium
CN112286618A (zh) * 2020-11-16 2021-01-29 Oppo广东移动通信有限公司 Device collaboration method, apparatus, system, electronic device, and storage medium
CN112492338A (zh) * 2020-11-27 2021-03-12 腾讯音乐娱乐科技(深圳)有限公司 Online karaoke room implementation method, electronic device, and computer-readable storage medium
CN112492338B (zh) * 2020-11-27 2023-10-13 腾讯音乐娱乐科技(深圳)有限公司 Online karaoke room implementation method, electronic device, and computer-readable storage medium
CN114666441B (zh) * 2020-12-22 2024-02-09 华为技术有限公司 Method for invoking capabilities of other devices, electronic device, system, and storage medium
CN114666441A (zh) * 2020-12-22 2022-06-24 华为技术有限公司 Method for invoking capabilities of other devices, electronic device, and system
CN114827639A (zh) * 2021-01-28 2022-07-29 华为技术有限公司 Distributed implementation method for multiple applications, readable medium, and electronic device
CN114827639B (zh) * 2021-01-28 2023-07-11 华为技术有限公司 Distributed implementation method for multiple applications, readable medium, and electronic device
CN112967705B (zh) * 2021-02-24 2023-11-28 腾讯音乐娱乐科技(深圳)有限公司 Mixed song generation method, apparatus, device, and storage medium
CN112967705A (zh) * 2021-02-24 2021-06-15 腾讯音乐娱乐科技(深圳)有限公司 Mixed song generation method, apparatus, device, and storage medium
CN113094541A (zh) * 2021-04-16 2021-07-09 网易(杭州)网络有限公司 Audio playback method, electronic device, and storage medium
CN113225768B (zh) * 2021-04-29 2023-02-10 北京凯视达信息技术有限公司 Synchronization method for 4G/5G transmission networks
CN113225768A (zh) * 2021-04-29 2021-08-06 北京凯视达信息技术有限公司 Synchronization method for 4G/5G transmission networks
WO2023155583A1 (fr) * 2022-02-17 2023-08-24 华为技术有限公司 Cross-device application management method, electronic device, and system
WO2023217003A1 (fr) * 2022-05-07 2023-11-16 北京字跳网络技术有限公司 Audio processing method and apparatus, device, and storage medium
CN116668763A (zh) * 2022-11-10 2023-08-29 荣耀终端有限公司 Screen recording method and apparatus
CN116668763B (zh) * 2022-11-10 2024-04-19 荣耀终端有限公司 Screen recording method and apparatus

Also Published As

Publication number Publication date
CN111345010A (zh) 2020-06-26
CN111345010B (zh) 2021-12-28

Similar Documents

Publication Publication Date Title
WO2020034227A1 (fr) Multimedia content synchronization method and electronic device
CN110381197B (zh) Method, apparatus, and system for processing audio data in many-to-one screen projection
WO2021052263A1 (fr) Voice assistant display method and device
CN113542839B (zh) Screen projection method for an electronic device, and electronic device
WO2020078337A1 (fr) Translation method and electronic device
CN113497909B (zh) Device interaction method and electronic device
CN113691842B (zh) Cross-device content projection method and electronic device
CN111628916B (zh) Method for collaboration between a smart speaker and an electronic device, and electronic device
WO2019072178A1 (fr) Notification processing method and electronic device
WO2020253754A1 (fr) Multi-terminal multimedia data communication method and system
WO2020224447A1 (fr) Method and system for adding a smart home device to contacts
CN113961157B (zh) Display interaction system, display method, and device
WO2021052139A1 (fr) Gesture input method and electronic device
WO2020056684A1 (fr) Method and device for implementing automatic interpretation using multiple TWS earphones connected in relay mode
WO2021031865A1 (fr) Calling method and apparatus
CN112543447A (zh) Contact-list-based device discovery method, audio/video communication method, and electronic device
CN114827581A (zh) Synchronization delay measurement method, content synchronization method, terminal device, and storage medium
WO2021104122A1 (fr) Method and apparatus for answering a call request, and electronic device
CN114115770A (zh) Display control method and related apparatus
WO2020034075A1 (fr) Photo sharing method and electronic device
WO2021052388A1 (fr) Video communication method and video communication apparatus
WO2023045597A1 (fr) Method and apparatus for controlling transfer of a large-screen service between devices
WO2023093778A1 (fr) Screenshot capture method and related apparatus
WO2022052767A1 (fr) Device control method, electronic device, and system
WO2022105786A1 (fr) Video call background switching method and first terminal device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18930327

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18930327

Country of ref document: EP

Kind code of ref document: A1