WO2020253754A1 - Multi-terminal multimedia data communication method and system - Google Patents

Multi-terminal multimedia data communication method and system Download PDF

Info

Publication number
WO2020253754A1
WO2020253754A1 · PCT/CN2020/096679 (CN2020096679W)
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
slave
multimedia data
master
slave electronic
Prior art date
Application number
PCT/CN2020/096679
Other languages
French (fr)
Chinese (zh)
Inventor
杨枭 (Yang Xiao)
黎椿键 (Li Chunjian)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to JP2021566505A priority Critical patent/JP7416519B2/en
Publication of WO2020253754A1 publication Critical patent/WO2020253754A1/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/36Accompaniment arrangements
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/36Accompaniment arrangements
    • G10H1/361Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/02Details
    • H04J3/06Synchronising arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/02Details
    • H04J3/06Synchronising arrangements
    • H04J3/0635Clock or time synchronisation in a network
    • H04J3/0638Clock or time synchronisation among nodes; Internode synchronisation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L2021/02082Noise filtering the noise being echo, reverberation of the speech

Definitions

  • the embodiments of the present application relate to the field of multimedia technology, and in particular, to a method and system for multi-terminal multimedia data communication.
  • multi-device karaoke (K song) In current multi-terminal collaborative uplink/downlink multimedia applications, taking multi-device karaoke as an example, a session generally includes multiple electronic devices (such as mobile phones), one of which acts as the host while the others act as slaves. After the host creates the karaoke room and turns on the KTV function, the other slave devices can join the room. The room owner can order songs online; the songs are like the MVs common in KTV karaoke rooms, including pictures, subtitles, and accompaniment. An audience member can apply for the microphone through a slave device and, once approved by the host, order songs online and join the microphone queue. When the turn of that audience member's song arrives, the audience member becomes the singer.
  • the audience member can adjust the accompaniment and vocal volume through the slave device, but the song control authority (pausing or skipping songs) still belongs to the host.
  • the "microphone" is passed to different audience members in sequence, so there is always only one singer at a time, which cannot satisfy the real-life scenario of multi-person KTV in which several people sing karaoke simultaneously.
  • the embodiments of the present application provide a multi-terminal multimedia data communication method and system, which enable multiple terminals to collaborate in establishing new applications that provide new experiences, and also enable convenient interconnection and sharing of multimedia content between devices, thereby making full use of each device's strengths.
  • the present invention also facilitates building an ecosystem and application system based on multi-device coordination.
  • an embodiment of the present application provides a multi-terminal multimedia data communication method, which is applied to multiple electronic devices, and the multiple electronic devices include a master electronic device, a first slave electronic device, and a second slave electronic device.
  • the method includes: the master electronic device establishes connections with the first slave electronic device and the second slave electronic device respectively.
  • the first slave electronic device receives the first play instruction, and the first slave electronic device sends the first play instruction to the master electronic device.
  • the master electronic device plays the first multimedia data, and at the same time sends at least a part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device synchronously play the first multimedia data.
  • the first slave electronic device receives the first human voice, while the second slave electronic device receives the second human voice.
  • the master electronic device receives the first human voice sent by the first slave electronic device and receives the second human voice sent by the second slave electronic device.
  • the master electronic device mixes the first human voice, the second human voice, and the first multimedia data to generate second multimedia data, and plays it.
  • the master electronic device can establish connections with multiple slave electronic devices to realize multimedia data interaction between the master electronic device and the multiple slave electronic devices.
  • the user can send the first play instruction to the master electronic device through the first slave electronic device.
  • the main electronic device responds to the first play instruction to play the first multimedia data.
  • the master electronic device can send at least a part of the first multimedia data to multiple slave electronic devices, and use a synchronization algorithm to monitor data synchronization between the master electronic device and the multiple slave electronic devices in real time, so that the master electronic device and the multiple slave electronic devices synchronously play the first multimedia data.
  • multiple slave electronic devices can simultaneously receive the human voice of their respective users, for example, the first slave electronic device receives the first human voice, while the second slave electronic device receives the second human voice.
  • multiple slave electronic devices send the received human voices of their respective users to the master electronic device, and the master electronic device mixes the received human voices with the first multimedia data, performs anti-howling and sound-mixing processing, and generates and plays the second multimedia data.
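The mixing step described above can be sketched as follows. This is a minimal illustration, assuming 16-bit PCM samples represented as Python integer lists and a naive equal-gain sum with clipping; the patent does not specify the mixing algorithm, and the function and variable names are hypothetical (anti-howling processing is omitted).

```python
def mix_tracks(tracks, gain=None):
    """Mix equal-rate 16-bit PCM tracks (lists of int samples) into one.

    Hypothetical sketch of the master device's mixing step: the vocal
    tracks received from the slave devices and the local accompaniment
    are summed with a per-track gain and clipped to the 16-bit range.
    """
    if gain is None:
        gain = 1.0 / len(tracks)  # naive equal gain to avoid overflow
    length = min(len(t) for t in tracks)  # mix up to the shortest track
    mixed = []
    for i in range(length):
        sample = int(sum(t[i] for t in tracks) * gain)
        mixed.append(max(-32768, min(32767, sample)))
    return mixed

# Two vocal tracks plus the accompaniment, three samples each.
vocal_1 = [1000, -2000, 3000]
vocal_2 = [500, 500, 500]
accompaniment = [2000, 2000, 2000]
print(mix_tracks([vocal_1, vocal_2, accompaniment]))  # [1166, 166, 1833]
```

A real implementation would operate on decoded audio buffers and apply echo/howling suppression before the sum, but the sum-scale-clip structure is the core of the mix.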
  • the master electronic device plays the first multimedia data synchronously with the first slave electronic device and the second slave electronic device, specifically: the master electronic device determines the first clock deviation between the master electronic device and the first slave electronic device; the master electronic device determines the second clock deviation between the master electronic device and the second slave electronic device; the master electronic device determines, according to the first clock deviation, the first initial playback moment at which the first slave electronic device plays the first multimedia data, where the first initial playback moment indicates the starting moment at which the first slave electronic device plays the first multimedia data; the master electronic device determines, according to the second clock deviation, the second initial playback moment at which the second slave electronic device plays the first multimedia data, where the second initial playback moment indicates the starting moment at which the second slave electronic device plays the first multimedia data.
  • the master electronic device uses the first clock deviation between the master electronic device and the first slave electronic device to determine the first initial playback moment at which the first slave electronic device plays the first multimedia data, where the first initial playback moment indicates the starting moment at which the first slave electronic device plays the first multimedia data.
  • the master electronic device uses the second clock deviation between the master electronic device and the second slave electronic device to determine the second initial playback moment at which the second slave electronic device plays the first multimedia data, where the second initial playback moment indicates the starting moment at which the second slave electronic device plays the first multimedia data.
  • the master electronic device can adjust the playback of the first multimedia data by the first slave electronic device and the second slave electronic device through its own playback start time, the first clock deviation, and the second clock deviation, so that the master electronic device, the first slave electronic device, and the second slave electronic device synchronously play the first multimedia data.
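The clock-deviation approach above can be sketched as a classic NTP-style timestamp exchange. This is an assumption for illustration: the patent does not name the offset-estimation algorithm, and all names and timestamp values here are hypothetical (millisecond integer clocks are used for exactness).

```python
def clock_offset_ms(t1, t2, t3, t4):
    """NTP-style estimate (in ms) of the slave clock's offset from the
    master clock, from a probe/reply timestamp exchange:
      t1 master sends probe    (master clock)
      t2 slave receives probe  (slave clock)
      t3 slave sends reply     (slave clock)
      t4 master receives reply (master clock)
    """
    return ((t2 - t1) + (t3 - t4)) / 2

def slave_start_ms(master_start, offset):
    """Translate the master's playback start moment into the slave's
    local clock so both devices begin the same sample together."""
    return master_start + offset

# Master clock reads 100000 ms at send; the slave clock runs 5000 ms
# ahead and each one-way network delay is 100 ms.
offset = clock_offset_ms(100000, 105100, 105200, 100300)
print(offset)                          # 5000.0
print(slave_start_ms(200000, offset))  # 205000.0
```

Both variants in the text reduce to this arithmetic: either the master computes the slave's start moment and sends it, or the master sends its own start moment plus the deviation and the slave applies the addition locally.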
  • the clock deviation can also be determined by the slave electronic device.
  • the master electronic device sends the starting time of playing the first multimedia data to the first slave electronic device and the second slave electronic device.
  • the first slave electronic device determines the first clock deviation between the master electronic device and the first slave electronic device, and determines, according to the start time at which the master electronic device plays the first multimedia data and the first clock deviation, the start time at which the first slave electronic device plays the first multimedia data.
  • the second slave electronic device determines the second clock deviation between the master electronic device and the second slave electronic device, and determines, according to the start time at which the master electronic device plays the first multimedia data and the second clock deviation, the start time at which the second slave electronic device plays the first multimedia data.
  • alternatively, the master electronic device determines the clock deviations and sends its own playback start time together with the corresponding clock deviation to each slave electronic device, and each slave electronic device determines its own playback start time.
  • the master electronic device determines the first clock deviation between the master electronic device and the first slave electronic device; the master electronic device determines the second clock deviation between the master electronic device and the second slave electronic device; the master electronic device sends the start time at which it plays the first multimedia data, together with the first clock deviation, to the first slave electronic device, and the first slave electronic device determines its own start time for playing the first multimedia data from that start time and the first clock deviation; the master electronic device sends the start time at which it plays the first multimedia data, together with the second clock deviation, to the second slave electronic device, and the second slave electronic device determines its own start time for playing the first multimedia data from that start time and the second clock deviation.
  • the master electronic device establishes connections with the first slave electronic device and the second slave electronic device respectively, including: when the master electronic device and the first slave electronic device are in the same wireless local area network, the master electronic device displays a WiFi connection identifier, and the first slave electronic device establishes a connection with the master electronic device by recognizing the WiFi connection identifier.
  • when the master electronic device and the first slave electronic device are in the same wireless local area network, that is, in the same location, the master electronic device and the first slave electronic device establish a connection through WiFi.
  • the master electronic device displays a WiFi connection identifier that carries WiFi networking information, including the WiFi name, the WiFi password, and the address and port number of the master electronic device.
  • the first slave electronic device can establish a connection with the master electronic device by identifying the WiFi identifier.
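The networking information that the identifier carries can be sketched as a small serialized payload. This is purely illustrative: the patent does not define an encoding, and the JSON layout and field names below are hypothetical (in practice such a payload could be rendered as a QR code for the slave device to scan).

```python
import json

def build_wifi_connection_id(ssid, password, host_addr, port):
    """Master side: serialize the networking information the WiFi
    connection identifier carries (WiFi name, WiFi password, and the
    master device's address and port number)."""
    return json.dumps({"ssid": ssid, "password": password,
                       "host": host_addr, "port": port})

def parse_wifi_connection_id(payload):
    """Slave side: recover the networking information from the scanned
    identifier, then join the WLAN and connect to the master."""
    return json.loads(payload)

payload = build_wifi_connection_id("KTV-Room", "sing123", "192.168.1.10", 9000)
info = parse_wifi_connection_id(payload)
print(info["host"], info["port"])  # 192.168.1.10 9000
```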
  • the master electronic device establishes connections with the first slave electronic device and the second slave electronic device respectively, including: when the master electronic device and the second slave electronic device are not in the same wireless local area network, the master electronic device sends networking information to the second slave electronic device, and the second slave electronic device establishes a connection with the master electronic device by parsing the networking information.
  • the master electronic device and the second slave electronic device can be in different geographic locations, in which case they are not in the same local area network; the master electronic device can then send networking information to the second slave electronic device, and the second slave electronic device establishes a connection with the master electronic device by parsing the networking information.
  • the networking information may include: the IP address and port number of the main electronic device.
  • the master electronic device and the second slave electronic device can establish a connection with the data communication network through WiFi.
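For the not-on-the-same-LAN case, the slave's connection step reduces to opening a connection to the IP address and port from the networking information. The sketch below assumes TCP, which the patent does not specify; the loopback "master" here only stands in for the remote device for demonstration.

```python
import socket
import threading

def connect_to_master(ip, port, timeout=5.0):
    """Slave side of the WAN case: open a TCP connection to the master
    using the IP address and port number carried in the networking
    information (transport protocol is an assumption)."""
    return socket.create_connection((ip, port), timeout=timeout)

# Loopback listener standing in for the remote master device.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
ip, port = server.getsockname()
threading.Thread(target=server.accept, daemon=True).start()

conn = connect_to_master(ip, port)
print("connected:", conn.getpeername()[1] == port)  # connected: True
conn.close()
server.close()
```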
  • the master electronic device mixes the first human voice, the second human voice, and the first multimedia data to generate and play the second multimedia data.
  • the method further includes: the master electronic device sends the second multimedia data to the second slave electronic device; the master electronic device and the second slave electronic device play the second multimedia data synchronously.
  • since the second slave electronic device and the master electronic device are not in the same location, after the master electronic device receives the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, it mixes the first human voice, the second human voice, and the first multimedia data. After generating the second multimedia data, in order to share it with the second slave electronic device in the other location, the master electronic device sends the second multimedia data to the second slave electronic device so that the second slave electronic device can play it. In this process, the master electronic device needs to monitor synchronization between the master electronic device and the second slave electronic device in real time, so that the two devices play the second multimedia data synchronously.
  • At least a part of the first multimedia data includes audio, video, or lyrics of the first multimedia data.
  • At least a part of the first multimedia data may be one of: audio; video; lyrics; audio and video; audio and lyrics; video and lyrics; or audio, video, and lyrics.
  • the method further includes: the main electronic device receives a second play instruction input by the user.
  • the master electronic device plays the third multimedia data, and at the same time sends at least a part of the third multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device synchronously play the third multimedia data.
  • the first slave electronic device receives the first human voice, while the second slave electronic device receives the second human voice.
  • the master electronic device receives the first human voice sent by the first slave electronic device and receives the second human voice sent by the second slave electronic device.
  • the master electronic device mixes the first human voice, the second human voice, and the third multimedia data to generate fourth multimedia data, and plays it.
  • the user can directly issue a play instruction to the main electronic device, and the main electronic device responds to the play instruction to play the third multimedia data.
  • the third multimedia data may be the same as the first multimedia data or different from the first multimedia data.
  • the method further includes: the first slave electronic device receives the first human voice and the second slave electronic device receives the second human voice, while the master electronic device receives a third human voice.
  • the master electronic device receives the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device.
  • the master electronic device mixes the first human voice, the second human voice, the third human voice, and the fourth multimedia data, and generates and plays a fifth multimedia file.
  • the master electronic device can also receive the user's human voice.
  • the first slave electronic device, the second slave electronic device, and the master electronic device can simultaneously receive the human voices of their respective users.
  • the first slave electronic device receives the first human voice.
  • the second slave electronic device receives the second human voice, while the master electronic device receives the third human voice.
  • the first slave electronic device sends the received first human voice to the master electronic device, and the second slave electronic device sends the received second human voice to the master electronic device.
  • the main electronic device mixes the first voice, the second voice, the third voice, and the fourth multimedia data to generate a fifth multimedia file and play it.
  • embodiments of the present application provide a multi-terminal multimedia data communication system, including a master electronic device, a first slave electronic device, and a second slave electronic device: the master electronic device is used to establish connections with the first slave electronic device and the second slave electronic device respectively.
  • the first slave electronic device is used for receiving the first play instruction, and the first slave electronic device is also used for sending the first play instruction to the master electronic device.
  • the master electronic device is also used to play the first multimedia data in response to the first play instruction, and at the same time send at least a part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously.
  • the first slave electronic device is also used to receive the first human voice, while the second slave electronic device is used to receive the second human voice.
  • the master electronic device is used to receive the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device.
  • the master electronic device is also used to mix the first human voice, the second human voice, and the first multimedia data, and to generate and play second multimedia data.
  • the master electronic device can establish connections with multiple slave electronic devices to realize multimedia data interaction between the master electronic device and the multiple slave electronic devices.
  • the user can send the first play instruction to the master electronic device through the first slave electronic device.
  • the main electronic device responds to the first play instruction to play the first multimedia data.
  • the master electronic device can send at least a part of the first multimedia data to multiple slave electronic devices, and use a synchronization algorithm to monitor data synchronization between the master electronic device and the multiple slave electronic devices in real time, so that the master electronic device and the multiple slave electronic devices synchronously play the first multimedia data.
  • multiple slave electronic devices can simultaneously receive the human voice of their respective users, for example, the first slave electronic device receives the first human voice, while the second slave electronic device receives the second human voice.
  • multiple slave electronic devices send the received human voices of their respective users to the master electronic device, and the master electronic device mixes the received human voices with the first multimedia data, performs anti-howling and sound-mixing processing, and generates and plays the second multimedia data.
  • the master electronic device plays the first multimedia data synchronously with the first slave electronic device and the second slave electronic device, specifically: the master electronic device is used to determine the first clock deviation between the master electronic device and the first slave electronic device; the master electronic device is also used to determine the second clock deviation between the master electronic device and the second slave electronic device; the master electronic device is also used to determine, according to the first clock deviation, the first initial playback moment at which the first slave electronic device plays the first multimedia data, where the first initial playback moment indicates the starting moment at which the first slave electronic device plays the first multimedia data; the master electronic device is also used to determine, according to the second clock deviation, the second initial playback moment at which the second slave electronic device plays the first multimedia data, where the second initial playback moment indicates the starting moment at which the second slave electronic device plays the first multimedia data.
  • the master electronic device uses the first clock deviation between the master electronic device and the first slave electronic device to determine the first initial playback moment at which the first slave electronic device plays the first multimedia data, where the first initial playback moment indicates the starting moment at which the first slave electronic device plays the first multimedia data.
  • the master electronic device uses the second clock deviation between the master electronic device and the second slave electronic device to determine the second initial playback moment at which the second slave electronic device plays the first multimedia data, where the second initial playback moment indicates the starting moment at which the second slave electronic device plays the first multimedia data.
  • the master electronic device can adjust the playback of the first multimedia data by the first slave electronic device and the second slave electronic device through its own playback start time, the first clock deviation, and the second clock deviation, so that the master electronic device, the first slave electronic device, and the second slave electronic device synchronously play the first multimedia data.
  • the clock deviation can also be determined by the slave electronic device.
  • the master electronic device sends the starting time of playing the first multimedia data to the first slave electronic device and the second slave electronic device.
  • the first slave electronic device determines the first clock deviation between the master electronic device and the first slave electronic device, and determines, according to the start time at which the master electronic device plays the first multimedia data and the first clock deviation, the start time at which the first slave electronic device plays the first multimedia data.
  • the second slave electronic device determines the second clock deviation between the master electronic device and the second slave electronic device, and determines, according to the start time at which the master electronic device plays the first multimedia data and the second clock deviation, the start time at which the second slave electronic device plays the first multimedia data.
  • alternatively, the master electronic device determines the clock deviations and sends its own playback start time together with the corresponding clock deviation to each slave electronic device, and each slave electronic device determines its own playback start time.
  • the master electronic device determines the first clock deviation between the master electronic device and the first slave electronic device; the master electronic device determines the second clock deviation between the master electronic device and the second slave electronic device; the master electronic device sends the start time at which it plays the first multimedia data, together with the first clock deviation, to the first slave electronic device, and the first slave electronic device determines its own start time for playing the first multimedia data from that start time and the first clock deviation; the master electronic device sends the start time at which it plays the first multimedia data, together with the second clock deviation, to the second slave electronic device, and the second slave electronic device determines its own start time for playing the first multimedia data from that start time and the second clock deviation.
  • the master electronic device establishes connections with the first slave electronic device and the second slave electronic device respectively, including: when the master electronic device and the first slave electronic device are in the same wireless local area network, the master electronic device is used to display a WiFi connection identifier, and the first slave electronic device is used to establish a connection with the master electronic device by recognizing the WiFi connection identifier.
  • when the master electronic device and the first slave electronic device are in the same wireless local area network, that is, in the same location, the master electronic device and the first slave electronic device establish a connection through WiFi.
  • the master electronic device displays a WiFi connection identifier that carries WiFi networking information, including the WiFi name, the WiFi password, and the address and port number of the master electronic device.
  • the first slave electronic device can establish a connection with the master electronic device by identifying the WiFi identifier.
  • the master electronic device establishes connections with the first slave electronic device and the second slave electronic device respectively, including: when the master electronic device and the second slave electronic device are not in the same wireless local area network, the master electronic device is used to send networking information to the second slave electronic device, and the second slave electronic device is used to establish a connection with the master electronic device by parsing the networking information.
  • the master electronic device and the second slave electronic device can be in different geographic locations, in which case they are not in the same local area network; the master electronic device can then send networking information to the second slave electronic device, and the second slave electronic device establishes a connection with the master electronic device by parsing the networking information.
  • the networking information may include: the IP address and port number of the main electronic device.
  • the master electronic device and the second slave electronic device can establish a connection with the data communication network through WiFi.
  • the master electronic device mixes the first human voice, the second human voice, and the first multimedia data to generate and play the second multimedia data.
  • the system further includes: the master electronic device is also used to send second multimedia data to the second slave electronic device; the master electronic device and the second slave electronic device play the second multimedia data synchronously.
  • since the second slave electronic device and the master electronic device are not in the same location, the master electronic device mixes the first human voice received from the first slave electronic device, the second human voice received from the second slave electronic device, and the first multimedia data. After generating the second multimedia data, in order to share it with the second slave electronic device that is not in the same location, the master electronic device sends the second multimedia data to the second slave electronic device, so that the second multimedia data can be played on the second slave electronic device. In this process, the master electronic device needs to monitor synchronization between the master electronic device and the second slave electronic device in real time, so that the two devices play the second multimedia data synchronously.
  • At least a part of the first multimedia data includes audio, video, or lyrics of the first multimedia data.
  • At least a part of the first multimedia data may be one of: audio; video; lyrics; audio and video; audio and lyrics; video and lyrics; or audio, video, and lyrics.
  • the system further includes: the main electronic device is configured to receive a second play instruction input by the user.
  • the master electronic device is also used to respond to the second play instruction to play the third multimedia data, and at the same time send at least a part of the third multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device plays the third multimedia data synchronously with the first slave electronic device and the second slave electronic device.
  • the first slave electronic device is used for receiving the first human voice, and the second slave electronic device is used for receiving the second human voice.
  • the master electronic device is also used to receive the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device.
  • the master electronic device is also used to mix the first human voice, the second human voice, and the third multimedia data to generate and play the fourth multimedia data.
  • the user can directly issue a play instruction to the main electronic device, and the main electronic device responds to the play instruction to play the third multimedia data.
  • the third multimedia data may be the same as the first multimedia data or different from the first multimedia data.
  • the system further includes: the first slave electronic device is further configured to receive the first human voice, the second slave electronic device is further configured to receive the second human voice, and the master electronic device is further configured to receive the third human voice.
  • the master electronic device is also used to receive the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device.
  • the master electronic device is also used to mix the first human voice, the second human voice, the third human voice, and the fourth multimedia data to generate and play a fifth multimedia file.
  • the master electronic device can also receive the user's human voice.
  • the first slave electronic device, the second slave electronic device, and the master electronic device can simultaneously receive the human voices of their respective users.
  • the first slave electronic device receives the first human voice.
  • the second slave electronic device receives the second human voice, while the master electronic device receives the third human voice.
  • the first slave electronic device sends the received first human voice to the master electronic device, and the second slave electronic device sends the received second human voice to the master electronic device.
  • the main electronic device mixes the first voice, the second voice, the third voice, and the fourth multimedia data to generate a fifth multimedia file and play it.
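The mixing step above can be illustrated with a minimal sketch, assuming the voices and the accompaniment have been decoded to equal-length 16-bit PCM tracks; the representation as plain lists of sample values is an assumption for clarity:

```python
def mix_tracks(tracks):
    """Mix equal-length 16-bit PCM tracks (lists of int samples) by
    summation, clamping each result to the 16-bit signed range so that
    loud passages saturate instead of wrapping around."""
    mixed = []
    for samples in zip(*tracks):
        s = sum(samples)
        mixed.append(max(-32768, min(32767, s)))
    return mixed

# Three user voices plus the fourth multimedia data's accompaniment track.
voice1 = [1000, -2000, 30000]
voice2 = [500, -500, 10000]
voice3 = [200, 100, 5000]
accompaniment = [100, 200, 300]
fifth = mix_tracks([voice1, voice2, voice3, accompaniment])
# the last sample saturates at 32767 instead of overflowing
```

A real mixer would typically also apply per-track gain and resample the inputs to a common rate; only the core summation is shown.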
  • an embodiment of the present application provides a computer-readable storage medium, characterized in that the computer-readable storage medium includes computer instructions, and when the computer instructions are executed on a computer, the computer executes any of the above aspects.
  • FIG. 1 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the application.
  • FIG. 2 is a schematic diagram of the software structure of an electronic device provided by an embodiment of the application.
  • FIG. 3 is a schematic diagram of a local application of multi-terminal multimedia data communication provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of a remote application of multi-terminal multimedia data communication provided by an embodiment of this application;
  • FIG. 5 is a hardware and software system architecture of a host of a multi-terminal multimedia data communication system provided by an embodiment of the application.
  • FIG. 6 is a hardware and software system architecture of a slave machine of a multi-terminal multimedia data communication system provided by an embodiment of the application;
  • FIG. 7 is a flowchart of data interaction between a host and a slave according to an embodiment of the application.
  • FIG. 8 is a flowchart of another host-slave data interaction provided by an embodiment of this application.
  • FIG. 9 is a flowchart of another host-slave data interaction provided by an embodiment of this application.
  • FIG. 10 is a flowchart of another group of master-slave data interaction provided by an embodiment of this application.
  • FIG. 11 is a schematic diagram of a local application of multi-machine K song provided by an embodiment of the application.
  • FIG. 12A is a schematic diagram of a display interface provided by an embodiment of this application.
  • FIG. 12B is a schematic diagram of another display interface provided by an embodiment of the application.
  • FIG. 13 is a schematic diagram of another display interface provided by an embodiment of the application.
  • FIG. 14 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 15 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 16 is a schematic diagram of another display interface provided by an embodiment of the application.
  • FIG. 17 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 18 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 19 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 20 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 21 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 22 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 23 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 24 is a schematic diagram of a remote application of multi-machine K song provided by an embodiment of the application.
  • FIG. 25 is a schematic diagram of a display interface provided by an embodiment of the application.
  • FIG. 26 is a schematic diagram of another display interface provided by an embodiment of the application.
  • FIG. 27 is a schematic diagram of another display interface provided by an embodiment of the application.
  • FIG. 28 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 29 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 30 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 31 is a schematic diagram of another set of display interfaces provided by an embodiment of the application.
  • FIG. 32 is a schematic diagram of a multi-terminal multimedia data communication system provided by an embodiment of the application.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and execution.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the light-sensing element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats.
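The DSP's format conversion mentioned above can be illustrated per pixel. This is a sketch using the standard full-range BT.601 coefficients, which are an assumption here since the application does not specify which YUV variant the DSP implements:

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV pixel (0-255 components) to RGB,
    illustrative of the DSP's digital image format conversion."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, round(x)))  # keep results in 8-bit range
    return clamp(r), clamp(g), clamp(b)

assert yuv_to_rgb(128, 128, 128) == (128, 128, 128)  # neutral gray is unchanged
```

In practice the DSP performs this conversion in fixed-point arithmetic over whole frames; only the per-pixel arithmetic is shown.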
  • the electronic device 100 may include 1 or N cameras 193, and N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent cognition of the electronic device 100, such as image recognition, face recognition, voice recognition, text understanding, and so on.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • when the electronic device 100 answers a call or receives a voice message, the voice can be heard by bringing the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mic" or "mike", is used to convert sound signals into electrical signals.
  • when making a sound, the user can put the mouth close to the microphone 170C so that the sound signal is input into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • the earphone interface 170D may be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch location but have different touch operation strengths may correspond to different operation instructions. For example: when a touch operation whose intensity of the touch operation is less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
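The pressure-dependent dispatch described above can be sketched as a simple threshold check. The function, the app name, and the normalized threshold value are illustrative assumptions mirroring the short-message example:

```python
# Assumed first pressure threshold, in normalized intensity units.
FIRST_PRESSURE_THRESHOLD = 0.5

def dispatch_touch(app, intensity):
    """Map a touch on an app icon to an operation instruction: a light press
    on the short-message icon views messages, a firm press composes one."""
    if app == "messages":
        if intensity < FIRST_PRESSURE_THRESHOLD:
            return "view_message"
        return "new_message"
    return "open_app"  # default for icons without pressure-dependent actions

assert dispatch_touch("messages", 0.2) == "view_message"
assert dispatch_touch("messages", 0.8) == "new_message"
```

Multiple thresholds could be chained the same way to support more than two pressure levels.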
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • the gyro sensor 180B can determine the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes).
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can use the magnetic sensor 180D to detect the opening and closing of the flip holster.
  • the electronic device 100 can detect the opening and closing of the flip cover according to the magnetic sensor 180D, and accordingly set features such as automatic unlocking of the flip cover.
  • the acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, it can detect the magnitude and direction of gravity. It can also be used to identify the posture of the electronic device, and is used in applications such as landscape/portrait switching and pedometers.
  • the distance sensor 180F is used to measure distance; the electronic device 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
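The reflected-light decision described above can be sketched as follows; the threshold is an assumed value, since this application does not specify how much reflected light counts as "sufficient":

```python
# Minimal sketch of the proximity light sensor 180G decision logic.
# The detection threshold is a hypothetical normalized value.
REFLECTION_THRESHOLD = 0.3  # assumed


def object_nearby(reflected_light):
    """Return True when sufficient infrared reflected light is detected."""
    return reflected_light >= REFLECTION_THRESHOLD


def should_turn_off_screen(reflected_light, in_call):
    """Sketch of the screen-off-during-call behavior described below."""
    return in_call and object_nearby(reflected_light)
```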
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather-case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, etc.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • in some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In some other embodiments, when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
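The temperature processing strategy described above can be sketched as follows. The threshold values and action names are illustrative assumptions, not values from this application:

```python
# Minimal sketch of a temperature processing strategy: throttle when hot,
# protect the battery when cold. All thresholds are hypothetical.
HIGH_TEMP_C = 45.0   # assumed throttling threshold
LOW_TEMP_C = 0.0     # assumed low-temperature threshold


def thermal_policy(temp_c):
    """Return the action for a temperature reading from sensor 180J."""
    if temp_c > HIGH_TEMP_C:
        # reduce performance of the nearby processor for thermal protection
        return "reduce_processor_performance"
    if temp_c < LOW_TEMP_C:
        # heat the battery 142 / boost its output voltage
        return "heat_battery_and_boost_voltage"
    return "normal"
```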
  • the touch sensor 180K is also called a "touch panel". The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 together form a touch screen, also called a "touchscreen".
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the button 190 includes a power button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations applied to different applications can correspond to different vibration feedback effects.
  • Acting on touch operations in different areas of the display screen 194, the motor 191 can also correspond to different vibration feedback effects.
  • different application scenarios (for example: time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to exemplify the software structure of the electronic device 100.
  • FIG. 2 is a software structure block diagram of an electronic device 100 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Communication between layers through software interface.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides application programming interfaces (application programming interface, API) and programming frameworks for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, and a notification manager.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include video, image, audio, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text and controls that display pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
  • the notification manager can also present notifications in the status bar at the top of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text messages are prompted in the status bar, a prompt sound is played, the electronic device vibrates, or the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the function libraries that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • when the touch sensor 180K receives a touch operation, the corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including touch coordinates, a timestamp of the touch operation, etc.).
  • the original input events are stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a tap and the corresponding control is the camera application icon: the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer.
  • the camera 193 captures still images or videos.
  • the embodiments of this application provide a multi-terminal multimedia data communication method and system, which can use wireless connections (such as Wi-Fi, Bluetooth, and mobile communication systems) to achieve low-latency, synchronized multi-point transmission of multimedia data streams between multiple terminal devices.
  • the types of mobile terminal devices in the embodiments of this application are not limited, and may be mobile phones, portable computers, personal digital assistants (PDAs), tablet computers, smart TVs, smart speakers, PCs, smart home devices, wireless terminal devices, communication equipment, and other electronic equipment.
  • the multi-terminal multimedia data communication method in the embodiment of the present application can be used in application scenarios such as multi-machine playback, multi-machine K song, multi-machine conversation, and multi-machine recording.
  • the above-mentioned multimedia applications based on the multi-terminal multimedia data communication method include local applications and remote applications.
  • the embodiments of this application have no special requirements on the device type and the number of networking devices, as long as it is a smart device with WiFi networking capabilities, it can be supported.
  • FIG. 3 shows a schematic diagram of a local application of multi-terminal multimedia data communication.
  • three slaves are connected to a master to realize multimedia sharing.
  • the three slaves are only for illustrative purposes; four or more slaves can be connected to the master.
  • multiple slaves can collect the voice information of their respective users.
  • the microphone of slave 1 can collect the voice information of user 1, the microphone of slave 2 can collect the voice information of user 2, and the microphone of slave 3 can collect the voice information of user 3.
  • the host and the slave need to be networked first.
  • the host interface can display a QR code to provide connection parameters, or activate other short-distance communication functions (such as NFC, WiFi, Bluetooth, etc.) to provide connection parameters and initiate networking.
  • the QR code may carry the WiFi name, WiFi password, address and port number of the main electronic device.
  • the slave machine obtains the connection parameters by scanning the QR code or NFC touch or other short-distance communication methods and joins the network to establish a connection.
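The connection parameters the QR code is said to carry (WiFi name, WiFi password, and the main electronic device's address and port number) could be serialized and parsed as in the following sketch. The JSON encoding and field names are assumptions, since this application does not specify a payload format:

```python
import json


def make_qr_payload(wifi_name, wifi_password, host_addr, host_port):
    """Serialize the connection parameters into a QR-code payload (assumed JSON)."""
    return json.dumps({
        "ssid": wifi_name,      # WiFi name
        "psk": wifi_password,   # WiFi password
        "addr": host_addr,      # address of the main electronic device
        "port": host_port,      # port number of the main electronic device
    })


def parse_qr_payload(payload):
    """Recover the connection parameters after the slave scans the QR code."""
    p = json.loads(payload)
    return p["ssid"], p["psk"], p["addr"], p["port"]
```

The slave would scan the code, parse the payload, join the indicated WiFi network, and then connect to the given address and port.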
  • Smart phones, smart TVs and smart speakers can all be used as hosts.
  • the host downloads karaoke accompaniment and video images from the cloud, and mixes the vocal and accompaniment sent from each slave to eliminate howling and play.
  • the host can download background music from the cloud, mix the human voice sent from each slave with the background music, eliminate howling and play it.
  • Mobile terminals such as smart phones, tablets, smart watches, wearable devices, etc. can be used as slaves.
  • the slave can be used as a microphone to pick up the user's singing or speech and send it to the host via short-distance communication such as WiFi.
  • the slave also receives the downstream data stream from the master to display the K song's picture and lyrics.
  • in addition to displaying the K song picture and lyrics, the slave can also play the mixed audio of the vocal and accompaniment sent by the master. This function is turned on by default when the slave is connected to a headset.
  • the host and slave can be connected to various peripherals: wireless or Bluetooth speakers, amplifying devices used to amplify the sound, and headphones.
  • FIG. 4 shows a schematic diagram of a remote application of multi-terminal multimedia data communication.
  • four slaves are connected to a master to realize multimedia sharing.
  • the master, user 1, and user 2 are at location A; user 3 is at location B; user 4 is at location C. The master and slaves may be located in the same or different regions.
  • the host and slave need to be networked first.
  • the host can display the QR code to provide connection parameters or provide connection parameters by activating short-range communication functions such as NFC to initiate networking, and the slave can scan the QR code or other short-range communication methods such as NFC to obtain networking information to join the network.
  • the remote slave can connect to the host via WiFi or a data communication network (2G/3G/4G/5G).
  • the host can send the QR code to the remote slave, or send networking information to the remote slave, and the QR code or networking information carries the host's IP address and port number.
  • the remote slave participates in the networking by identifying the QR code or parsing the networking information, thereby connecting to the host.
  • Smart phones, smart TVs and smart speakers can all be used as hosts.
  • the host downloads K song accompaniment and video screen or background sound from the cloud, and mixes the vocals sent from each slave with the accompaniment or background sound to eliminate howling and play it locally.
  • the master sends the mixed audio stream and video stream to the local slave through short-distance communication such as WiFi.
  • the host sends the mixed audio and video streams to remote slaves via WiFi or data communication network.
  • the mobile device can be used as a microphone to pick up the user's voice information and send it to the host.
  • the slave also receives the downstream data stream from the master, such as K song pictures, lyrics or accompaniment during conference speech.
  • the local microphone slave can be used as a microphone to pick up the user’s singing or speech and send it to the host via short-distance communication such as WiFi. On the other hand, it receives the downstream data stream from the host to display the K song's picture and lyrics.
  • the remote microphone slave can be used as a microphone to pick up the user's voice information and send it to the host through the data communication network. On the other hand, it receives the downstream data stream from the host, so as to play audio containing the accompaniment and the voice information of itself and other users, display the K song picture and lyrics, or let the user hear other users' voices during calls and meetings.
  • the host and slave can be connected to various peripherals: wireless or Bluetooth speakers, amplifying devices used to amplify the sound, and headphones.
  • the host needs to control the synchronization of the audio and video playback between the host and each slave.
  • the host needs to determine the starting time of each slave to play the audio and video by determining the clock deviation between the host and each slave.
  • the master determines the first initial playback time of the multimedia data played by slave 1 by determining the first clock deviation between the master and slave 1; the first initial playback time indicates the start time at which slave 1 plays the multimedia data.
  • the master determines the second initial playback time of the multimedia data played by slave 2 through the second clock deviation between the master and slave 2; the second initial playback time indicates the start time at which slave 2 plays the multimedia data.
  • the master can adjust the initial playback times of slave 1 and slave 2 according to the start time at which the master plays the multimedia data together with the first clock deviation and the second clock deviation, so that the master, slave 1, and slave 2 play the multimedia data synchronously.
  • the clock deviation can also be determined by the slave.
  • the host sends the starting time of playing multimedia data to the slave 1 and the slave 2.
  • the slave 1 determines the first clock deviation between the master and the slave 1, and determines the start time of the multimedia data played by the slave 1 according to the start time of the master playing multimedia data and the first clock deviation.
  • the slave 2 determines the second clock deviation between the master and the slave 2, and determines the start time of the multimedia data played by the slave 2 according to the start time of the master playing multimedia data and the second clock deviation.
  • in other embodiments, the host determines the clock deviations, sends its own start playback time and the clock deviations to the slaves, and each slave determines its own start playback time.
  • the host determines the first clock deviation between the host and slave 1, and the second clock deviation between the host and slave 2. The host sends the start time of the multimedia data and the first clock deviation to slave 1,
  • and slave 1 determines the start time at which it plays the multimedia data according to the host's start time and the first clock deviation. The host sends its start time and the second clock deviation to slave 2,
  • and slave 2 determines the start time at which it plays the multimedia data according to the host's start time and the second clock deviation.
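The clock-deviation-based scheduling described above can be sketched as follows. The four-timestamp (NTP-style) offset estimate is an assumed measurement method, since this application does not fix how the clock deviation between master and slave is determined:

```python
def estimate_clock_offset(t1, t2, t3, t4):
    """Estimate slave_clock - master_clock from a request/reply exchange.

    t1: master send time, t2: slave receive time,
    t3: slave reply time,  t4: master receive time,
    each taken on the respective device's own clock.
    This NTP-style formula is an assumption, not from the patent.
    """
    return ((t2 - t1) + (t3 - t4)) / 2.0


def slave_start_time(master_start, clock_offset):
    """Initial playback time on the slave's clock for synchronized playback.

    The slave schedules playback at the master's start time shifted by the
    measured clock deviation, so all devices start at the same real instant.
    """
    return master_start + clock_offset
```

With such offsets in hand, the master (or each slave) can compute a per-device initial playback time so that the master, slave 1, and slave 2 begin playing the multimedia data at the same moment.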
  • FIG. 5 shows a schematic diagram of a hardware and software system architecture 500 of a multi-terminal multimedia data communication system host.
  • the host system architecture 500 may include a multi-machine cooperative slave data receiving module 510, a multi-machine cooperative downlink data receiving module 520, an audio decoding module 530, an anti-howling module 540, a sound mixing module 550, a sound effect processing module 551, a cooperative audio output control module 560, a multi-machine cooperative data sending module 570, a Bluetooth protocol stack 571, a USB interface 572, a recording algorithm module 573, a Codec 574, a display interface 575, a display 580, a cooperative APP 581, a Modem interface 590, a WiFi protocol stack/chip 591, a Bluetooth chip/antenna 592, a TypeC digital interface 593, a 3.5mm headphone jack/TypeC analog interface 594, an earpiece or speaker 595, and a MIC array 596.
  • the cooperative audio output control module 560 may include sampling rate and bit width conversion 560A, volume control 560B, channel selection 560C, channel combination 560D, and so on.
  • the collaborative APP can include multi-machine play 581A, multi-machine K song 581B, multi-machine call 581C, multi-machine recording 581D, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the hardware and software system architecture 500 of the multi-terminal multimedia data communication system host.
  • the hardware and software system architecture 500 of the multi-terminal multimedia data communication system host may include more or fewer modules or components than shown.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the multi-machine coordinated slave data receiving module 510 is used to receive the uplink data sent from each slave with a very low time delay.
  • the multi-machine cooperative downlink data receiving module 520 is configured to receive K song accompaniment and screen video from the cloud with extremely low latency, or to read K song accompaniment and screen video from local storage.
  • the audio decoding module 530 is used to decode the audio data stream. If it is a channel encoding format, output the PCM code stream of each channel. If it is an audio object encoding format, output audio object information and channel PCM code stream.
  • the anti-howling module 540 is used to eliminate the echo mixed into the host MIC and the slave MIC and avoid howling.
  • the anti-howling reference signal is provided by the audio decoding module.
  • the sound mixing module 550 is used for mixing the upward human voice with the accompaniment sound.
  • the sound effect processing module 551 is used for rendering effects, such as reverberation and 3D surround sound.
  • the channel PCM code stream processed by the mixing module 550 can be directly transmitted to the collaborative audio output control module 560.
  • the user can choose to perform further sound effect processing 551 on the channel PCM code stream after mixing; the rendered channel PCM code stream is then transmitted to the cooperative audio output control module.
  • the cooperative audio output control module 560 has multiple input terminals, and can receive inputs from multiple modules, such as a multi-machine cooperative downlink data receiving module 520, a sound mixing module 550, and a sound effect processing module 551. At the same time, it has the following low-latency sub-function modules inside:
  • Sampling rate and bit width conversion 560A: converts the sampling rate and bit width so that they meet the output requirements.
  • Volume control 560B: used to control the output volume.
  • Channel selection 560C: used to select which channel to output.
  • Channel merging 560D: used to merge the channels into one channel for playback to increase the volume.
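The volume control 560B and channel merging 560D sub-functions can be sketched as follows on floating-point PCM samples. The linear gain model and the clipping limit are assumptions:

```python
def apply_volume(samples, gain):
    """Volume control: scale each PCM sample by a linear gain (assumed model)."""
    return [s * gain for s in samples]


def merge_channels(channels, limit=1.0):
    """Channel merging: sum the channels sample-by-sample into one channel.

    Summing increases the perceived volume, as described above; the result
    is clipped to [-limit, limit] to avoid overflow (clipping is an
    assumption, not from the patent).
    """
    return [max(-limit, min(limit, sum(frame))) for frame in zip(*channels)]
```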
  • the cooperative audio output control module 560 can also input audio streams to the multi-machine cooperative data sending module 570, the Bluetooth protocol stack 571, the USB interface 572, the recording algorithm 573 and the Codec574.
  • the cooperative audio output control module 560 sends the delay value to the display interface 575, so as to align the delay and achieve synchronization of accompaniment and lyrics.
  • the collaboration APP 581 is used to display collaboration functions on the screen and receive user input.
  • the user controls the multi-machine coordinated data transmission 570 and coordinated audio output control 560 modules through the operation interface of the cooperative APP.
  • the multi-machine cooperative data sending module 570 is docked with the Modem interface 590 and the WiFi protocol stack/chip 591, and can send audio streams and video streams to each slave with extremely low delay.
  • the Bluetooth protocol stack 571 can also be docked with the Bluetooth chip/antenna 592, and the USB interface 572 can be docked with the TypeC digital interface 593, to send audio streams and video streams to each slave.
  • the Codec 574 is connected with the earpiece or speaker 595 and the 3.5mm headphone jack/TypeC analog interface 594, and is used to play the PCM code stream after mixing 550 or sound effect processing 551.
  • the host can also be used as a microphone.
  • the Codec 574 is docked with the MIC array 596 to receive user input captured by the host microphone.
  • FIG. 6 shows a schematic diagram of a software and hardware system architecture 600 of a slave of a multi-terminal multimedia data communication system.
  • the multi-machine cooperative host data receiving module is used to receive the media data stream sent by the host with a very low delay
  • the slave MIC 610 is connected to the Codec 620 (or other interface chip); the captured audio passes through the cooperative audio output control into the multi-machine cooperative data sending module, which sends it to the host.
  • the collaborative K song is taken as an example to introduce the data interaction process between the host and the slave of the multi-terminal multimedia data communication system proposed in this embodiment.
  • the host in the collaborative K song application can be a smart TV or a smart phone.
  • both the master and slave can be connected to external speakers for sound amplification, and the slave can also be connected to headphones. Therefore, cooperative karaoke can be divided into four modes: TV (master) + multiple local slaves; TV (master) + multiple local and remote slaves; smartphone (as master and microphone) + multiple local slaves; smartphone (as master and microphone) + multiple local and remote slaves.
  • FIG. 7 shows a data interaction process between the host and the slave when a smart TV (TV) is used as a master and multiple smart phones are used as slaves.
  • Step 701 The slave selects a song, and the song information is transmitted to the host via a short-distance communication method such as WiFi or Bluetooth.
  • Step 702: The multi-machine coordinated downlink data receiving module receives the K song accompaniment and screen video from the cloud with extremely low latency, or reads the K song accompaniment and screen video from local storage. It sends the audio data stream to the audio decoding module, sends audio data streams that can be played directly to the cooperative audio output control module, and sends the video data stream to the display interface.
  • Step 703: The host sends the K song picture and lyrics to the slave through the multi-machine collaborative data sending module, and the slave obtains the lyrics and video information of the song.
  • Step 704 The audio decoding module decodes the audio data stream, and the display interface sends the video data stream to the display for display.
  • the accompaniment audio processed by the audio decoding module is sent to the anti-howling module as an anti-howling reference signal, and at the same time to the mixing module and the sound effect processing module.
  • Step 705 The user sings, and the slave MIC obtains the user's voice information, and transmits the singing voice to the multi-machine coordinated slave data receiving module.
  • Step 706: The multi-machine cooperative slave data receiving module sends the slave's vocal MIC data stream to the master through a short-distance communication method such as WiFi or Bluetooth. Since the voice picked up by the slave's MIC is mixed with the TV accompaniment sound, anti-howling processing by the anti-howling module is needed.
  • Step 707: Anti-howling processing is performed to eliminate the echo mixed into the slave's MIC signal and avoid howling.
  • Step 708: Sound mixing. A clean vocal is obtained after the anti-howling processing, and the clean vocal and the accompaniment are then mixed by the mixing module.
  • The mixed channel PCM stream is sent to the cooperative audio output control module.
  • Step 709: Optionally, sound effect processing can render the mixed channel PCM stream to generate reverberation and 3D surround sound.
  • Step 710: The cooperative audio output control module can perform processing such as sampling-rate and bit-width conversion, volume control, channel selection, or channel combination on the mixed channel PCM stream (or on the effect-processed stream), and sends the processed channel PCM stream to the codec. In addition, to keep the lyrics and the accompaniment synchronized, the cooperative audio output control module sends a delay value to the display interface to determine and adjust the relative delay between the lyrics and the music.
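The processing in step 710 can be illustrated with a toy version of the cooperative audio output control: volume control, linear-interpolation sample-rate conversion, and bit-width conversion to 16-bit PCM for the codec. This is a minimal sketch under assumed function and parameter names, not the module's real implementation:

```python
def process_output(pcm, volume=1.0, in_rate=48000, out_rate=44100):
    """Volume control plus linear-interpolation sample-rate conversion
    on a mono float PCM stream in [-1, 1] (illustrative only)."""
    scaled = [max(-1.0, min(1.0, s * volume)) for s in pcm]
    if in_rate == out_rate or not scaled:
        return scaled
    n_out = int(len(scaled) * out_rate / in_rate)
    out = []
    for i in range(n_out):
        pos = i * in_rate / out_rate        # fractional source position
        j = int(pos)
        frac = pos - j
        nxt = scaled[min(j + 1, len(scaled) - 1)]
        out.append(scaled[j] * (1 - frac) + frac * nxt)
    return out

def to_int16(pcm):
    """Bit-width conversion: float [-1, 1] to 16-bit PCM for the codec."""
    return [int(max(-32768, min(32767, round(s * 32767)))) for s in pcm]
```

Real implementations would use a proper polyphase resampler; linear interpolation is used here only to keep the sketch short.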
  • Step 711: The user's singing voice is played through the master's receiver or speaker.
  • FIG. 8 shows the interaction process between the master and the slaves when the smart TV is used as the master and mobile devices are used as a local slave and remote slaves.
  • Step 801: Both the master and the slaves can select songs; here, song selection by the local slave is taken as an example.
  • Step 802: The master sends the karaoke picture and lyrics to the remote slave via WiFi or a mobile data network (2G/3G/4G/5G), and the remote slave obtains the song lyrics and video information.
  • Step 803: The multi-machine cooperative slave data receiving module sends the remote slave's vocal MIC data stream to the master via WiFi or a mobile data network (4G/5G) for anti-howling processing.
  • Step 804: The master's multi-machine cooperative data sending module sends the processed PCM stream to the remote slave.
  • Step 805: The remote slave's receiver or speaker plays the accompaniment transmitted from the master, and simultaneously plays the singing voices of the local user and the remote user.
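The steps above stream a slave's vocal MIC data to the master with low latency. The patent does not define a wire format; one plausible framing (purely illustrative) prefixes each PCM chunk with a sequence number and a capture timestamp so the master can reorder frames and measure transport delay:

```python
import struct
import time

# header: sequence number (uint32) + capture timestamp in microseconds (uint64)
HEADER = struct.Struct("!IQ")

def pack_frame(seq, pcm16):
    """Build one datagram: header followed by big-endian 16-bit PCM samples."""
    ts_us = int(time.time() * 1_000_000)
    return HEADER.pack(seq, ts_us) + struct.pack(f"!{len(pcm16)}h", *pcm16)

def unpack_frame(datagram):
    """Recover sequence number, capture timestamp, and PCM samples."""
    seq, ts_us = HEADER.unpack_from(datagram)
    body = datagram[HEADER.size:]
    return seq, ts_us, list(struct.unpack(f"!{len(body) // 2}h", body))
```

Each datagram would then be sent over an unreliable, low-latency transport such as UDP (`socket.sendto`), trading occasional frame loss for latency, which suits real-time vocal streams better than retransmission.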
  • In another mode, a mobile phone acts as both the master and a microphone.
  • Step 901: The master's MIC picks up the user's voice.
  • Step 902: The master's MIC data stream is sent to the anti-howling processing module through the master's codec.
  • Step 903: Since the built-in speaker of a mobile phone is relatively small, when the phone is used as the master it can be connected to external speakers to amplify the sound and create a KTV effect.
  • The following takes a smart TV as the master and multiple smartphones as slaves as an example. It should be noted that the number of slaves can be arbitrary; this embodiment uses two slaves for description.
  • FIG. 11 shows a local application system for multi-machine karaoke, including a master smart TV, microphone slave 1, and microphone slave 2. Both slaves are smartphones, and the master and the two slaves are within the same local area network.
  • FIG. 12A shows a graphical user interface (GUI) on the TV side, and the GUI is the network configuration interface 1201 of the TV.
  • FIG. 12B shows a graphical user interface of a mobile phone; the GUI is the wireless local area network configuration interface 1202 of the phone. As shown in FIG. 12A and FIG. 12B, the master TV and the two local phone slaves are connected to the same local area network.
  • FIG. 13 shows a GUI of the TV, which may be called the TV desktop 1301.
  • The recommendation interface 1401 may include a karaoke console control 1402 for entering the karaoke interface.
  • A GUI as shown in FIG. 15 is then displayed; this GUI may be called the karaoke console interface 1501.
  • The karaoke console interface 1501 may include a control 1502 for entering the phone code-scanning karaoke interface.
  • Another GUI, as shown in FIG. 16, is then displayed; this GUI may be called the phone code-scanning interface 1601, which can include a song-ordering two-dimensional code 1602.
  • Songs can be added to the karaoke session by scanning the song QR code 1602, so a KTV effect can be created with the TV and mobile phones.
  • FIG. 17 shows a GUI of a mobile phone, and the GUI is a desktop 1701 of the mobile phone.
  • The same karaoke application is installed on the mobile phone and the TV.
  • When the mobile phone detects that the user clicks the icon 1702 of the karaoke application on the desktop 1701, the karaoke application can be started.
  • FIG. 17 also shows the karaoke application startup interface 1703.
  • The GUI may include a home page interface 1704, which displays recommendation information.
  • The GUI may also include a menu bar 1705 with a control 1706 for entering the "My" interface.
  • A GUI as shown in (a) in FIG. 18 is then displayed; this GUI may be called the "My" interface 1801.
  • The GUI may include controls 1802, as well as a control 1803 for scanning a two-dimensional code.
  • When the mobile phone detects that the user has clicked the control 1803, it can display an interface 1804 for scanning a two-dimensional code, as shown in (c) in FIG. 18.
  • The two-dimensional-code scanning interface 1804 is then aimed at the song-ordering two-dimensional code on the TV.
  • The GUI 1901 shown in (a) in FIG. 19 is then displayed.
  • The GUI includes a karaoke console interface 1902 and a menu bar 1903.
  • The menu bar 1903 includes a karaoke console control 1904 for ordering songs, a controller control 1905 for the user to sing, and a control 1906 for displaying the songs that have been ordered; the karaoke console control 1904 is now highlighted.
  • The karaoke console interface 1902 includes a disconnect control 1908; the user can click it to disconnect the mobile phone from the TV.
  • The karaoke console interface 1902 may also include a control 1909 for singing a song and a control 1910 for adding a song to the song list.
  • The karaoke console interface 1902 also includes a search bar 1907.
  • In the search bar 1907, the user can search for songs by entering the song name or the artist's name. For example, as shown in (a) in FIG. 19, the user searches for the song "Song 123", and various versions of the song then appear under the search bar 1907. Illustratively, the user clicks the control 1909 to sing the selected song.
  • The master TV can cache the song from the cloud or locally. After caching succeeds, the master TV displays the screen shown in (b) in FIG. 19 and plays the song "Song 123" selected by the user.
  • The user can click the controller control for singing on the mobile phone to enter the TV controller interface and perform the singing.
  • When the mobile phone detects that the user clicks the controller control for singing, it displays the GUI shown in (b) in FIG. 20, which contains a microphone control 2001 that the user can open by clicking or sliding.
  • While the user is singing, the microphone of the mobile phone picks up the user's singing voice, which is played through the TV speaker.
  • When the mobile phone detects that the user clicks the controller control for singing, it enters the TV controller interface and displays the GUI shown in FIG. 21. At this time, the TV console interface displays both the song video and the lyrics, which is convenient for users.
  • The GUI contains a control 2101 for turning the barrage (on-screen comments) on and off. When the control 2101 is turned on, an emoji barrage clicked by the user appears on both the TV and the mobile phone video screens; when the control 2101 is turned off, an emoji barrage clicked by the user appears only on the TV screen and not on the mobile phone.
  • The GUI may also include a control 2102 for turning on the original vocal, a control 2103 for pausing, a microphone control 2104 for singing, a control 2105 for skipping to the next song, a control 2106 for re-singing, a control 2107 for selecting more options, and a control 2108 for sending emoji barrage.
  • The master TV then displays the emoji barrage, as shown in (b) in FIG. 22.
  • The emoji barrage may slide in from the top, bottom, left, or right of the TV screen, or may fade in and out; this is not limited here.
  • When the mobile phone detects that the user clicks the control 2107 for more options, it displays the GUI shown in (b) in FIG. 23, which may include a code-scanning control 2301, a control 2302 for sharing, and a control 2303 for editing barrage.
  • When the mobile phone detects that the user clicks the sharing control 2302, the interface shown in (c) in FIG. 23 is displayed, and the karaoke audio can be shared to other third-party applications.
  • The control 2303 for editing barrage can turn the emoji barrage on and off, and also lets the user edit and send text barrage.
  • The following again takes a smart TV as the master and multiple smartphones as slaves as an example. It should be noted that the number of slaves can be arbitrary.
  • FIG. 24 shows a remote application system for multi-machine karaoke, including a master smart TV, local microphone slave 1, remote microphone slave 2, and remote microphone slave 3.
  • All three slaves are smartphones.
  • The master and local slave 1 are in the same wireless LAN, while remote slave 2 and remote slave 3 are not in the same place as the master.
  • As in the local application system described above, local devices can join the karaoke session by scanning the QR code or through other short-distance communication methods.
  • This embodiment focuses on the operation flow by which a remote slave device joins the karaoke session.
  • In addition to the song-ordering QR code, the code-scanning and song-ordering interface contains a sharing control 2501, through which the karaoke networking information can be sent to remote slaves to realize remote karaoke.
  • When the TV detects that the user uses the remote control to select the sharing control, it displays the GUI shown in FIG. 26; at this time, the friend list 2601 of user 1 is displayed on the GUI.
  • The friend list portion is shown enlarged in FIG. 27.
  • The friend list contains a control 2701 for closing sharing.
  • When the TV detects that the user uses the remote control to select the multi-select control 2703, it displays the GUI shown in (b) in FIG. 27: a selection control 2705 appears in front of each friend's profile picture, and the GUI also displays a control 2706 for completing the selection and a control 2707 for canceling multiple selection.
  • The user then sends the karaoke invitation to user 2 and user 3, who are in other places.
  • That is, the QR code used for code-scanning song ordering on the master is sent to the remote slaves.
  • The karaoke invitation that the master sends to a slave through sharing can be a QR code or a link, and can be sent to remote users as a message within the karaoke application or as an SMS.
  • The mobile phone can also act as the master to initiate cooperative karaoke.
  • The karaoke application can then be started; (b) in FIG. 30 shows the startup interface of the karaoke application, and (c) in FIG. 30 shows the control 3001 for multi-person cooperative karaoke.
  • When the mobile phone detects that the user clicks the multi-person karaoke control 3001, it displays the GUI shown in FIG. 31, and the phone enters the karaoke console interface.
  • The bottom of the karaoke console interface contains a menu bar with a control 3101 for code-scanning song ordering.
  • When the mobile phone detects that the user clicks the control 3101, it displays the GUI shown in (b) in FIG. 31.
  • The GUI contains a song-ordering QR code and a sharing control. Local users can join the karaoke session by scanning the QR code; for remote users, the master can use the sharing control to share the QR code with them.
  • The embodiments of this application provide a multi-terminal multimedia data communication method that can use wireless connections (such as Wi-Fi, Bluetooth, or mobile communication systems) to realize low-latency simultaneous multipoint transmission of multimedia data streams between multiple terminal devices. This enables multiple terminals to work together to establish new applications and provide new experiences, and also facilitates the interconnection and sharing of multimedia content between devices, making full use of the advantages of each device.
  • The multi-terminal multimedia data communication method provided by the embodiments of this application can be used in application scenarios such as multi-machine playback, multi-machine karaoke, multi-machine conversation, and multi-machine recording.
  • Each of these multimedia applications based on multi-terminal cooperation supports two modes: local application and remote application.
  • FIG. 32 is a schematic diagram of a multi-terminal multimedia data communication system provided by an embodiment of this application.
  • the system includes: a master electronic device 11, a first slave electronic device 12, and a second slave electronic device 13, wherein:
  • the master electronic device is used to establish connections with the first slave electronic device and the second slave electronic device respectively;
  • the first slave electronic device is configured to receive a first play instruction, and is also configured to send the first play instruction to the master electronic device;
  • the master electronic device is also configured to play the first multimedia data in response to the first play instruction, and at the same time to send at least a part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously;
  • the first slave electronic device is also used to receive the first human voice, while the second slave electronic device is used to receive the second human voice;
  • the master electronic device is also configured to receive the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device;
  • the master electronic device is also configured to mix the first human voice, the second human voice, and the first multimedia data, and to generate and play second multimedia data.
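The mixing described above can be sketched as follows: the master sums the received voices with the accompaniment from the first multimedia data and clips the result. This is a minimal stand-in for the master's mixing module, not the patent's implementation; the function name and gain values are illustrative:

```python
def mix_streams(streams, gains=None):
    """Mix aligned mono PCM streams (floats in [-1, 1]) into one stream,
    hard-clipping the weighted sum -- a stand-in for the mixing module."""
    if gains is None:
        gains = [1.0] * len(streams)
    length = min(len(s) for s in streams)   # truncate to the shortest stream
    mixed = []
    for i in range(length):
        total = sum(g * s[i] for g, s in zip(gains, streams))
        mixed.append(max(-1.0, min(1.0, total)))
    return mixed

# e.g. second multimedia data from two voices plus accompaniment:
# second = mix_streams([voice1, voice2, accompaniment], gains=[1.0, 1.0, 0.6])
```

A production mixer would typically use soft limiting rather than hard clipping to avoid audible distortion when the sum exceeds full scale.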
  • the present application also provides a storage medium, including: a readable storage medium and a computer program, the computer program is used to implement the multi-terminal multimedia data communication method provided by any of the foregoing embodiments.
  • All or part of the steps in the foregoing method embodiments can be implemented by a program instructing relevant hardware.
  • the aforementioned program can be stored in a readable memory.
  • When executed, the program performs the steps of the foregoing method embodiments. The foregoing memory (storage medium) includes: read-only memory (ROM), RAM, flash memory, hard disk, solid-state drive, magnetic tape, floppy disk, optical disc, and any combination thereof.


Abstract

Embodiments of the present application relate to the technical field of multimedia, and provided thereby is a multi-terminal multimedia data communication method, which may use a wireless connection to achieve the transmission of multimedia data between a plurality of terminals. The specific solution is as follows: a master electronic device establishes a connection with a first slave electronic device and a second slave electronic device, respectively. The first slave electronic device receives a playback instruction and sends the playback instruction to the master electronic device. The master electronic device responds to the playback instruction, plays back first multimedia data, and simultaneously sends at least part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device and the second slave electronic device simultaneously play back the first multimedia data. The first slave electronic device receives a first human voice and sends same to the master device, while the second slave electronic device receives a second human voice and sends same to the master device. The master electronic device mixes the first human voice, the second human voice and the first multimedia data so as to generate and play back second multimedia data.

Description

Multi-terminal multimedia data communication method and system
This application claims priority to Chinese Patent Application No. 201910533498.0, filed with the China National Intellectual Property Administration on June 19, 2019 and entitled "Multi-terminal multimedia data communication method and system", which is incorporated herein by reference in its entirety.
Technical Field
The embodiments of the present application relate to the field of multimedia technology, and in particular to a multi-terminal multimedia data communication method and system.
Background
With the popularization and increasing intelligence of mobile audio devices such as mobile phones, tablets, personal computers (PCs), and wireless speakers, a home usually contains multiple smart devices at the same time, connected by wire or wirelessly to the same local area network. Cross-device interaction and collaborative entertainment have therefore become an important development direction for enhancing the user entertainment experience.
In current multi-terminal cooperative uplink/downlink multimedia applications, taking multi-machine karaoke as an example, there are generally multiple electronic devices (such as mobile phones), one of which is the host and the others are slaves. After the room owner establishes a karaoke room through the host and turns on the KTV function, other slaves can join the room. The room owner can order songs online; the songs are like common MVs in KTV rooms, including video, subtitles, and accompaniment. An audience member can apply for the microphone through a slave and, after being approved by the room owner, order songs online and queue for the microphone. When it is the turn to play the song ordered by that member, the member becomes the singer. The member can adjust the accompaniment and vocal volume through the slave, but the song control authority (pausing or skipping songs) still belongs to the host. In this process the "microphone" is passed in sequence to different members, so there is always only one singer at a time, which cannot satisfy the real-life KTV scenario in which multiple people sing simultaneously.
Summary of the Invention
The embodiments of the present application provide a multi-terminal multimedia data communication method and system, which enable multiple terminals to collaborate to establish new applications and provide new experiences, and also enable convenient interconnection and sharing of multimedia content between devices, so as to make full use of the advantageous functions of each device. In addition, the present invention is also conducive to building an ecosystem and application system based on multi-machine collaboration.
To achieve the foregoing objectives, the embodiments of this application adopt the following technical solutions:
In a first aspect, an embodiment of the present application provides a multi-terminal multimedia data communication method, applied to multiple electronic devices including a master electronic device, a first slave electronic device, and a second slave electronic device. The method includes: the master electronic device establishes connections with the first slave electronic device and the second slave electronic device respectively. The first slave electronic device receives a first play instruction and sends it to the master electronic device. In response to the first play instruction, the master electronic device plays first multimedia data and at the same time sends at least a part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously. The first slave electronic device receives a first human voice while the second slave electronic device receives a second human voice. The master electronic device receives the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, mixes the first human voice, the second human voice, and the first multimedia data, and generates and plays second multimedia data.
In this solution, the master electronic device can establish connections with multiple slave electronic devices to realize multimedia data interaction between the master and the slaves. The user can send the first play instruction to the master electronic device through the first slave electronic device, and the master electronic device plays the first multimedia data in response. At the same time, the master electronic device can send at least a part of the first multimedia data to the multiple slave electronic devices, and use a synchronization algorithm to monitor the data synchronization between the master and the slaves in real time, so that they play the first multimedia data synchronously. The multiple slave electronic devices can then simultaneously pick up their respective users' voices; for example, the first slave electronic device receives the first human voice while the second slave electronic device receives the second human voice. The slaves send the received voices to the master electronic device, which mixes them with the first multimedia data, performs anti-howling and sound-mixing processing, and generates and plays the second multimedia data.
With reference to the first aspect, in a first embodiment of the first aspect, that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously specifically includes: the master electronic device determines a first clock deviation between the master electronic device and the first slave electronic device; the master electronic device determines a second clock deviation between the master electronic device and the second slave electronic device; based on the first clock deviation, the master electronic device determines a first start playback moment at which the first slave electronic device plays the first multimedia data, where the first start playback moment indicates the moment at which the first slave electronic device starts playing the first multimedia data; and based on the second clock deviation, the master electronic device determines a second start playback moment at which the second slave electronic device plays the first multimedia data, where the second start playback moment indicates the moment at which the second slave electronic device starts playing the first multimedia data.
In this solution, to make the multiple terminal devices play the first multimedia data synchronously, the master electronic device determines the first clock deviation between itself and the first slave electronic device, and from it determines the first start playback moment of the first slave electronic device. Likewise, from the second clock deviation between itself and the second slave electronic device, the master determines the second start playback moment of the second slave electronic device. Using its own start playback moment together with the first and second clock deviations, the master electronic device can therefore adjust the moments at which the first and second slave electronic devices start playing the first multimedia data, so that the master and both slaves play the first multimedia data synchronously.
It should be noted that the clock deviation may also be determined by the slave electronic devices. In that case, the master electronic device sends the moment at which it starts playing the first multimedia data to the first and second slave electronic devices. The first slave electronic device determines the first clock deviation between the master and itself, and derives its own start playback moment from the master's start moment and the first clock deviation. Similarly, the second slave electronic device determines the second clock deviation between the master and itself, and derives its own start playback moment from the master's start moment and the second clock deviation.
In yet another possible case, the master electronic device determines the clock deviations and sends its own start playback moment together with the corresponding clock deviation to each slave electronic device, and each slave determines its own start playback moment. That is, the master determines the first clock deviation between itself and the first slave electronic device and the second clock deviation between itself and the second slave electronic device; it sends its start playback moment and the first clock deviation to the first slave electronic device, which derives its own start playback moment from them; and it sends its start playback moment and the second clock deviation to the second slave electronic device, which likewise derives its own start playback moment.
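The embodiments above leave open how the clock deviation is measured. A common way to estimate it is an NTP-style timestamp exchange; the sketch below illustrates that approach under the assumption of a roughly symmetric network path (the function names are illustrative, not taken from the patent):

```python
def clock_offset(t1, t2, t3, t4):
    """NTP-style estimate of (slave clock - master clock).

    t1: master's send time, t2: slave's receive time,
    t3: slave's reply time,  t4: master's receive time.
    Assumes the forward and return path delays are roughly equal."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

def slave_start_moment(master_start, offset):
    """Translate the master's start-playback moment into the slave's clock."""
    return master_start + offset
```

In practice the exchange would be repeated several times and the estimate with the smallest round-trip time kept, since that sample is least affected by queueing delay.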
With reference to the first aspect or the first embodiment of the first aspect, in a second embodiment of the first aspect, establishing, by the master electronic device, connections with the first slave electronic device and the second slave electronic device respectively includes: the master electronic device and the first slave electronic device are in the same wireless local area network; the master electronic device displays a WiFi connection identifier; and the first slave electronic device establishes a connection with the master electronic device by recognizing the WiFi connection identifier.
In this solution, the master electronic device and the first slave electronic device are in the same wireless local area network, that is, at the same location, so the two devices establish a connection over WiFi. The master electronic device displays a WiFi connection identifier that carries WiFi networking information, including the WiFi name, the WiFi password, and the address and port number of the master electronic device. The first slave electronic device can establish a connection with the master electronic device by recognizing this identifier.
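As an illustration of the networking information carried by the WiFi connection identifier, the sketch below packs the listed fields into a JSON payload. The encoding, and the idea that the identifier might be rendered as a scannable code, are assumptions; the patent names only the fields.

```python
import json

# Hypothetical payload format for the WiFi connection identifier.
# The patent states which fields it carries (WiFi name, WiFi password,
# master address, port number) but not how they are encoded.

def encode_identifier(ssid, password, host, port):
    """Pack the networking information into a JSON string."""
    return json.dumps({"ssid": ssid, "pwd": password,
                       "host": host, "port": port})

def decode_identifier(payload):
    """Recover the fields a slave needs to connect to the master."""
    info = json.loads(payload)
    return info["ssid"], info["pwd"], info["host"], info["port"]

payload = encode_identifier("RoomAP", "secret123", "192.168.1.10", 9000)
print(decode_identifier(payload))
# ('RoomAP', 'secret123', '192.168.1.10', 9000)
```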
With reference to the first aspect or the first embodiment of the first aspect, in a third embodiment of the first aspect, establishing, by the master electronic device, connections with the first slave electronic device and the second slave electronic device respectively includes: the master electronic device and the second slave electronic device are not in the same wireless local area network; the master electronic device sends networking information to the second slave electronic device; and the second slave electronic device establishes a connection with the master electronic device by parsing the networking information.
In this solution, the master electronic device and the second slave electronic device may be at different geographic locations and therefore not in the same local area network. In this case, the master electronic device can send networking information to the second slave electronic device, and the second slave electronic device establishes a connection with the master electronic device by parsing the networking information. The networking information may include the IP address and port number of the master electronic device. The master electronic device and the second slave electronic device can establish the connection over WiFi and a data communication network.
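A minimal sketch of this remote connection step: the slave parses the master's networking information (IP address and port number) and connects. Loopback addresses stand in for the real devices, and the use of TCP is an assumption; the patent names only the address and port fields.

```python
import socket
import threading

# The master listens on an address; port 0 lets the OS pick a free port.
master_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
master_sock.bind(("127.0.0.1", 0))
master_sock.listen(1)

# The networking information the master would send to the remote slave.
networking_info = {"ip": "127.0.0.1",
                   "port": master_sock.getsockname()[1]}

def master_accept():
    # Accept one slave connection and acknowledge it.
    conn, _ = master_sock.accept()
    conn.sendall(b"connected")
    conn.close()

t = threading.Thread(target=master_accept)
t.start()

# The slave parses the networking information and connects to the master.
slave_sock = socket.create_connection((networking_info["ip"],
                                       networking_info["port"]))
reply = slave_sock.recv(16)
t.join()
slave_sock.close()
master_sock.close()
print(reply.decode())
```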
With reference to the third embodiment of the first aspect, in a fourth embodiment of the first aspect, after the master electronic device mixes the first human voice, the second human voice, and the multimedia data to generate and play the second multimedia data, the method further includes: the master electronic device sends the second multimedia data to the second slave electronic device; and the master electronic device and the second slave electronic device play the second multimedia data synchronously.
In this solution, because the second slave electronic device and the master electronic device are not at the same location, after the master electronic device receives the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, it mixes the first human voice, the second human voice, and the first multimedia data. After generating the second multimedia data, in order to share it with the second slave electronic device at the other location, the master electronic device sends the second multimedia data to the second slave electronic device so that the second slave electronic device can play it. During this process, the master electronic device needs to monitor synchronization between itself and the second slave electronic device in real time, so that the two devices play the second multimedia data synchronously.
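The mixing step described above can be sketched as a per-sample sum of the vocal tracks and the accompaniment, clipped to the 16-bit PCM range. This omits the anti-howling and gain processing a real implementation would apply, and the sample values are hypothetical.

```python
# Minimal sketch of mixing vocal tracks with an accompaniment track.
# Samples are signed 16-bit PCM values; tracks may differ in length.

def mix(*tracks):
    """Sum the tracks sample by sample, clipping to the 16-bit range."""
    length = max(len(t) for t in tracks)
    mixed = []
    for i in range(length):
        s = sum(t[i] for t in tracks if i < len(t))
        mixed.append(max(-32768, min(32767, s)))  # clip to int16
    return mixed

voice1 = [1000, 2000, 3000]
voice2 = [500, -500, 500]
accompaniment = [20000, 20000, 20000, 20000]
print(mix(voice1, voice2, accompaniment))
# [21500, 21500, 23500, 20000]
```

The same function extends naturally to the three-voice case described later: pass a third vocal track as an additional argument.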
With reference to the first aspect, in a fifth embodiment of the first aspect, at least a part of the first multimedia data includes the audio, the video, or the lyrics of the first multimedia data.
In this solution, at least a part of the first multimedia data may be any one of: the audio; the video; the lyrics; the audio and the video; the audio and the lyrics; the video and the lyrics; or the audio, the video, and the lyrics.
With reference to the first aspect, in a sixth embodiment of the first aspect, the method further includes: the master electronic device receives a second play instruction input by the user; in response to the second play instruction, the master electronic device plays third multimedia data and at the same time sends at least a part of the third multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the third multimedia data synchronously; the first slave electronic device receives the first human voice while the second slave electronic device receives the second human voice; and the master electronic device receives the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, mixes the first human voice, the second human voice, and the third multimedia data, and generates and plays fourth multimedia data.
In this solution, the user can issue a play instruction directly to the master electronic device, and the master electronic device plays the third multimedia data in response. It should be noted that the third multimedia data may be the same as or different from the first multimedia data.
With reference to the first aspect or the sixth embodiment of the first aspect, in a seventh embodiment of the first aspect, the method further includes: the first slave electronic device receives the first human voice, the second slave electronic device receives the second human voice, and the master electronic device receives a third human voice, all at the same time; and the master electronic device receives the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, mixes the first human voice, the second human voice, the third human voice, and the fourth multimedia data, and generates and plays a fifth multimedia file.
In this solution of the present application, the master electronic device can also receive a user's human voice. In this case, the first slave electronic device, the second slave electronic device, and the master electronic device can receive their respective users' voices simultaneously: the first slave electronic device receives the first human voice, the second slave electronic device receives the second human voice, and the master electronic device receives the third human voice. The first slave electronic device sends the received first human voice to the master electronic device, and the second slave electronic device sends the received second human voice to the master electronic device. The master electronic device mixes the first human voice, the second human voice, the third human voice, and the fourth multimedia data to generate and play the fifth multimedia file.
According to a second aspect, an embodiment of the present application provides a multi-terminal multimedia data communication system, including a master electronic device, a first slave electronic device, and a second slave electronic device. The master electronic device is configured to establish connections with the first slave electronic device and the second slave electronic device respectively. The first slave electronic device is configured to receive a first play instruction and to send the first play instruction to the master electronic device. The master electronic device is further configured to play first multimedia data in response to the first play instruction and, at the same time, send at least a part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously. The first slave electronic device is further configured to receive a first human voice while the second slave electronic device is configured to receive a second human voice. The master electronic device is configured to receive the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, and to mix the first human voice, the second human voice, and the first multimedia data to generate and play second multimedia data.
In this solution, the master electronic device can establish connections with multiple slave electronic devices to enable multimedia data interaction between the master electronic device and the multiple slave electronic devices. The user can send the first play instruction to the master electronic device through the first slave electronic device, and the master electronic device plays the first multimedia data in response. At the same time, the master electronic device can send at least a part of the first multimedia data to the multiple slave electronic devices, and use a synchronization algorithm to monitor, in real time, data synchronization between the master electronic device and the multiple slave electronic devices, so that they all play the first multimedia data synchronously. The multiple slave electronic devices can then receive their respective users' voices simultaneously; for example, the first slave electronic device receives the first human voice while the second slave electronic device receives the second human voice. The multiple slave electronic devices send the received voices to the master electronic device, and the master electronic device mixes the received voices with the first multimedia data, performs anti-howling and audio-mixing processing, and generates and plays the second multimedia data.
With reference to the second aspect, in a first embodiment of the second aspect, the master electronic device, the first slave electronic device, and the second slave electronic device playing the first multimedia data synchronously is specifically: the master electronic device is configured to determine a first clock deviation between the master electronic device and the first slave electronic device; the master electronic device is further configured to determine a second clock deviation between the master electronic device and the second slave electronic device; the master electronic device is further configured to determine, based on the first clock deviation, a first start playback time at which the first slave electronic device plays the first multimedia data, where the first start playback time indicates the start time at which the first slave electronic device plays the first multimedia data; and the master electronic device is further configured to determine, based on the second clock deviation, a second start playback time at which the second slave electronic device plays the first multimedia data, where the second start playback time indicates the start time at which the second slave electronic device plays the first multimedia data.
In this solution, to make the multiple terminal devices play the first multimedia data synchronously, the master electronic device determines the first clock deviation between the master electronic device and the first slave electronic device, and thereby determines the first start playback time at which the first slave electronic device plays the first multimedia data. Using the second clock deviation between the master electronic device and the second slave electronic device, the master electronic device determines the second start playback time at which the second slave electronic device plays the first multimedia data. The master electronic device can therefore adjust the start playback times of the first slave electronic device and the second slave electronic device based on the start time at which it plays the first multimedia data, the first clock deviation, and the second clock deviation, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously.
It should be noted that the clock deviation may also be determined by the slave electronic device. In this case, the master electronic device sends the start time at which it plays the first multimedia data to the first slave electronic device and the second slave electronic device. The first slave electronic device determines a first clock deviation between the master electronic device and the first slave electronic device, and determines the start time at which the first slave electronic device plays the first multimedia data based on the start time at which the master electronic device plays the first multimedia data and the first clock deviation. Similarly, the second slave electronic device determines a second clock deviation between the master electronic device and the second slave electronic device, and determines the start time at which the second slave electronic device plays the first multimedia data based on the start time at which the master electronic device plays the first multimedia data and the second clock deviation.
In yet another possible case, the master electronic device determines the clock deviations and sends its own start playback time together with the clock deviations to the slave electronic devices, and each slave electronic device determines its own start playback time. Specifically, the master electronic device determines the first clock deviation between the master electronic device and the first slave electronic device, and the second clock deviation between the master electronic device and the second slave electronic device. The master electronic device sends the start time at which it plays the first multimedia data and the first clock deviation to the first slave electronic device, and the first slave electronic device determines, based on these, the start time at which it plays the first multimedia data. Likewise, the master electronic device sends the start time at which it plays the first multimedia data and the second clock deviation to the second slave electronic device, and the second slave electronic device determines, based on these, the start time at which it plays the first multimedia data.
With reference to the second aspect or the first embodiment of the second aspect, in a second embodiment of the second aspect, the master electronic device establishing connections with the first slave electronic device and the second slave electronic device respectively includes: the master electronic device and the first slave electronic device are in the same wireless local area network; the master electronic device is configured to display a WiFi connection identifier; and the first slave electronic device is configured to establish a connection with the master electronic device by recognizing the WiFi connection identifier.
In this solution, the master electronic device and the first slave electronic device are in the same wireless local area network, that is, at the same location, so the two devices establish a connection over WiFi. The master electronic device displays a WiFi connection identifier that carries WiFi networking information, including the WiFi name, the WiFi password, and the address and port number of the master electronic device. The first slave electronic device can establish a connection with the master electronic device by recognizing this identifier.
With reference to the second aspect or the first embodiment of the second aspect, in a third embodiment of the second aspect, the master electronic device establishing connections with the first slave electronic device and the second slave electronic device respectively includes: the master electronic device and the second slave electronic device are not in the same wireless local area network; the master electronic device is configured to send networking information to the second slave electronic device; and the second slave electronic device is configured to establish a connection with the master electronic device by parsing the networking information.
In this solution, the master electronic device and the second slave electronic device may be at different geographic locations and therefore not in the same local area network. In this case, the master electronic device can send networking information to the second slave electronic device, and the second slave electronic device establishes a connection with the master electronic device by parsing the networking information. The networking information may include the IP address and port number of the master electronic device. The master electronic device and the second slave electronic device can establish the connection over WiFi and a data communication network.
With reference to the third embodiment of the second aspect, in a fourth embodiment of the second aspect, after the master electronic device mixes the first human voice, the second human voice, and the multimedia data to generate and play the second multimedia data, the system further provides that: the master electronic device is further configured to send the second multimedia data to the second slave electronic device; and the master electronic device and the second slave electronic device play the second multimedia data synchronously.
In this solution, because the second slave electronic device and the master electronic device are not at the same location, after the master electronic device receives the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, it mixes the first human voice, the second human voice, and the first multimedia data. After generating the second multimedia data, in order to share it with the second slave electronic device at the other location, the master electronic device sends the second multimedia data to the second slave electronic device so that the second slave electronic device can play it. During this process, the master electronic device needs to monitor synchronization between itself and the second slave electronic device in real time, so that the two devices play the second multimedia data synchronously.
With reference to the second aspect, in a fifth embodiment of the second aspect, at least a part of the first multimedia data includes the audio, the video, or the lyrics of the first multimedia data.
In this solution, at least a part of the first multimedia data may be any one of: the audio; the video; the lyrics; the audio and the video; the audio and the lyrics; the video and the lyrics; or the audio, the video, and the lyrics.
With reference to the second aspect, in a sixth embodiment of the second aspect, the system further provides that: the master electronic device is configured to receive a second play instruction input by the user; the master electronic device is further configured to play third multimedia data in response to the second play instruction and, at the same time, send at least a part of the third multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the third multimedia data synchronously; the first slave electronic device is configured to receive the first human voice while the second slave electronic device is configured to receive the second human voice; and the master electronic device is further configured to receive the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, and to mix the first human voice, the second human voice, and the third multimedia data to generate and play fourth multimedia data.
In this solution, the user can issue a play instruction directly to the master electronic device, and the master electronic device plays the third multimedia data in response. It should be noted that the third multimedia data may be the same as or different from the first multimedia data.
With reference to the second aspect or the sixth embodiment of the second aspect, in a seventh embodiment of the second aspect, the system further provides that: the first slave electronic device is further configured to receive the first human voice, the second slave electronic device is further configured to receive the second human voice, and the master electronic device is further configured to receive a third human voice, all at the same time; and the master electronic device is further configured to receive the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, and to mix the first human voice, the second human voice, the third human voice, and the fourth multimedia data to generate and play a fifth multimedia file.
In this solution of the present application, the master electronic device can also receive a user's human voice. In this case, the first slave electronic device, the second slave electronic device, and the master electronic device can receive their respective users' voices simultaneously: the first slave electronic device receives the first human voice, the second slave electronic device receives the second human voice, and the master electronic device receives the third human voice. The first slave electronic device sends the received first human voice to the master electronic device, and the second slave electronic device sends the received second human voice to the master electronic device. The master electronic device mixes the first human voice, the second human voice, the third human voice, and the fourth multimedia data to generate and play the fifth multimedia file.
According to another aspect, an embodiment of the present application provides a computer-readable storage medium. The computer-readable storage medium includes computer instructions that, when run on a computer, cause the computer to perform the multi-terminal multimedia data communication method in any possible implementation of the foregoing aspects.
Description of the drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings used in describing the embodiments. Clearly, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
FIG. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of this application;
FIG. 2 is a schematic diagram of a software structure of an electronic device according to an embodiment of this application;
FIG. 3 is a schematic diagram of a local application of multi-terminal multimedia data communication according to an embodiment of this application;
FIG. 4 is a schematic diagram of a remote application of multi-terminal multimedia data communication according to an embodiment of this application;
FIG. 5 is a software and hardware system architecture of a master in a multi-terminal multimedia data communication system according to an embodiment of this application;
FIG. 6 is a software and hardware system architecture of a slave in a multi-terminal multimedia data communication system according to an embodiment of this application;
FIG. 7 is a flowchart of data interaction between a master and a slave according to an embodiment of this application;
FIG. 8 is a flowchart of another data interaction between a master and a slave according to an embodiment of this application;
FIG. 9 is a flowchart of another data interaction between a master and a slave according to an embodiment of this application;
FIG. 10 shows flowcharts of another group of data interactions between a master and a slave according to an embodiment of this application;
FIG. 11 is a schematic diagram of a local application of multi-device karaoke according to an embodiment of this application;
FIG. 12A is a schematic diagram of a display interface according to an embodiment of this application;
FIG. 12B is a schematic diagram of another display interface according to an embodiment of this application;
FIG. 13 is a schematic diagram of another display interface according to an embodiment of this application;
FIG. 14 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 15 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 16 is a schematic diagram of another display interface according to an embodiment of this application;
FIG. 17 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 18 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 19 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 20 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 21 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 22 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 23 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 24 is a schematic diagram of a remote application of multi-device karaoke according to an embodiment of this application;
FIG. 25 is a schematic diagram of a display interface according to an embodiment of this application;
FIG. 26 is a schematic diagram of another display interface according to an embodiment of this application;
FIG. 27 is a schematic diagram of another display interface according to an embodiment of this application;
FIG. 28 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 29 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 30 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 31 is a schematic diagram of another group of display interfaces according to an embodiment of this application;
FIG. 32 is a schematic diagram of a multi-terminal multimedia data communication system according to an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Clearly, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
FIG. 1 is a schematic structural diagram of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, an optical proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In some other embodiments of this application, the electronic device 100 may include more or fewer components than those shown in the figure, or combine some components, or split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices, or may be integrated into one or more processors.
The controller may generate an operation control signal based on an instruction operation code and a timing signal, to complete control of instruction fetching and instruction execution.
A memory may further be disposed in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it may invoke them directly from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and therefore improves system efficiency.
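The reuse policy described here, keeping recently used entries at hand and evicting stale ones, is the behavior of a least-recently-used (LRU) cache. The following is an illustrative sketch of that policy (a simplified software model, not the processor's actual cache hardware):

```python
from collections import OrderedDict

class LruCache:
    """Minimal LRU cache illustrating how a processor cache keeps
    recently used entries close at hand and evicts the oldest ones."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None  # cache miss: would require a slow memory access
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = LruCache(capacity=2)
cache.put("instr_a", 1)
cache.put("instr_b", 2)
cache.get("instr_a")         # reuse keeps instr_a "hot"
cache.put("instr_c", 3)      # evicts instr_b, the least recently used
print(cache.get("instr_b"))  # None: instr_b was evicted
print(cache.get("instr_a"))  # 1: still cached, no slow memory access needed
```

A hit returns the entry directly; a miss models the repeated access to main memory that the cache exists to avoid.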
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
The I2C interface is a bidirectional synchronous serial bus that includes one serial data line (SDA) and one serial clock line (SCL). In some embodiments, the processor 110 may include multiple groups of I2C buses. The processor 110 may be separately coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to implement the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may include multiple groups of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement the function of answering a call through a Bluetooth headset.
The PCM interface may also be used for audio communication: sampling, quantizing, and encoding an analog signal. In some embodiments, the audio module 170 may be coupled to the wireless communication module 160 through a PCM bus interface. In some embodiments, the audio module 170 may also transmit an audio signal to the wireless communication module 160 through the PCM interface, to implement the function of answering a call through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
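To make the sampling, quantization, and encoding chain concrete, the following is an illustrative sketch of linear 16-bit PCM encoding (not part of the claimed system; the sample rate, bit depth, and test tone are chosen arbitrarily for illustration):

```python
import math

def encode_pcm16(signal_fn, duration_s, sample_rate_hz):
    """Sample an analog signal function at a fixed rate and quantize
    each sample to a signed 16-bit integer, as in linear PCM encoding."""
    samples = []
    n = int(duration_s * sample_rate_hz)
    for i in range(n):
        t = i / sample_rate_hz
        amplitude = signal_fn(t)              # analog value in [-1.0, 1.0]
        quantized = round(amplitude * 32767)  # quantize to the 16-bit range
        quantized = max(-32768, min(32767, quantized))
        samples.append(quantized)
    return samples

# A 1 kHz tone sampled at 8 kHz for one millisecond yields 8 PCM samples.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
pcm = encode_pcm16(tone, duration_s=0.001, sample_rate_hz=8000)
print(len(pcm))                                 # 8
print(all(-32768 <= s <= 32767 for s in pcm))   # True
```

The resulting integer stream is the kind of payload a PCM bus carries; a real codec would additionally handle clocking and framing in hardware.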
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus, and it converts the data to be transmitted between serial communication and parallel communication. In some embodiments, the UART interface is usually used to connect the processor 110 to the wireless communication module 160. For example, the processor 110 communicates with a Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the UART interface, to implement the function of playing music through a Bluetooth headset.
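The serial/parallel conversion mentioned above can be sketched as follows (a simplified illustration, not the claimed implementation: baud-rate timing and optional parity bits are omitted). A standard UART frame carries one start bit, eight data bits sent least-significant bit first, and one stop bit:

```python
def uart_frame(byte):
    """Serialize one parallel byte into a UART frame:
    1 start bit (0), 8 data bits LSB first, 1 stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]

def uart_deframe(bits):
    """Recover the parallel byte from a 10-bit UART frame."""
    assert bits[0] == 0 and bits[9] == 1, "invalid start/stop bit"
    byte = 0
    for i, bit in enumerate(bits[1:9]):
        byte |= bit << i
    return byte

frame = uart_frame(0x5A)
print(frame)                     # [0, 0, 1, 0, 1, 1, 0, 1, 0, 1]
print(hex(uart_deframe(frame)))  # 0x5a
```

Framing on the transmit side and deframing on the receive side are inverses, which is exactly the serial-to-parallel round trip the UART hardware performs.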
The MIPI interface may be used to connect the processor 110 to peripheral devices such as the display screen 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to implement the shooting function of the electronic device 100, and the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured through software, either as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 to the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may alternatively be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, or the like.
The USB interface 130 is an interface that complies with the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, or to transfer data between the electronic device 100 and a peripheral device. It may also be used to connect a headset to play audio through the headset, or to connect another electronic device such as an AR device.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the present invention are merely examples, and do not constitute a structural limitation on the electronic device 100. In some other embodiments of this application, the electronic device 100 may alternatively use an interface connection manner different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger, where the charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive the charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 may further supply power to the electronic device through the power management module 141.
The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may further be configured to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may alternatively be disposed in the processor 110. In still other embodiments, the power management module 141 and the charging management module 140 may alternatively be disposed in a same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover one or more communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization. For example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In some other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 may provide wireless communication solutions applied to the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive an electromagnetic wave through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic wave, and transmit the processed signal to the modem processor for demodulation. The mobile communication module 150 may further amplify a signal modulated by the modem processor, and convert it into an electromagnetic wave for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 and at least some modules of the processor 110 may be disposed in a same device.
The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a to-be-sent low-frequency baseband signal into a medium- or high-frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal, and then transmit the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A and the receiver 170B), or displays an image or a video through the display screen 194. In some embodiments, the modem processor may be an independent device. In some other embodiments, the modem processor may be independent of the processor 110 and disposed in a same device as the mobile communication module 150 or another functional module.
The wireless communication module 160 may provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (for example, a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technologies. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives an electromagnetic wave through the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signal, and sends the processed signal to the processor 110. The wireless communication module 160 may also receive a to-be-sent signal from the processor 110, perform frequency modulation and amplification on it, and convert it into an electromagnetic wave for radiation through the antenna 2.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with a network and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing that connects the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP may further perform algorithm optimization on the noise, brightness, and skin tone of the image, and may also optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 193.
The camera 193 is used to capture still images or videos. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it may also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the energy at the frequency point.
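As a rough illustration of how a Fourier transform yields the energy at a candidate frequency point (an illustrative pure-Python sketch with arbitrarily chosen frequencies; a real DSP would use an optimized FFT implementation in hardware):

```python
import cmath
import math

def dft_energy(samples, sample_rate_hz, freq_hz):
    """Energy of one frequency bin of a real signal, computed by
    correlating the samples against a complex exponential (one DFT bin)."""
    n = len(samples)
    acc = sum(samples[i] * cmath.exp(-2j * math.pi * freq_hz * i / sample_rate_hz)
              for i in range(n))
    return abs(acc) ** 2 / n

# One second of a 50 Hz tone sampled at 800 Hz: the 50 Hz bin dominates
# an unrelated bin such as 120 Hz, so 50 Hz would be the selected point.
rate = 800
samples = [math.sin(2 * math.pi * 50 * i / rate) for i in range(rate)]
print(dft_energy(samples, rate, 50) > dft_energy(samples, rate, 120))  # True
```

Comparing such per-bin energies across candidate frequencies is one way a frequency point can be selected.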
The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that the electronic device 100 can play or record videos in multiple encoding formats, for example, moving picture experts group (MPEG)1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example, the transfer mode between neurons in the human brain, it processes input information quickly and can also continuously self-learn. The NPU can implement applications such as intelligent cognition of the electronic device 100, for example, image recognition, face recognition, speech recognition, and text understanding.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system, an application required by at least one function (for example, a sound playback function or an image playback function), and the like. The data storage area may store data (for example, audio data and a phone book) created during use of the electronic device 100. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a non-volatile memory, for example, at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 executes various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121 and/or the instructions stored in the memory disposed in the processor.
The electronic device 100 may implement audio functions, for example, music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The electronic device 100 may listen to music or a hands-free call through the speaker 170A.
The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 answers a call or receives a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
The microphone 170C, also called a "mic", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user may bring the mouth close to the microphone 170C to make a sound and input the sound signal to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and the like.
The headset jack 170D is used to connect a wired headset. The headset jack 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A senses pressure signals and can convert them into electrical signals. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the intensity of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation via the pressure sensor 180A; the electronic device 100 may also calculate the touch position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is below a first pressure threshold acts on the Messages application icon, an instruction to view the message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the Messages application icon, an instruction to create a new message is executed.
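The threshold-based dispatch described in the example above can be sketched as follows. This is a minimal illustration only; the threshold value and the action names are hypothetical and not taken from the patent:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized pressure value

def dispatch_touch(icon: str, pressure: float) -> str:
    """Map a touch on an app icon to an instruction based on touch intensity."""
    if icon == "messages":
        if pressure < FIRST_PRESSURE_THRESHOLD:
            return "view_message"   # light press: view the message
        return "new_message"        # firm press: create a new message
    return "open_app"               # default action for other icons
```

The same pattern extends to any number of thresholds, with each intensity band mapped to a distinct instruction.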
The gyroscope sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 around three axes (namely, the x, y, and z axes) can be determined by the gyroscope sensor 180B. The gyroscope sensor 180B can be used for image stabilization during shooting. For example, when the shutter is pressed, the gyroscope sensor 180B detects the angle at which the electronic device 100 shakes, calculates the distance that the lens module needs to compensate based on that angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion, thereby achieving stabilization. The gyroscope sensor 180B can also be used in navigation and motion-sensing game scenarios.
The barometric pressure sensor 180C measures air pressure. In some embodiments, the electronic device 100 calculates altitude from the pressure value measured by the barometric pressure sensor 180C, to assist positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of a flip holster. In some embodiments, when the electronic device 100 is a flip phone, it can detect the opening and closing of the flip cover via the magnetic sensor 180D, and then set features such as automatic unlocking upon flip-open according to the detected open/closed state of the holster or the flip cover.
The acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally along three axes). When the electronic device 100 is stationary, it can detect the magnitude and direction of gravity. It can also be used to identify the device's posture, and is applied in landscape/portrait switching, pedometers, and similar applications.
The distance sensor 180F measures distance. The electronic device 100 can measure distance by infrared or laser. In some embodiments, in a shooting scenario, the electronic device 100 can use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared LED. The electronic device 100 emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the electronic device 100 can determine that there is an object nearby; when insufficient reflected light is detected, it can determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the device close to the ear during a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G can also be used for automatic unlocking and screen locking in holster mode and pocket mode.
The ambient light sensor 180L senses the brightness of ambient light. The electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking photos, and can cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, to prevent accidental touches.
The fingerprint sensor 180H collects fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint-triggered photographing, fingerprint-based call answering, and the like.
The temperature sensor 180J detects temperature. In some embodiments, the electronic device 100 executes a temperature processing policy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
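The three-threshold policy above can be sketched as a simple decision function. The threshold values and action names below are hypothetical placeholders for illustration; the patent specifies only that such thresholds exist:

```python
# Hypothetical threshold values for illustration only (degrees Celsius).
THROTTLE_ABOVE = 45.0   # reduce processor performance above this
HEAT_BELOW = 0.0        # heat the battery below this
BOOST_BELOW = -10.0     # boost battery output voltage below this

def temperature_policy(temp_c: float) -> list:
    """Return the thermal-protection actions for a reported temperature."""
    actions = []
    if temp_c > THROTTLE_ABOVE:
        actions.append("throttle_cpu")
    if temp_c < HEAT_BELOW:
        actions.append("heat_battery")
    if temp_c < BOOST_BELOW:
        actions.append("boost_battery_voltage")
    return actions
```

Note that the two low-temperature actions are not mutually exclusive: at a sufficiently low temperature the device may both heat the battery and boost its output voltage.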
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194; together, the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch-controlled screen". The touch sensor 180K detects touch operations acting on or near it, and can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 194. In other embodiments, the touch sensor 180K may instead be disposed on a surface of the electronic device 100, at a position different from that of the display screen 194.
The bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M can also contact the human pulse and receive the blood-pressure beat signal. In some embodiments, the bone conduction sensor 180M may also be disposed in an earphone, forming a bone conduction earphone. The audio module 170 can parse a voice signal from the vibration signal of the vocal-part bone mass acquired by the bone conduction sensor 180M, to implement a voice function. The application processor can parse heart-rate information from the blood-pressure beat signal acquired by the bone conduction sensor 180M, to implement a heart-rate detection function.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100.
The motor 191 can generate vibration prompts. The motor 191 can be used for incoming-call vibration alerts as well as touch vibration feedback. For example, touch operations applied in different applications (such as photographing or audio playback) can correspond to different vibration feedback effects. Touch operations acting on different areas of the display screen 194 can likewise correspond to different vibration feedback effects of the motor 191. Different application scenarios (for example, time reminders, receiving messages, alarm clocks, games) can also correspond to different vibration feedback effects. The touch vibration feedback effect can additionally be customized.
The indicator 192 may be an indicator light, and can be used to indicate the charging status and battery level changes, or to indicate messages, missed calls, notifications, and the like.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into or pulled out of the SIM card interface 195 to make contact with or separate from the electronic device 100. The electronic device 100 may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 can support Nano-SIM, Micro-SIM, and standard SIM cards. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the multiple cards may be of the same type or of different types. The SIM card interface 195 can also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 uses an eSIM, that is, an embedded SIM card; the eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservices architecture, or a cloud architecture. The embodiments of the present invention take an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
FIG. 2 is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present invention.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with one another through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 2, the application packages may include applications such as Camera, Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, and Messages.
The application framework layer provides application programming interfaces (APIs) and programming frameworks for the applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, content providers, a view system, a telephony manager, a resource manager, a notification manager, and so on.
The window manager is used to manage window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on.
Content providers are used to store and retrieve data and make it accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and so on.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a message notification icon may include a view for displaying text and a view for displaying pictures.
The telephony manager provides the communication functions of the electronic device 100, for example, management of call states (including connected, hung up, and so on).
The resource manager provides applications with various resources, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar. It can be used to convey informational messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to announce download completion, message reminders, and so on. The notification manager may also present notifications that appear in the status bar at the top of the system in the form of a chart or scroll-bar text, such as notifications from applications running in the background, or notifications that appear on the screen in the form of a dialog window. Examples include text prompts in the status bar, alert sounds, device vibration, and blinking indicator lights.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core libraries consist of two parts: one part is the functions that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include multiple functional modules, for example, a surface manager, media libraries, a 3D graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
The surface manager manages the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media libraries can support multiple audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following exemplarily describes the workflow of the software and hardware of the electronic device 100 in conjunction with a photo-capturing scenario.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and the timestamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the case where the touch operation is a tap and the control corresponding to the tap is the camera application icon as an example: the camera application calls the interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 193.
The embodiments of this application provide a multi-terminal multimedia data communication method and system, which can use wireless connections (such as Wi-Fi, Bluetooth, or mobile communication systems) to achieve low-latency, synchronized multipoint transmission of multimedia data streams among multiple terminal devices. The types of mobile terminal devices in the embodiments of this application are not limited; they may be electronic devices such as mobile phones, portable computers, personal digital assistants (PDAs), tablet computers, smart TVs, smart speakers, PCs, smart home devices, wireless terminal devices, and communication devices. The multi-terminal multimedia data communication method in the embodiments of this application can be used in application scenarios such as multi-device playback, multi-device karaoke, multi-device calls, and multi-device recording. The multimedia applications based on this method, such as multi-device playback, karaoke, calls, and recording, include both local and remote modes. The embodiments of this application impose no special requirements on the device type or the number of networked devices; any smart device with WiFi networking capability can be supported.
Exemplarily, FIG. 3 shows a schematic diagram of a local application of multi-terminal multimedia data communication. As shown in FIG. 3, three slaves are connected to one master to implement multimedia sharing. The three slaves in this embodiment are only illustrative; in actual use, four or more slaves may be connected to the master. The multiple slaves can collect the voice information of their respective users: for example, microphone slave 1 can collect the voice of user 1, microphone slave 2 the voice of user 2, and microphone slave 3 the voice of user 3. To implement multimedia data sharing, the master and the slaves first need to form a network. When networking for a local application, the master's interface can display a QR code providing the connection parameters, or the master can provide the connection parameters by starting NFC or another short-range communication function (such as WiFi or Bluetooth) to initiate networking. The QR code may carry the WiFi name, the WiFi password, and the address and port number of the master electronic device. A slave obtains the connection parameters by scanning the QR code, by NFC touch, or by another short-range communication method, joins the network, and establishes a connection.
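The connection-parameter exchange above can be sketched as follows. The patent names the fields the QR code carries (WiFi name, WiFi password, master address and port) but does not specify a concrete encoding, so a JSON payload is assumed here purely for illustration:

```python
import json

def make_qr_payload(ssid: str, password: str, host: str, port: int) -> str:
    """Master side: serialize the connection parameters carried by the QR code."""
    return json.dumps({"ssid": ssid, "pwd": password,
                       "host": host, "port": port})

def parse_qr_payload(payload: str) -> tuple:
    """Slave side: extract the master's address and port in order to connect."""
    params = json.loads(payload)
    return params["host"], params["port"]
```

A slave would additionally use the `ssid`/`pwd` fields to join the master's WiFi network before opening a connection to the given address and port.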
Smartphones, smart TVs, and smart speakers can all serve as the master. In a collaborative karaoke application, the master downloads the karaoke accompaniment and video from the cloud, mixes the vocals sent from each slave with the accompaniment, eliminates howling, and plays the result. In a multi-person call or multi-person conference speech application, the master can download background music from the cloud, mix the vocals sent from each slave with the background music, eliminate howling, and play the result.
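The mixing step on the master (summing each slave's vocal track with the accompaniment) can be sketched minimally as follows. The howling-elimination step is omitted, and the sample format (normalized floats in [-1, 1]) is an assumption for illustration:

```python
def mix(vocals: list, accompaniment: list) -> list:
    """Sum per-sample vocal tracks with the accompaniment, clamped to [-1, 1].

    vocals: a list of tracks, one per slave, each the same length as
    the accompaniment. Clamping is a crude stand-in for real limiting.
    """
    mixed = []
    for i, acc in enumerate(accompaniment):
        s = acc + sum(track[i] for track in vocals)
        mixed.append(max(-1.0, min(1.0, s)))
    return mixed
```

A production mixer would resample and align the tracks first and apply proper gain control rather than hard clamping.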
Mobile terminals such as smartphones, tablets, smart watches, and wearable devices can serve as slaves. In a collaborative karaoke application, a slave can act as a microphone, picking up the user's singing or speech and sending it to the master via WiFi or another short-range communication method. On the other hand, the slave also receives the downlink data stream from the master, so as to display the karaoke video and lyrics. In a possible implementation, besides displaying the karaoke video and lyrics, the slave can also play the mixed audio of vocals and accompaniment sent by the master; this function is enabled by default when a headset is connected to the slave.
Both the master and the slaves can be connected to various peripherals, such as speakers (wired or Bluetooth) or amplifying equipment for amplifying the sound; headsets can also be connected.
Exemplarily, FIG. 4 shows a schematic diagram of a remote application of multi-terminal multimedia data communication. As shown in FIG. 4, four slaves are connected to one master to implement multimedia sharing. The master, user 1, and user 2 are at location A, user 3 is at location B, and user 4 is at location C; the master and the slaves may thus be in the same or different regions. To implement multimedia file sharing, the master and the slaves first need to form a network. A local slave can connect to the master via WiFi, Bluetooth, or NFC: the master can display a QR code providing the connection parameters, or provide the connection parameters by starting a short-range communication function such as NFC, to initiate networking; the local slave obtains the networking information by scanning the QR code, by NFC, or by another short-range communication method, and joins the network. A remote slave can connect to the master via WiFi or a data communication network (2G/3G/4G/5G): the master can send the QR code, or the networking information, to the remote slave, with the QR code or networking information carrying the master's IP address and port number; the remote slave joins the network by recognizing the QR code or parsing the networking information, thereby connecting to the master.
Smartphones, smart TVs, and smart speakers can all serve as the master. The master downloads the karaoke accompaniment and video, or the background sound, from the cloud, mixes the vocals sent from each slave with the accompaniment or background sound, eliminates howling, and plays the result locally. At the same time, the master sends the mixed audio and video streams to local slaves via WiFi or another short-range communication method, and sends them to remote slaves via WiFi or the data communication network.
A mobile device acting as a slave can serve as a microphone, picking up the user's voice information and sending it to the master. On the other hand, the slave also receives the downlink data stream from the master, such as the karaoke video, the lyrics, or the accompaniment during a conference speech.
A local microphone slave can act as a microphone, picking up the user's singing or speech and sending it to the master via WiFi or another short-range communication method. On the other hand, it receives the downlink data stream from the master, so as to display the karaoke video and lyrics.
A remote microphone slave can act as a microphone, picking up the user's voice information and sending it to the master through the data communication network. On the other hand, it receives the downlink data stream from the master, so as to play audio containing the accompaniment along with its own and other users' voices, display the karaoke video and lyrics, or, during calls and conferences, let its user hear the other users' speech.
Both the master and the slaves can be connected to various peripherals, such as speakers (wired or Bluetooth) or amplifying equipment for amplifying the sound; headsets can also be connected.
To ensure a good user experience, the master needs to control the synchronization of audio and video playback between itself and each slave. The master determines the start playback time of each slave by determining the clock offset between the master and that slave. For example, by determining a first clock offset between the master and slave 1, the master determines a first start playback time for slave 1; this first start playback time indicates the moment at which slave 1 starts playing the multimedia data. Through a second clock offset between the master and slave 2, the master determines a second start playback time for slave 2; this second start playback time indicates the moment at which slave 2 starts playing the multimedia data. The master can therefore use the moment at which it starts playing the multimedia data, together with the first clock offset and the second clock offset, to adjust the start playback times of slave 1 and slave 2, so that the master, slave 1, and slave 2 play the multimedia data synchronously.
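The offset-based alignment described above can be sketched as follows. The patent does not specify how the clock offset is measured; an NTP-style round-trip exchange is assumed here for illustration, after which a slave's start playback time is simply the master's start time expressed in the slave's clock:

```python
def estimate_offset(t_send: float, t_slave_rx: float,
                    t_slave_tx: float, t_recv: float) -> float:
    """NTP-style estimate of (slave clock - master clock).

    t_send and t_recv are master timestamps taken when the probe is sent
    and its reply received; t_slave_rx and t_slave_tx are the slave's
    receive and reply timestamps. Symmetric network delay is assumed.
    """
    return ((t_slave_rx - t_send) + (t_slave_tx - t_recv)) / 2.0

def slave_start_time(master_start: float, offset: float) -> float:
    """Convert the master's start playback time into the slave's clock."""
    return master_start + offset
```

With one such offset per slave, the master (or each slave, in the variants below) can schedule every device to begin playback at the same physical instant.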
需要说明的是,时钟偏差也可以由从机确定。此时主机将自己播放多媒体数据的起始时刻发送给从机1和从机2。从机1确定主机和从机1之间的第一时钟偏差,并根据主机播放多媒体数据的起始时刻和第一时钟偏差确定从机1播放多媒体数据的起始时刻。同样的,从机2确定主机和从机2之间的第二时钟偏差,并根据主机播放多媒体数据的起始时刻和第二时钟偏差确定从机2播放多媒体数据的起始时刻。It should be noted that the clock deviation can also be determined by the slave. At this time, the host sends the starting time of playing multimedia data to the slave 1 and the slave 2. The slave 1 determines the first clock deviation between the master and the slave 1, and determines the start time of the multimedia data played by the slave 1 according to the start time of the master playing multimedia data and the first clock deviation. Similarly, the slave 2 determines the second clock deviation between the master and the slave 2, and determines the start time of the multimedia data played by the slave 2 according to the start time of the master playing multimedia data and the second clock deviation.
再一种可能的情况,主机确定时钟偏差,并将自己的起始播放时刻和时钟偏差发送给从机,由从机确定起始播放时刻。主机确定主机和从机1之间的第一时钟偏差;主机确定主机和从机2之间的第二时钟偏差;主机将主机播放多媒体数据的起始时刻和第一时钟偏差发送给从机1,从机1根据主机播放多媒体数据的起始时刻和第一时钟偏差确定从机1播放多媒体数据的起始时刻;主机将主机播放多媒体数据的起始时刻和第二时钟偏差发送给从机2,从机2根据主机播放多媒体数据的起始时刻和第二时钟偏差确定从机2播放多媒体数据的起始时刻。In yet another possible case, the master determines the clock deviations and sends its own playback start time together with the corresponding clock deviation to each slave, which then determines its own start-of-playback time. The master determines the first clock deviation between the master and slave 1, and the second clock deviation between the master and slave 2. The master sends its playback start time and the first clock deviation to slave 1, which determines its own playback start time from them; the master likewise sends its playback start time and the second clock deviation to slave 2, which determines its own playback start time from them.
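In all three cases the computation reduces to shifting the master's start instant by the measured per-slave clock deviation. The sketch below illustrates this mapping; the sign convention for the deviation and the millisecond timestamps are assumptions for illustration, not values fixed by this application:

```python
def slave_start_time(master_start_ms: int, clock_deviation_ms: int) -> int:
    """Map the master's start-of-playback instant onto a slave's local clock.

    clock_deviation_ms is assumed here to be (slave clock - master clock);
    the text does not fix a sign convention.
    """
    return master_start_ms + clock_deviation_ms

# Master schedules playback at t = 100000 ms on its own clock.
master_start = 100_000
# Deviations measured for slave 1 and slave 2 (e.g. via a timestamp exchange).
start_1 = slave_start_time(master_start, clock_deviation_ms=35)   # slave 1 runs 35 ms ahead
start_2 = slave_start_time(master_start, clock_deviation_ms=-12)  # slave 2 runs 12 ms behind
```

Whether the master or the slave evaluates this expression only changes where the deviation is measured; the resulting per-slave start instants are the same.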
接下来对本申请提供的一种多终端的多媒体数据通信系统的软硬件系统结构进行介绍。多机播放、多机K歌,多机通话,多机录音等应用均可以通过本申请提供的多终端的多媒体数据通信系统的软硬件系统结构实现。本申请实施例实现多终端协同播放应用系统的软硬件系统架构是统一的,所以一台特定设备既可以作为主机发起组网,也可以作为从机加入组网。作为主机或者作为从机时,区别仅是特定的功能模块是否打开。Next, the software and hardware system structure of the multi-terminal multimedia data communication system provided by this application is introduced. Applications such as multi-machine playback, multi-machine K song, multi-machine calls and multi-machine recording can all be implemented on this software and hardware structure. In the embodiments of this application, the software and hardware architecture of the multi-terminal cooperative playback application system is unified, so a given device can either initiate networking as a master or join the network as a slave; the only difference between the two roles is which function modules are enabled.
示例性的,图5示出了多终端的多媒体数据通信系统主机的软硬件系统架构500的示意图。主机系统架构500可以包括多机协同从机数据接收模块510,多机协同下行数据接收模块520,音频解码模块530,抗啸叫模块540,混音模块550,音效处理模块551,协同音频输出控制模块560,多机协同数据发送模块570,蓝牙协议栈571,USB接口572,录音算法模块573,Codec574,显示接口575,显示器580,协同APP581,Modem接口590,WiFi协议栈/芯片591,蓝牙芯片/天线592,TypeC数字接口593,3.5mm耳机座/TypeC模拟接口594,听筒或扬声器595,MIC阵列596。其中协同音频输出控制模块560可以包括采样率与位宽转换560A,音量控制560B,声道选择560C,声道合并560D等。其中协同APP可以包括多机播放581A,多机K歌581B,多机通话581C,多机录音581D等。Exemplarily, FIG. 5 shows a schematic diagram of the software and hardware system architecture 500 of the host of the multi-terminal multimedia data communication system. The host system architecture 500 may include a multi-machine cooperative slave data receiving module 510, a multi-machine cooperative downlink data receiving module 520, an audio decoding module 530, an anti-howling module 540, a sound mixing module 550, a sound effect processing module 551, a cooperative audio output control module 560, a multi-machine cooperative data sending module 570, a Bluetooth protocol stack 571, a USB interface 572, a recording algorithm module 573, a codec 574, a display interface 575, a display 580, a cooperative APP 581, a modem interface 590, a WiFi protocol stack/chip 591, a Bluetooth chip/antenna 592, a TypeC digital interface 593, a 3.5mm headphone jack/TypeC analog interface 594, an earpiece or speaker 595, and a MIC array 596. The cooperative audio output control module 560 may include sampling rate and bit width conversion 560A, volume control 560B, channel selection 560C, channel merging 560D, and so on. The cooperative APP may include multi-machine playback 581A, multi-machine K song 581B, multi-machine calls 581C, multi-machine recording 581D, etc.
可以理解的是,本申请实施例示意的结构并不构成对多终端的多媒体数据通信系统主机的软硬件系统架构500的具体限定。在本申请另一些实施例中,多终端的多媒体数据通信系统主机的软硬件系统架构500可以包括比图示更多或更少的模块或部件。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the hardware and software system architecture 500 of the multi-terminal multimedia data communication system host. In other embodiments of the present application, the hardware and software system architecture 500 of the multi-terminal multimedia data communication system host may include more or fewer modules or components than shown. The illustrated components can be implemented in hardware, software, or a combination of software and hardware.
具体的,各模块之间的功能如下:Specifically, the functions between the modules are as follows:
多机协同从机数据接收模块510用于以极低时延接收各从机发送过来的上行数据。The multi-machine cooperative slave data receiving module 510 is used to receive, with very low latency, the uplink data sent by each slave.
下行数据接收模块520用于以极低时延从云端接收K歌伴奏和画面视频,以及读取本地存储中的K歌伴奏和画面视频。The downlink data receiving module 520 is configured to receive K song accompaniment and screen video from the cloud with extremely low latency, and read K song accompaniment and screen video in local storage.
音频解码模块530用于对音频数据流进行解码。如果是声道编码格式,则输出各个声道的PCM码流。如果是音频对象编码格式,则输出音频对象信息和声道PCM码流。The audio decoding module 530 is used to decode the audio data stream. If it is a channel encoding format, output the PCM code stream of each channel. If it is an audio object encoding format, output audio object information and channel PCM code stream.
抗啸叫模块540用于消除混进主机MIC和从机MIC的回声,避免啸叫,抗啸叫参考信号由音频解码模块提供。The anti-howling module 540 is used to eliminate the echo mixed into the host MIC and the slave MIC and avoid howling. The anti-howling reference signal is provided by the audio decoding module.
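A core operation behind the anti-howling module is adaptive echo cancellation: the decoded accompaniment serves as the reference, and its echo is estimated and subtracted from the MIC signal. The minimal LMS sketch below illustrates the idea only; a production anti-howling module would additionally need delay alignment, double-talk detection and howling suppression, none of which are shown, and all parameter values are illustrative assumptions:

```python
import math

def lms_echo_cancel(mic, ref, taps=4, mu=0.05):
    """Toy LMS adaptive filter: estimate the accompaniment echo contained in
    the MIC signal from the reference signal and subtract it, leaving the
    residual (approximately the clean vocal)."""
    w = [0.0] * taps                      # adaptive filter weights
    residual = []
    for n in range(len(mic)):
        # Most recent `taps` reference samples (zero-padded at the start).
        x = [ref[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        echo_est = sum(wi * xi for wi, xi in zip(w, x))
        e = mic[n] - echo_est             # error = de-echoed output
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
        residual.append(e)
    return residual

# Demo: the MIC picks up only an attenuated copy of the accompaniment.
accompaniment = [math.sin(0.1 * n) for n in range(2000)]
mic = [0.5 * s for s in accompaniment]    # echo only, no vocal
clean = lms_echo_cancel(mic, accompaniment)
```

After convergence the residual tends toward zero here because the MIC contains only echo; with a vocal present, the vocal remains in the residual.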
混音模块550用于将上行人声与伴奏音混合。The sound mixing module 550 is used to mix the uplink vocal with the accompaniment sound.
音效处理551模块用于渲染效果,如混响和3D环绕听感等。The sound effect processing module 551 is used for rendering effects, such as reverberation and 3D surround sound.
需要说明的是,通过混音模块550处理后的声道PCM码流可以直接传送到协同音频输出控制模块560中,此外,用户可以选择对混音处理后的声道PCM码流进行进一步的音效处理551,得到渲染后的声道PCM码流再传出到协同音频输出控制模块中。It should be noted that the channel PCM stream processed by the mixing module 550 can be transmitted directly to the cooperative audio output control module 560. Alternatively, the user may choose to apply further sound effect processing 551 to the mixed channel PCM stream, and the rendered channel PCM stream is then passed on to the cooperative audio output control module.
协同音频输出控制模块560具备多个输入端,可以接收多个模块的输入如:多机协同下行数据接收模块520,混音模块550和音效处理模块551。同时,其内部具备以下低时延子功能模块:The cooperative audio output control module 560 has multiple input terminals, and can receive inputs from multiple modules, such as a multi-machine cooperative downlink data receiving module 520, a sound mixing module 550, and a sound effect processing module 551. At the same time, it has the following low-latency sub-function modules inside:
采样率与位宽转换560A:转换采样率,使采样率和位宽满足输出的要求;Sampling rate and bit width conversion 560A: Convert the sampling rate so that the sampling rate and bit width meet the output requirements;
音量控制560B:用于控制输出的音量;Volume control 560B: used to control the output volume;
声道选择560C:用于选择输出哪个声道;Channel selection 560C: used to select which channel to output;
声道合并560D:用于将各个声道合并成一个声道播放以增大音量。Channel merging 560D: used to merge all channels into a single channel for playback to increase the volume.
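The low-latency sub-functions above are simple per-sample PCM transforms. The sketch below illustrates two of them, volume control (560B) and channel merging (560D), on 16-bit PCM represented as plain integer lists; the gain value and int16 clamping are illustrative assumptions:

```python
INT16_MIN, INT16_MAX = -32768, 32767

def apply_volume(pcm, gain):
    """Volume control (560B): scale each 16-bit sample, clamping to int16."""
    return [max(INT16_MIN, min(INT16_MAX, int(s * gain))) for s in pcm]

def merge_channels(channels):
    """Channel merging (560D): average the channels sample-by-sample
    into a single mono channel."""
    return [sum(frame) // len(frame) for frame in zip(*channels)]

left = [1000, -2000, 3000]
right = [3000, 2000, -1000]
mono = merge_channels([left, right])   # [2000, 0, 1000]
louder = apply_volume(mono, 2.0)       # [4000, 0, 2000]
```

Averaging rather than straight summation keeps the merged channel inside the int16 range before any gain is applied; the clamp in `apply_volume` then guards against overflow from the gain itself.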
协同音频输出控制模块560还能够将音频流输入到多机协同数据发送模块570,蓝牙协议栈571,USB接口572,录音算法573和Codec574。此外,为了保持媒体图像和声音的同步,协同音频输出控制模块560将时延值发送给显示接口575,以便能够对齐时延,实现伴奏和歌词同步。The cooperative audio output control module 560 can also feed audio streams to the multi-machine cooperative data sending module 570, the Bluetooth protocol stack 571, the USB interface 572, the recording algorithm 573 and the codec 574. In addition, to keep the media picture and sound synchronized, the cooperative audio output control module 560 sends the delay value to the display interface 575 so that the delay can be aligned and the accompaniment and lyrics kept in sync.
协同APP581用于在屏幕上显示协同功能和接收用户输入。用户通过协同APP的操作界面控制多机协同数据发送570和协同音频输出控制560模块。The collaboration APP 581 is used to display collaboration functions on the screen and receive user input. The user controls the multi-machine coordinated data transmission 570 and coordinated audio output control 560 modules through the operation interface of the cooperative APP.
多机协同数据发送模块570与Modem接口590和WiFi协议栈/芯片591对接,能将音频流、视频流以极低时延发送至各从机。The multi-machine cooperative data sending module 570 is docked with the Modem interface 590 and the WiFi protocol stack/chip 591, and can send audio streams and video streams to each slave with extremely low delay.
在另一些实施例中,还可以通过蓝牙协议栈571与蓝牙芯片/天线592对接,USB接口572与TypeC(数字)593对接将音频流、视频流发送至各从机。In other embodiments, the Bluetooth protocol stack 571 may also interface with the Bluetooth chip/antenna 592, and the USB interface 572 with the TypeC (digital) interface 593, to send the audio and video streams to each slave.
Codec574与听筒或扬声器595和3.5mm耳机座/TypeC(模拟)594对接,用于将经过混音550或经过音效551处理后的PCM码流进行播放。在一些实施例中,主机也可以作为话筒,如智能手机作为主机时,同时可作为话筒接收用户歌声,此时Codec574与MIC阵列596对接,用于接收主机话筒接收的用户输入。The codec 574 interfaces with the earpiece or speaker 595 and the 3.5mm headphone jack/TypeC (analog) interface 594, and is used to play the PCM stream processed by the mixing module 550 or the sound effect processing 551. In some embodiments, the host can also act as a microphone; for example, when a smartphone serves as the host, it can simultaneously act as a microphone to pick up the user's singing. In this case the codec 574 interfaces with the MIC array 596 to receive the user input captured by the host microphone.
示例性的,图6示出了多终端的多媒体数据通信系统从机的软硬件系统架构600的示意图。Exemplarily, FIG. 6 shows a schematic diagram of a software and hardware system architecture 600 of a slave of a multi-terminal multimedia data communication system.
其中,多机协同主机数据接收模块用于以极低时延接收主机发送出来的媒体数据流。Among them, the multi-machine cooperative host data receiving module is used to receive, with very low latency, the media data stream sent by the host.
从机MIC610连接Codec620(或其他接口芯片),通过协同音频输出控制进入多机协同数据发送模块从而发送给主机。The slave MIC610 is connected to the Codec620 (or other interface chip), and enters the multi-machine cooperative data sending module through cooperative audio output control to send to the host.
为便于理解,以协同K歌为例对本实施例提出的一种多终端的多媒体数据通信系统的主机从机的数据交互流程进行介绍,协同K歌应用中主机可以是智能电视或智能手机,从机为智能手机,主机和从机均可连接外设音箱用于扩音,从机也可连接耳机。因此协同K歌可以分为四种模式:TV(主机)+多个本地从机、TV(主机)+多个本地和异地从机、智能手机(作为主机和话筒)+多个本地从机、智能手机(作为主机和话筒)+多个本地和异地从机。For ease of understanding, collaborative K song is taken as an example to introduce the master-slave data interaction flow of the multi-terminal multimedia data communication system proposed in this embodiment. In the collaborative K song application, the master can be a smart TV or a smartphone, and the slaves are smartphones; both master and slaves can connect external speakers for amplification, and slaves can also connect earphones. Collaborative K song can therefore be divided into four modes: TV (master) + multiple local slaves; TV (master) + multiple local and remote slaves; smartphone (as master and microphone) + multiple local slaves; smartphone (as master and microphone) + multiple local and remote slaves.
示例性的,图7示出了智能电视(TV)作为主机,多个智能手机作为从机时,主机与从机的数据交互流程。Exemplarily, FIG. 7 shows a data interaction process between the host and the slave when a smart TV (TV) is used as a master and multiple smart phones are used as slaves.
步骤701:从机选择歌曲,将歌曲信息通过WiFi或蓝牙等短距通信方式传递给主机。Step 701: The slave selects a song, and the song information is transmitted to the host via a short-distance communication method such as WiFi or Bluetooth.
步骤702:多机协同下行数据接收模块,以极低时延从云端接收K歌伴奏和画面视频,以及读取本地存储中的K歌伴奏和画面视频,并将音频数据流发送给音频解码模块,将能够直接播放的音频数据流发送给协同音频输出控制模块,将视频数据流发送给显示接口。Step 702: The multi-machine cooperative downlink data receiving module receives the K song accompaniment and picture video from the cloud with very low latency, or reads the K song accompaniment and picture video from local storage. It sends the audio data stream to the audio decoding module, sends any audio stream that can be played directly to the cooperative audio output control module, and sends the video data stream to the display interface.
步骤703:主机将K歌画面和歌词通过多机协同数据发送模块发送给从机,从机端获取到歌曲的歌词和视频等信息。Step 703: The host sends the K song picture and lyrics to the slaves through the multi-machine cooperative data sending module, and the slaves obtain the lyrics, video and other information of the song.
步骤704:音频解码模块对音频数据流进行解码,显示接口将视频数据流发送给显示器进行显示。通过音频解码模块处理后的伴奏音频作为抗啸叫参考信号发送到抗啸叫模块,同时发送给混音模块和音效处理模块。Step 704: The audio decoding module decodes the audio data stream, and the display interface sends the video data stream to the display. The accompaniment audio processed by the audio decoding module is sent to the anti-howling module as the anti-howling reference signal, and at the same time to the mixing module and the sound effect processing module.
步骤705:用户进行演唱,从机MIC获取到用户的声音信息,将歌声传到多机协同从机数据接收模块。Step 705: The user sings, and the slave MIC obtains the user's voice information, and transmits the singing voice to the multi-machine coordinated slave data receiving module.
步骤706:多机协同从机数据接收模块,通过WiFi或蓝牙等短距通信方式将从机人声的MIC数据流发送给主机。由于通过从机MIC接收的人声中掺杂着电视伴奏音,因此需要抗啸叫模块进行抗啸叫处理。Step 706: The multi-machine coordinated slave data receiving module sends the MIC data stream of the slave human voice to the host through a short-distance communication method such as WiFi or Bluetooth. Since the human voice received by the slave MIC is mixed with the TV accompaniment sound, the anti-howling module is needed for anti-howling processing.
步骤707:进行抗啸叫处理,用于消除混进从机MIC的回声,避免啸叫。Step 707: Perform anti-howling processing to eliminate the echo mixed into the MIC of the slave and avoid howling.
步骤708:混音处理,通过抗啸叫处理后得到了干净人声,再通过混音模块将干净人声和伴奏音进行混合。混音处理后的声道PCM码流发送给协同音频输出控制模块。Step 708: Mixing. Anti-howling processing yields a clean vocal, which the mixing module then mixes with the accompaniment sound. The mixed channel PCM stream is sent to the cooperative audio output control module.
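The mixing of step 708 can be sketched as a per-sample weighted sum of the de-echoed vocal and the accompaniment, clamped to the 16-bit PCM range. The gain values below are illustrative assumptions, not values from this application:

```python
INT16_MIN, INT16_MAX = -32768, 32767

def mix(vocal, accompaniment, vocal_gain=1.0, acc_gain=0.6):
    """Mixing module (550): weighted per-sample sum of the clean vocal and
    the accompaniment, clamped to int16 to avoid wrap-around distortion."""
    return [
        max(INT16_MIN, min(INT16_MAX, int(v * vocal_gain + a * acc_gain)))
        for v, a in zip(vocal, accompaniment)
    ]

mixed = mix([1000, 32000], [1000, 32000])  # second frame clamps at 32767
```

Clamping (rather than letting the sum wrap around) is what keeps a loud vocal plus loud accompaniment from turning into harsh wrap-around noise.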
可选的,步骤709:用户可以对混音后的声道PCM码流进行渲染,产生混响和3D环绕听感等。Optionally, step 709: the user can render the mixed sound channel PCM code stream to generate reverberation and 3D surround sound.
步骤710:协同音频输出控制模块可对经过混音处理后的声道PCM码流或经过音效处理后的声道PCM码流进行采样率与位宽转换、音量控制、声道选择或声道合并等处理,处理后的声道PCM码流发送给Codec。此外,为了保持歌词和伴奏音乐的同步,"协同音频输出控制"将时延值发送给"显示接口",以判断歌词和音乐的相对延迟并进行调整。Step 710: The cooperative audio output control module can apply sampling rate and bit width conversion, volume control, channel selection, channel merging and similar processing to the mixed channel PCM stream or to the sound-effect-processed channel PCM stream; the processed channel PCM stream is then sent to the codec. In addition, to keep the lyrics and accompaniment music synchronized, the "cooperative audio output control" sends the delay value to the "display interface" so that the relative delay between lyrics and music can be determined and adjusted.
步骤711:通过主机听筒或扬声器对用户演唱的歌声进行播放。Step 711: Play the singing voice sung by the user through the host receiver or speaker.
再示例性的,图8示出智能电视作为主机,移动设备作为本地从机和异地从机时,主机与从机的交互流程。As another example, FIG. 8 shows the interaction flow between the master and the slaves when the smart TV serves as the master and mobile devices serve as the local slave and the remote slaves.
与图7不同的是,当有异地从机加入K歌时:The difference from Figure 7 is that when a remote slave joins the K song:
步骤801:主机和从机都可以选择歌曲,以本地从机选择歌曲为例。Step 801: Both the master and the slave can select songs. Take the local slave selection of songs as an example.
步骤802:主机通过WiFi或数据通信网(2G/3G/4G/5G)将K歌画面和歌词发送给异地从机,异地从机端获取到歌曲的歌词和视频等信息。Step 802: The host sends the K song picture and lyrics to the remote slaves via WiFi or a cellular data network (2G/3G/4G/5G), and the remote slaves obtain the lyrics, video and other information of the song.
步骤803:多机协同从机数据接收模块,通过WiFi或数据通信网(4G/5G)将异地从机人声的MIC数据流发送给主机,进行抗啸叫处理。Step 803: The multi-machine coordinated slave data receiving module sends the MIC data stream of the remote slave human voice to the master via WiFi or a data communication network (4G/5G) for anti-howling processing.
步骤804:主机的多机协同数据发送模块,将处理后的PCM码流发送给异地从机。Step 804: The multi-machine coordinated data sending module of the host sends the processed PCM code stream to the remote slave.
步骤805:异地从机听筒或扬声器对主机传过来的伴奏音进行播放,同时对本地用户和异地用户演唱的歌声进行播放。Step 805: The remote slave handset or speaker plays the accompaniment sound transmitted from the host, and simultaneously plays the singing voices sung by the local user and the remote user.
示例性的,如图9所示为移动设备既作为主机又作为话筒,多个移动设备作为本地从机时,主机与从机的交互流程。Exemplarily, FIG. 9 shows the interaction flow between the master and the slaves when one mobile device serves both as the master and as a microphone, and multiple mobile devices serve as local slaves.
与图7的区别在于,此时一个手机既作为主机又作为话筒:The difference from Figure 7 is that here a single mobile phone serves both as the master and as a microphone:
步骤901:主机MIC接收用户的声音信息。Step 901: The host MIC receives the user's voice information.
步骤902:通过主机Codec将主机MIC数据流发送到抗啸叫处理模块。Step 902: Send the host MIC data stream to the anti-howling processing module through the host Codec.
步骤903:由于手机扬声器声音比较小,当手机作为主机的时候,可以通过手机连接外接音箱等外放设备,扩大主机声音同时营造KTV效果。Step 903: Since the speaker of a mobile phone is relatively quiet, when the phone acts as the master it can be connected to external playback devices such as speakers to amplify the master's sound and create a KTV effect.
示例性的,如图10所示为移动设备既作为主机又作为话筒,多个移动设备作为本地从机和异地从机时,主机与从机的交互流程,此流程可由图8和图9结合得到。Exemplarily, FIG. 10 shows the interaction flow between the master and the slaves when one mobile device serves both as the master and as a microphone, and multiple mobile devices serve as local and remote slaves; this flow can be obtained by combining Figure 8 and Figure 9.
接下来,对多机K歌的本地应用的UI操作界面和操作流程进行具体阐述。示例性的,以智能电视作为主机,多个智能手机作为从机为例进行介绍。需要说明的是从机的个数可以是多个,本实施例以两个从机为例进行说明。Next, the UI operation interface and operation flow of the local application of multi-machine K song are explained in detail. Exemplarily, a smart TV is used as the master and multiple smart phones are used as slaves as an example for introduction. It should be noted that the number of slaves can be multiple, and this embodiment takes two slaves as an example for description.
图11示出了一种多机K歌的本地应用系统,包括主机智能电视,话筒从机1和话筒从机2,两个从机均为智能手机,此时主机和两个从机处于同一局域网内。Figure 11 shows a local application system for multi-machine K song, including a master smart TV, microphone slave 1 and microphone slave 2. Both slaves are smartphones, and the master and the two slaves are in the same local area network.
图12A示出了一种电视端图形用户界面(graphical user interface,GUI),该GUI为电视的网络配置界面1201。图12B示出一种手机端图形用户界面,该GUI为手机的无线局域网配置界面1202。如图12A和图12B所示,此时主机电视与两个本地手机从机均连接同一局域网。FIG. 12A shows a graphical user interface (GUI) on the TV side, and the GUI is the network configuration interface 1201 of the TV. FIG. 12B shows a graphical user interface of a mobile phone, and the GUI is a wireless local area network configuration interface 1202 of the mobile phone. As shown in Figure 12A and Figure 12B, at this time, the host TV and the two local mobile phone slaves are both connected to the same local area network.
示例性的,图13示出电视的一种GUI,该GUI可以称为电视桌面1301,当电视检测到用户控制遥控器选中电视桌面1301上的K歌应用(application,APP)的图标1302的操作后,可以启动电视版K歌应用,显示如图14所示的另一GUI,该GUI可以称为推荐界面1401。该推荐界面1401上可以包括有用于进入点歌界面的点歌台控件1402。当电视检测到用户控制遥控器点击推荐界面1401上的点歌台控件1402的操作后,显示如图15所示GUI,该GUI可以称为点歌台界面1501。该点歌台界面1501上可以包括有用于进入手机扫码点歌界面的控件1502。当电视检测到用户控制遥控器点击点歌台界面1501上的控件1502的操作后,显示如图16所示另一GUI,该GUI可以称为手机扫码点歌界面1601,该手机扫码点歌界面1601上可以包括有点歌二维码1602。通过手机端K歌软件扫描识别手机扫码点歌界面1601上的点歌二维码1602,即可通过手机端K歌软件进行点歌,同时手机可以作为话筒接收用户的声音信息。对于本地的多个手机均可以通过扫描点歌二维码1602加入K歌,因此通过电视和手机便可构造KTV效果。Exemplarily, FIG. 13 shows a GUI of the TV, which may be referred to as the TV desktop 1301. When the TV detects that the user, via the remote control, selects the icon 1302 of the K song application (APP) on the TV desktop 1301, the TV version of the K song application is launched and another GUI, shown in FIG. 14, is displayed; this GUI may be referred to as the recommendation interface 1401. The recommendation interface 1401 may include a karaoke station control 1402 for entering the song-ordering interface. When the TV detects that the user clicks the karaoke station control 1402 on the recommendation interface 1401 with the remote control, the GUI shown in FIG. 15 is displayed, which may be referred to as the karaoke station interface 1501. The karaoke station interface 1501 may include a control 1502 for entering the mobile phone code-scanning song-ordering interface. When the TV detects that the user clicks the control 1502 on the karaoke station interface 1501 with the remote control, another GUI shown in FIG. 16 is displayed, which may be referred to as the mobile phone code-scanning song-ordering interface 1601; this interface may include a song-ordering QR code 1602. By scanning and recognizing the song-ordering QR code 1602 on the interface 1601 with the mobile K song application, songs can be ordered from the phone, and the phone can also act as a microphone to receive the user's voice. Multiple local phones can all join the K song session by scanning the song-ordering QR code 1602, so a KTV effect can be constructed with just a TV and mobile phones.
接下来对手机从机加入K歌的操作过程进行简单说明。Next, the operation process of adding K song from the mobile phone is briefly explained.
示例性的,图17中的(a)示出了手机的一种GUI,该GUI为手机的桌面1701。此时手机端与电视端安装有同一款K歌软件。当手机检测到用户点击桌面1701上的K歌应用的图标1702的操作后,可以启动K歌应用,图17中的(b)示出K歌应用的启动界面1703,成功打开K歌应用后,显示如图17中的(c)所示的另一GUI,该GUI上可以包含首页界面1704,该首页界面1704上显示有推荐信息。该GUI上还可以包括菜单栏1705,菜单栏1705中有用于进入我的界面的控件1706。当手机检测到用户点击用于进入我的界面的控件1706的操作后,显示如图18中的(a)所示的另一GUI,该GUI可以称为我的界面1801。该GUI可以包括有控件1802。当手机检测到用户点击控件1802的操作后,显示如图18中的(b)所示的另一GUI,该GUI可以包括有用于扫描二维码的控件1803。此后,若手机检测到用户点击控件1803的操作后,则可以显示如图18中的(c)所示的用于扫描二维码的界面1804。如图18中的(c)所示的,此时用于扫描二维码的界面1804正对准电视上的点歌二维码。当二维码识别成功后,显示如图19中的(a)所示GUI1901,该GUI包含有点歌台界面1902,该GUI还包含有菜单栏1903,该菜单栏1903中包括可以用于点歌的点歌台控件1904,用于用户演唱的控制器控件1905,用于显示已点歌曲的控件1906,此时用于点歌的点歌台控件1904被点亮。该点歌台界面1902包括有断开控件1908,用户可以点击断开控件1908断开手机与电视的连接。该点歌台界面1902上还可以包含有用于点歌演唱的控件1909,还可以包含有添加歌曲到已点歌曲列表的控件1910。该点歌台界面1902还包括有搜索栏1907,该搜索栏1907可以通过用户输入歌曲名、作者名进行歌曲搜索,示例性的如图19中的(a)所示,用户选择了歌名为"歌曲123"的歌曲,此时在搜索栏1907下面会出现歌曲的各个版本,示例性的,用户点击控件1909对选择的歌曲进行演唱。此时,主机电视可以从云端或本地对歌曲进行缓存,缓存成功后,主机电视显示如图19中的(b)所示,主机电视对用户选择的歌曲"歌曲123"进行播放。Exemplarily, (a) in FIG. 17 shows a GUI of the mobile phone, namely the phone's desktop 1701. At this point the phone and the TV have the same K song software installed. When the phone detects that the user taps the icon 1702 of the K song application on the desktop 1701, the K song application can be launched; (b) in FIG. 17 shows the startup interface 1703 of the application. After the application opens successfully, another GUI shown in (c) of FIG. 17 is displayed, which may include a home page interface 1704 showing recommendation information. This GUI may also include a menu bar 1705 containing a control 1706 for entering the "Me" interface. When the phone detects that the user taps the control 1706, another GUI shown in (a) of FIG. 18 is displayed, which may be referred to as the "Me" interface 1801 and may include a control 1802. When the phone detects that the user taps the control 1802, another GUI shown in (b) of FIG. 18 is displayed, which may include a control 1803 for scanning a QR code. Thereafter, if the phone detects that the user taps the control 1803, the QR-code scanning interface 1804 shown in (c) of FIG. 18 can be displayed. As shown in (c) of FIG. 18, the scanning interface 1804 is now aimed at the song-ordering QR code on the TV. When the QR code is recognized successfully, the GUI 1901 shown in (a) of FIG. 19 is displayed. This GUI includes a karaoke station interface 1902 and a menu bar 1903; the menu bar 1903 includes a karaoke station control 1904 for ordering songs, a controller control 1905 for the user to sing, and a control 1906 for displaying the ordered songs, with the karaoke station control 1904 currently highlighted. The karaoke station interface 1902 includes a disconnect control 1908, which the user can tap to disconnect the phone from the TV. The interface 1902 may also include a control 1909 for singing an ordered song and a control 1910 for adding a song to the ordered-song list. The interface 1902 further includes a search bar 1907, through which the user can search for songs by entering a song title or artist name. Exemplarily, as shown in (a) of FIG. 19, the user selects the song titled "Song 123", and the various versions of the song appear below the search bar 1907; the user then taps the control 1909 to sing the selected song. At this point the host TV can cache the song from the cloud or locally; once caching succeeds, the host TV displays the screen shown in (b) of FIG. 19 and plays the song "Song 123" selected by the user.
此时用户可以点击手机上用于用户演唱的控制器控件,进入TV控制器界面,进行演唱。示例性的如图20中的(a)所示,当手机检测到用户点击用于用户演唱的控制器控件后,显示如图20中的(b)所示GUI,该GUI包含可以用于演唱的话筒控件2001,用于开启原唱的控件2002,用于暂停的控件2003,用于切歌的控件2004,用于重唱的控件2005,用于选择更多的控件2006,同时还包含用于发送表情弹幕的控件2007。用户可以通过点击操作或滑动操作等方式开启话筒控件2001。如图20中的(c)所示的,此时为用户进行演唱时的GUI,手机麦克风可以接收用户歌声,再通过电视端扬声器对用户演唱的歌声进行播放。At this point the user can tap the controller control on the phone to enter the TV controller interface and start singing. Exemplarily, as shown in (a) of FIG. 20, when the phone detects that the user taps the controller control for singing, the GUI shown in (b) of FIG. 20 is displayed. This GUI contains a microphone control 2001 for singing, a control 2002 for turning on the original vocal, a control 2003 for pausing, a control 2004 for skipping to the next song, a control 2005 for restarting the song, a control 2006 for more options, and a control 2007 for sending emoji barrages. The user can turn on the microphone control 2001 by tapping, sliding, or similar operations. (c) in FIG. 20 shows the GUI while the user is singing: the phone's microphone picks up the user's singing, which is then played through the TV speaker.
在另一实施例中,当手机检测到用户点击用于用户演唱的控制器控件后,进入TV控制器界面,显示如图21所示GUI,此时TV控制台界面同时显示有歌曲视频画面和歌词,方便用户使用。该GUI上包含有用于开启关闭弹幕的控件2101,当控件2101开启时,若用户点击表情弹幕,则在电视端和手机端视频画面上均会出现表情弹幕;当控件2101关闭时,若用户点击表情弹幕,则只会在电视端视频画面上出现,手机端不显示。同样的,该GUI也可以包含用于开启原唱的控件2102,用于暂停的控件2103,用于演唱的话筒控件2104,用于切歌的控件2105,用于重唱的控件2106,用于选择更多的控件2107,同时还包含用于发送表情弹幕的控件2108。In another embodiment, when the phone detects that the user taps the controller control for singing, it enters the TV controller interface and displays the GUI shown in FIG. 21; the TV controller interface now shows both the song's video picture and the lyrics for the user's convenience. This GUI contains a control 2101 for turning the barrage on and off. When the control 2101 is on, tapping an emoji barrage makes it appear on both the TV and phone video pictures; when the control 2101 is off, a tapped emoji barrage appears only on the TV picture and is not shown on the phone. Likewise, this GUI may also contain a control 2102 for turning on the original vocal, a control 2103 for pausing, a microphone control 2104 for singing, a control 2105 for skipping songs, a control 2106 for restarting the song, a control 2107 for more options, and a control 2108 for sending emoji barrages.
示例性的,如图22中的(a)所示的,当手机检测到用户点击用于发送表情弹幕的控件后,电视主机上显示出表情弹幕,如图22中的(b)所示。示例性的,表情弹幕的出现方式可以是从电视屏幕的上下左右滑出,也可以是淡入淡出,此处不加以限定。Exemplarily, as shown in Figure 22 (a), when the mobile phone detects that the user clicks on the control used to send emoticon barrage, the TV host displays the emoticon barrage, as shown in Figure 22 (b) Show. Exemplarily, the appearance mode of the emoji barrage may be sliding out from the top, bottom, left and right of the TV screen, or may be faded in and out, which is not limited here.
示例性的,如图23中的(a)所示的,当手机检测到用户点击用于选择更多的控件后,显示如图23中的(b)所示的GUI,该GUI可以包含扫码控件2301,用于分享的控件2302,用于编辑弹幕的控件2303。示例性的,当手机检测到用户点击用于分享的控件2302的操作后,显示如图23中的(c)所示的控件,可以将K歌音频分享到其他第三方应用上。Exemplarily, as shown in (a) of FIG. 23, when the phone detects that the user taps the control for more options, the GUI shown in (b) of FIG. 23 is displayed, which may include a code-scanning control 2301, a control 2302 for sharing, and a control 2303 for editing barrages. Exemplarily, when the phone detects that the user taps the sharing control 2302, the controls shown in (c) of FIG. 23 are displayed, through which the K song audio can be shared to third-party applications.
用于编辑弹幕的控件2303,可实现对表情弹幕的开启和关闭,同时支持用户编辑文字,发送文字弹幕。The control 2303 for editing barrage can realize the opening and closing of emoticon barrage, and supports users to edit text and send text barrage.
接下来,对多机K歌的异地应用的UI操作界面和操作流程进行简单阐述。示例性的,以智能电视作为主机,多个智能手机作为从机为例进行介绍。需要说明的是从机的个数可以是多个,本实施例以两个从机为例进行说明。Next, the UI operation interface and operation flow of the remote application of multi-machine K song are briefly explained. Exemplarily, a smart TV is used as the master and multiple smart phones are used as slaves as an example for introduction. It should be noted that the number of slaves can be multiple, and this embodiment takes two slaves as an example for description.
图24示出了一种多机K歌的异地应用系统,包括主机智能电视,本地话筒从机1、异地话筒从机2和异地话筒从机3,三个从机均为智能手机。此时,主机和本地从机1在同一无线局域网内,异地从机2和异地从机3与主机不在同一地点。Figure 24 shows a remote application system for multi-machine K song, including a master smart TV, local microphone slave 1, remote microphone slave 2, and remote microphone slave 3; all three slaves are smartphones. Here, the master and the local slave 1 are in the same wireless LAN, while the remote slave 2 and the remote slave 3 are not in the same location as the master.
对于本地从机,可以通过上述本地应用系统中扫描二维码或者NFC等其他短距通信方式的方法加入K歌,本实施例重点对异地从机加入K歌的操作流程进行介绍。示例性的,如图25所示的,此时扫码点歌界面,除了包含有点歌二维码,还包含有分享控件2501,通过该分享控件可以将K歌组网信息发送给异地从机,实现异地K歌。示例性的,当电视检测到用户控制遥控器选中电视桌面上的分享控件后,显示如图26所示GUI,此时GUI上显示有用户1的好友列表2601。为便于对好友列表2601的细节进行说明,将好友列表部分进行放大,显示如图27所示,从图27中的(a)可以看出,该好友列表包含有用于关闭分享的控件2701,用于搜索好友的控件2702,用于选择多个好友的多选控件2703,用户1的K歌好友列表,以及用于上下查看的控件2704。示例性的,当电视检测到用户控制遥控器选中多选控件2703后,显示如图27中的(b)所示GUI,此时在好友头像前出现用于选中的控件2705,该GUI上还包含用于完成选择的控件2706,以及用于取消多选的控件2707。示例性的,用户将K歌邀请发送给异地的用户2和用户3。For a local slave, K song can be joined through the local application system described above by scanning the QR code or through other short-range communication methods such as NFC; this embodiment focuses on the operation flow by which a remote slave joins the K song session. Exemplarily, as shown in FIG. 25, the code-scanning song-ordering interface now contains, in addition to the song-ordering QR code, a sharing control 2501 through which the K song networking information can be sent to remote slaves, enabling remote K song. Exemplarily, when the TV detects that the user selects the sharing control on the TV desktop with the remote control, the GUI shown in FIG. 26 is displayed, showing user 1's friend list 2601. To explain the details of the friend list 2601, the list is shown enlarged in FIG. 27. As can be seen from (a) in FIG. 27, the friend list contains a control 2701 for closing the sharing view, a control 2702 for searching friends, a multi-select control 2703 for selecting multiple friends, user 1's K song friend list, and a control 2704 for scrolling up and down. Exemplarily, when the TV detects that the user selects the multi-select control 2703 with the remote control, the GUI shown in (b) of FIG. 27 is displayed; a selection control 2705 now appears in front of each friend's profile picture, and the GUI also contains a control 2706 for completing the selection and a control 2707 for canceling multi-selection. Exemplarily, the user sends the K song invitation to user 2 and user 3 in other locations.
Exemplarily, after user 1 sends the karaoke invitation, users 2 and 3 receive an invitation message, displayed as shown in (a) of Figure 28. When a phone detects that the user has tapped the invitation message, it displays the GUI shown in (b) of Figure 28: the QR code used on the master for scan-to-order song selection has been sent to the remote slave device. Exemplarily, as shown in Figure 29, users 2 and 3 can join the karaoke session by long-pressing the QR code or recognizing it in another way. It should be noted that the karaoke invitation the master shares with a slave device may be a QR code or a link, and may be sent to remote users as a message within the karaoke application or as an SMS.
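Whether the invitation is rendered as a QR code, an in-app message, or an SMS link, the networking information it carries must be enough for a remote slave device to reach the master. As a hedged illustration only (the field names, the JSON/base64 encoding, and the helper names are assumptions, not disclosed by this application), such an invite payload could be packed like this:

```python
import base64
import json

def encode_invite(host, port, session_id):
    """Pack karaoke networking info into a URL-safe token suitable for
    embedding in a QR code or an invite link. Fields are illustrative."""
    payload = json.dumps(
        {"host": host, "port": port, "session": session_id},
        separators=(",", ":"),  # compact form keeps the QR code small
    )
    return base64.urlsafe_b64encode(payload.encode("utf-8")).decode("ascii")

def decode_invite(token):
    """Recover the networking info a slave device needs to join the session."""
    raw = base64.urlsafe_b64decode(token.encode("ascii"))
    return json.loads(raw.decode("utf-8"))
```

A slave that scans or taps the invite would decode the token and connect to the indicated host and port; the round trip is lossless, e.g. `decode_invite(encode_invite("192.0.2.10", 7000, "abc"))` returns the original three fields.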
As a further example, a phone may also act as the master and initiate collaborative karaoke. Exemplarily, as shown in (a) of Figure 30, when the phone detects that the user has tapped the icon of the karaoke application on the desktop, it launches the karaoke application; (b) of Figure 30 shows the application's launch screen. After the karaoke application has opened successfully and the phone detects that the user has tapped the song-ordering control, it displays the GUI shown in (c) of Figure 30, which contains a control 3001 for multi-person collaborative karaoke. When the phone detects that the user has tapped the multi-person karaoke control 3001, it displays the GUI shown in Figure 31 and enters the song-ordering interface; the menu bar at the bottom of this interface contains a scan-to-order control 3101. When the phone detects that the user has tapped the control 3101, it displays the GUI shown in (b) of Figure 31, which contains the song-ordering QR code and a sharing control. Local users can join the karaoke session by scanning the QR code; for remote users, the master can share the QR code via the sharing control.
The multi-terminal multimedia data communication method provided by the embodiments of this application can use wireless connections (such as Wi-Fi, Bluetooth, or a mobile communication system) to achieve low-latency, synchronized multipoint transmission of multimedia data streams among multiple terminal devices. It enables multiple terminals to cooperate in building new applications that provide new experiences, and it also lets devices conveniently interconnect and share multimedia content, making full use of each device's strengths. The method can be used in application scenarios such as multi-device playback, multi-device karaoke, multi-device calls, and multi-device recording. The above multimedia applications based on cooperative multi-terminal playback, such as multi-device playback, multi-device karaoke, multi-device calls, and multi-device recording, include both local and remote modes.
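The low-latency synchronized playback described above hinges on each slave compensating for the clock deviation between itself and the master; the claims below describe this as the master determining a clock offset and a start playback instant per slave. A minimal sketch of how such an offset might be estimated with an NTP-style timestamp exchange follows; the function names, the simulated slave, and the half-second lead time are illustrative assumptions, not the implementation disclosed here.

```python
import time

def estimate_clock_offset(exchange):
    """One NTP-style timestamp exchange with a slave device.

    exchange(t1) must return (t2, t3): the slave's receive and reply
    timestamps, expressed in the slave's own clock. Returns
    (offset, rtt), where offset = slave_clock - master_clock.
    """
    t1 = time.monotonic()          # master: request sent
    t2, t3 = exchange(t1)          # slave: received / replied
    t4 = time.monotonic()          # master: reply received
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    rtt = (t4 - t1) - (t3 - t2)
    return offset, rtt

def make_simulated_slave(clock_skew, one_way_delay=0.002):
    """Stand-in for a networked slave whose clock runs `clock_skew`
    seconds ahead of the master's, behind symmetric link delays."""
    def exchange(t1):
        time.sleep(one_way_delay)                 # request in transit
        t2 = time.monotonic() + clock_skew        # slave receive time
        t3 = time.monotonic() + clock_skew        # slave reply time
        time.sleep(one_way_delay)                 # reply in transit
        return t2, t3
    return exchange

def slave_start_instant(master_start, offset, lead=0.5):
    """Start instant, in the slave's clock, at which the slave should
    begin playback so that all devices start together."""
    return master_start + lead + offset
```

With symmetric link delays the midpoint formula cancels the transit time, so the estimate converges on the true skew; a real system would average several exchanges and discard high-RTT samples before scheduling each slave's start instant.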
Figure 32 is a schematic diagram of a multi-terminal multimedia data communication system provided by an embodiment of this application. Referring to Figure 32, the system includes a master electronic device 11, a first slave electronic device 12, and a second slave electronic device 13, where:
The master electronic device is configured to establish connections with the first slave electronic device and the second slave electronic device respectively;
The first slave electronic device is configured to receive a first play instruction, and the first slave electronic device is further configured to send the first play instruction to the master electronic device;
The master electronic device is further configured to play first multimedia data in response to the first play instruction and simultaneously send at least a part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously;
The first slave electronic device is further configured to receive a first human voice while the second slave electronic device is configured to receive a second human voice;
The master electronic device is further configured to receive the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, and the master electronic device is further configured to mix the first human voice, the second human voice, and the first multimedia data to generate second multimedia data and play it.
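The mixing step the master performs — combining the two received voices with the first multimedia data to generate the second multimedia data — can be pictured as additive mixing with clipping. The sketch below operates on plain lists of float samples and is an illustrative assumption, not the implementation disclosed by this application:

```python
def mix_tracks(accompaniment, voices, gains=None):
    """Additively mix vocal tracks into the accompaniment.

    All tracks are equal-length sequences of float samples in
    [-1.0, 1.0]; the sum is hard-clipped back into that range.
    """
    gains = gains if gains is not None else [1.0] * len(voices)
    mixed = []
    for i, sample in enumerate(accompaniment):
        total = sample + sum(g * v[i] for g, v in zip(gains, voices))
        mixed.append(max(-1.0, min(1.0, total)))  # hard clip
    return mixed
```

A production mixer would additionally align incoming voice frames by capture timestamp, resample to a common rate, and use a soft limiter instead of hard clipping.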
This application further provides a storage medium, including a readable storage medium and a computer program, where the computer program is used to implement the multi-terminal multimedia data communication method provided by any of the foregoing embodiments.
All or some of the steps of the foregoing method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a readable memory. When the program is executed, the steps of the foregoing method embodiments are performed; the foregoing memory (storage medium) includes: a read-only memory (ROM), a RAM, a flash memory, a hard disk, a solid-state drive, a magnetic tape, a floppy disk, an optical disc, and any combination thereof.
The foregoing content is merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (17)

  1. A multi-terminal multimedia data communication method, characterized in that it is applied to a plurality of electronic devices, the plurality of electronic devices comprising a master electronic device, a first slave electronic device, and a second slave electronic device, and the method comprises:
    the master electronic device establishes connections with the first slave electronic device and the second slave electronic device respectively;
    the first slave electronic device receives a first play instruction, and the first slave electronic device sends the first play instruction to the master electronic device;
    in response to the first play instruction, the master electronic device plays the first multimedia data and simultaneously sends at least a part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously;
    the first slave electronic device receives a first human voice while the second slave electronic device receives a second human voice;
    the master electronic device receives the first human voice sent by the first slave electronic device and receives the second human voice sent by the second slave electronic device, and the master electronic device mixes the first human voice, the second human voice, and the first multimedia data to generate second multimedia data and plays it.
  2. The method according to claim 1, characterized in that the master electronic device, the first slave electronic device, and the second slave electronic device playing the first multimedia data synchronously specifically comprises:
    the master electronic device determines a first clock offset between the master electronic device and the first slave electronic device;
    the master electronic device determines a second clock offset between the master electronic device and the second slave electronic device;
    the master electronic device determines, according to the first clock offset, a first start playback instant at which the first slave electronic device plays the first multimedia data, where the first start playback instant represents the instant at which the first slave electronic device starts playing the first multimedia data;
    the master electronic device determines, according to the second clock offset, a second start playback instant at which the second slave electronic device plays the first multimedia data, where the second start playback instant represents the instant at which the second slave electronic device starts playing the first multimedia data.
  3. The method according to claim 1 or 2, characterized in that the master electronic device establishing connections with the first slave electronic device and the second slave electronic device respectively comprises:
    the master electronic device and the first slave electronic device are in the same wireless local area network, the master electronic device displays a WiFi connection identifier, and the first slave electronic device establishes a connection with the master electronic device by recognizing the WiFi connection identifier.
  4. The method according to claim 1 or 2, characterized in that the master electronic device establishing connections with the first slave electronic device and the second slave electronic device respectively comprises:
    the master electronic device and the second slave electronic device are not in the same wireless local area network, the master electronic device sends networking information to the second slave electronic device, and the second slave electronic device establishes a connection with the master electronic device by parsing the networking information.
  5. The method according to claim 4, characterized in that, after the master electronic device mixes the first human voice, the second human voice, and the first multimedia data to generate the second multimedia data and plays it, the method further comprises:
    the master electronic device sends the second multimedia data to the second slave electronic device;
    the master electronic device and the second slave electronic device play the second multimedia data synchronously.
  6. The method according to claim 1, characterized in that the at least a part of the first multimedia data comprises audio, video, or lyrics of the first multimedia data.
  7. The method according to claim 1, characterized in that the method further comprises:
    the master electronic device receives a second play instruction input by a user;
    in response to the second play instruction, the master electronic device plays third multimedia data and simultaneously sends at least a part of the third multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the third multimedia data synchronously;
    the first slave electronic device receives a first human voice while the second slave electronic device receives a second human voice;
    the master electronic device receives the first human voice sent by the first slave electronic device and receives the second human voice sent by the second slave electronic device, and the master electronic device mixes the first human voice, the second human voice, and the third multimedia data to generate fourth multimedia data and plays it.
  8. The method according to claim 1 or 7, characterized in that the method further comprises:
    the first slave electronic device receives a first human voice, while the second slave electronic device receives a second human voice and the master electronic device receives a third human voice;
    the master electronic device receives the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, and the master electronic device mixes the first human voice, the second human voice, the third human voice, and the fourth multimedia data to generate a fifth multimedia file and plays it.
  9. A multi-terminal multimedia data communication system, characterized in that the system comprises a master electronic device, a first slave electronic device, and a second slave electronic device, wherein:
    the master electronic device is configured to establish connections with the first slave electronic device and the second slave electronic device respectively;
    the first slave electronic device is configured to receive a first play instruction, and the first slave electronic device is further configured to send the first play instruction to the master electronic device;
    the master electronic device is further configured to play the first multimedia data in response to the first play instruction and simultaneously send at least a part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously;
    the first slave electronic device is further configured to receive a first human voice while the second slave electronic device is configured to receive a second human voice;
    the master electronic device is further configured to receive the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, and the master electronic device is further configured to mix the first human voice, the second human voice, and the first multimedia data to generate second multimedia data and play it.
  10. The system according to claim 9, characterized in that the master electronic device, the first slave electronic device, and the second slave electronic device playing the first multimedia data synchronously specifically comprises:
    the master electronic device is configured to determine a first clock offset between the master electronic device and the first slave electronic device;
    the master electronic device is further configured to determine a second clock offset between the master electronic device and the second slave electronic device;
    the master electronic device is further configured to determine, according to the first clock offset, a first start playback instant at which the first slave electronic device plays the first multimedia data, where the first start playback instant represents the instant at which the first slave electronic device starts playing the first multimedia data;
    the master electronic device is further configured to determine, according to the second clock offset, a second start playback instant at which the second slave electronic device plays the first multimedia data, where the second start playback instant represents the instant at which the second slave electronic device starts playing the first multimedia data.
  11. The system according to claim 9 or 10, characterized in that the master electronic device being configured to establish connections with the first slave electronic device and the second slave electronic device respectively comprises:
    the master electronic device and the first slave electronic device are in the same wireless local area network, the master electronic device is configured to display a WiFi connection identifier, and the first slave electronic device is configured to establish a connection with the master electronic device by recognizing the WiFi connection identifier.
  12. The system according to claim 9 or 10, characterized in that the master electronic device being configured to establish connections with the first slave electronic device and the second slave electronic device respectively comprises:
    the master electronic device and the second slave electronic device are not in the same wireless local area network, the master electronic device is configured to send networking information to the second slave electronic device, and the second slave electronic device is configured to establish a connection with the master electronic device by parsing the networking information.
  13. The system according to claim 12, characterized in that, after the master electronic device mixes the first human voice, the second human voice, and the first multimedia data to generate the second multimedia data and plays it, the system further comprises:
    the master electronic device is further configured to send the second multimedia data to the second slave electronic device;
    the master electronic device and the second slave electronic device play the second multimedia data synchronously.
  14. The system according to claim 9, characterized in that the at least a part of the first multimedia data comprises audio, video, or lyrics of the first multimedia data.
  15. The system according to claim 9, characterized in that the system further comprises:
    the master electronic device is configured to receive a second play instruction input by a user;
    the master electronic device is further configured to play third multimedia data in response to the second play instruction and simultaneously send at least a part of the third multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the third multimedia data synchronously;
    the first slave electronic device is configured to receive a first human voice while the second slave electronic device is configured to receive a second human voice;
    the master electronic device is further configured to receive the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, and the master electronic device is further configured to mix the first human voice, the second human voice, and the third multimedia data to generate fourth multimedia data and play it.
  16. The system according to claim 9 or 15, characterized in that the system further comprises:
    the first slave electronic device is further configured to receive the first human voice, while the second slave electronic device is further configured to receive the second human voice and the master electronic device is further configured to receive a third human voice;
    the master electronic device is further configured to receive the first human voice sent by the first slave electronic device and the second human voice sent by the second slave electronic device, and the master electronic device is further configured to mix the first human voice, the second human voice, the third human voice, and the fourth multimedia data to generate a fifth multimedia file and play it.
  17. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises computer instructions that, when run on a computer, cause the computer to execute the multi-terminal multimedia data communication method according to any one of claims 1 to 8.
PCT/CN2020/096679 2019-06-19 2020-06-18 Multi-terminal multimedia data communication method and system WO2020253754A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021566505A JP7416519B2 (en) 2019-06-19 2020-06-18 Multi-terminal multimedia data communication method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910533498.0 2019-06-19
CN201910533498.0A CN112118062B (en) 2019-06-19 2019-06-19 Multi-terminal multimedia data communication method and system

Publications (1)

Publication Number Publication Date
WO2020253754A1 true WO2020253754A1 (en) 2020-12-24

Family

ID=73795677

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096679 WO2020253754A1 (en) 2019-06-19 2020-06-18 Multi-terminal multimedia data communication method and system

Country Status (3)

Country Link
JP (1) JP7416519B2 (en)
CN (1) CN112118062B (en)
WO (1) WO2020253754A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11250825B2 (en) * 2018-05-21 2022-02-15 Smule, Inc. Audiovisual collaboration system and method with seed/join mechanic
US11120782B1 (en) * 2020-04-20 2021-09-14 Mixed In Key Llc System, method, and non-transitory computer-readable storage medium for collaborating on a musical composition over a communication network
CN112927666B (en) * 2021-01-26 2023-11-28 北京达佳互联信息技术有限公司 Audio processing method, device, electronic equipment and storage medium
CN117896469B (en) * 2024-03-15 2024-05-31 腾讯科技(深圳)有限公司 Audio sharing method, device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005062290A2 (en) * 2003-12-22 2005-07-07 Patrick Cichostepski Method of broadcasting songs and system for the practice of karaoke remotely, in particular by telephone
WO2014147875A1 (en) * 2013-03-22 2014-09-25 ヤマハ株式会社 Audio data processing device and audio data communications system
CN105808710A (en) * 2016-03-05 2016-07-27 上海斐讯数据通信技术有限公司 Remote karaoke terminal, remote karaoke system and remote karaoke method
CN107396137A (en) * 2017-07-14 2017-11-24 腾讯音乐娱乐(深圳)有限公司 The method, apparatus and system of online interaction
CN107665703A (en) * 2017-09-11 2018-02-06 上海与德科技有限公司 The audio synthetic method and system and remote server of a kind of multi-user

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4219234B2 (en) 2003-08-28 2009-02-04 独立行政法人産業技術総合研究所 Karaoke system, music performance communication device, and performance synchronization method
JP2013231951A (en) 2012-04-06 2013-11-14 Yamaha Corp Acoustic data processing device and acoustic data communication system
CN103198848B (en) * 2013-01-31 2016-02-10 广东欧珀移动通信有限公司 Synchronous broadcast method and system
CN106057222B (en) * 2016-05-20 2020-10-27 联想(北京)有限公司 Multimedia file playing method and electronic equipment
CN113504851A (en) * 2018-11-14 2021-10-15 华为技术有限公司 Method for playing multimedia data and electronic equipment
CN109819306B (en) * 2018-12-29 2022-11-04 花瓣云科技有限公司 Media file clipping method, electronic device and server

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117041302A (en) * 2023-10-08 2023-11-10 深圳墨影科技有限公司 Operation system and method for multi-equipment collaborative planning
CN117041302B (en) * 2023-10-08 2024-01-30 深圳墨影科技有限公司 Operation system and method for multi-equipment collaborative planning
CN117440060A (en) * 2023-12-21 2024-01-23 荣耀终端有限公司 Communication conversion device, electronic equipment, system and method
CN117440060B (en) * 2023-12-21 2024-05-17 荣耀终端有限公司 Communication conversion device, electronic equipment, system and method

Also Published As

Publication number Publication date
JP7416519B2 (en) 2024-01-17
CN112118062B (en) 2022-12-30
CN112118062A (en) 2020-12-22
JP2022537012A (en) 2022-08-23

Similar Documents

Publication Publication Date Title
CN111345010B (en) Multimedia content synchronization method, electronic equipment and storage medium
CN110381197B (en) Method, device and system for processing audio data in many-to-one screen projection
WO2020238871A1 (en) Screen projection method and system and related apparatus
WO2020253754A1 (en) Multi-terminal multimedia data communication method and system
CN111316598B (en) Multi-screen interaction method and equipment
WO2021164445A1 (en) Notification processing method, electronic apparatus, and system
CN113497909B (en) Equipment interaction method and electronic equipment
CN114040242B (en) Screen projection method, electronic equipment and storage medium
WO2021233079A1 (en) Cross-device content projection method, and electronic device
WO2020224447A1 (en) Method and system for adding smart home device to contacts
CN113496426A (en) Service recommendation method, electronic device and system
CN113961157B (en) Display interaction system, display method and equipment
CN114115770B (en) Display control method and related device
WO2021233161A1 (en) Family schedule fusion method and apparatus
CN114827581A (en) Synchronization delay measuring method, content synchronization method, terminal device, and storage medium
US20240056676A1 (en) Video Recording Method and Electronic Device
CN114185503A (en) Multi-screen interaction system, method, device and medium
WO2022127670A1 (en) Call method and system, and related device
CN115016697A (en) Screen projection method, computer device, readable storage medium, and program product
CN114356195B (en) File transmission method and related equipment
CN115242994A (en) Video call system, method and device
CN114079691A (en) Equipment identification method and related device
WO2022252980A1 (en) Method for screen sharing, related electronic device, and system
WO2023093778A1 (en) Screenshot capture method and related apparatus
CN114071055B (en) Method for rapidly joining conference and related equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20825430

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021566505

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20825430

Country of ref document: EP

Kind code of ref document: A1