CN112118062B - Multi-terminal multimedia data communication method and system - Google Patents


Info

Publication number
CN112118062B
Authority
CN
China
Prior art keywords
electronic device
slave
multimedia data
master
slave electronic
Prior art date
Legal status
Active
Application number
CN201910533498.0A
Other languages
Chinese (zh)
Other versions
CN112118062A (en)
Inventor
杨枭
黎椿键
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN201910533498.0A (CN112118062B)
Priority to JP2021566505A (JP7416519B2)
Priority to PCT/CN2020/096679 (WO2020253754A1)
Publication of CN112118062A
Application granted
Publication of CN112118062B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 Details of electrophonic musical instruments
    • G10H 1/36 Accompaniment arrangements
    • G10H 1/361 Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 Noise filtering
    • G10L 2021/02082 Noise filtering the noise being echo, reverberation of the speech
    • H04J MULTIPLEX COMMUNICATION
    • H04J 3/00 Time-division multiplex systems
    • H04J 3/02 Details
    • H04J 3/06 Synchronising arrangements
    • H04J 3/0635 Clock or time synchronisation in a network
    • H04J 3/0638 Clock or time synchronisation among nodes; Internode synchronisation

Abstract

An embodiment of this application provides a multi-terminal multimedia data communication method. It relates to the field of multimedia technologies and enables multimedia data transmission among multiple terminals over wireless connections. The specific scheme is as follows: the master electronic device establishes a connection with each of the first slave electronic device and the second slave electronic device. The first slave electronic device receives a playing instruction and sends it to the master electronic device. In response to the playing instruction, the master electronic device plays first multimedia data and simultaneously sends at least a part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously. The first slave electronic device receives a first human voice and sends it to the master electronic device, while the second slave electronic device receives a second human voice and sends it to the master electronic device. The master electronic device mixes the first human voice, the second human voice, and the first multimedia data to generate and play second multimedia data.

Description

Multi-terminal multimedia data communication method and system
Technical Field
The embodiment of the application relates to the technical field of multimedia, in particular to a multi-terminal multimedia data communication method and system.
Background
With the popularization and increasing intelligence of mobile audio terminals such as mobile phones, tablet computers, personal computers (PCs), and wireless speakers, multiple smart devices are commonly present in a home and connected to the same local area network in a wired or wireless manner. Cross-device interaction and collaborative entertainment have therefore become important directions for enhancing the user's entertainment experience.
In current multi-terminal cooperative multimedia applications with uplink and downlink audio, taking multi-device karaoke as an example, a plurality of electronic devices (such as mobile phones) are generally involved, one acting as the host and the others as slaves. After the room owner creates a karaoke room through the host and starts the KTV function, other slave devices can join the room. The room owner can select songs online; like an ordinary MV in a KTV room, each song includes video, subtitles, and accompaniment. An audience member can request the microphone from a slave device; once the room owner approves the request online, that member can begin singing. When the song that an audience member ordered comes up, that member becomes the singer. Audience members can adjust the accompaniment and vocal volume from their slave devices, but song-control permissions (pausing and skipping songs) remain with the host. In this process the microphone is passed in turn among different audience members, so there is always only a single singer, which cannot satisfy the real-life scenario of multiple people singing karaoke simultaneously in a KTV room.
Disclosure of Invention
An embodiment of this application provides a multi-terminal multimedia data communication method and system, which allow multiple terminals to cooperate to build new applications and provide new experiences, and also allow devices to interconnect conveniently and share multimedia content, thereby making full use of the advantageous functions of each device. In addition, this helps build an ecosystem of applications based on multi-device cooperation.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In a first aspect, an embodiment of this application provides a multi-terminal multimedia data communication method applied to a plurality of electronic devices, where the plurality of electronic devices include a master electronic device, a first slave electronic device, and a second slave electronic device. The method includes: the master electronic device establishes a connection with each of the first slave electronic device and the second slave electronic device. The first slave electronic device receives a first playing instruction and sends it to the master electronic device. In response to the first playing instruction, the master electronic device plays first multimedia data and simultaneously sends at least a part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously. The first slave electronic device receives a first voice while the second slave electronic device receives a second voice. The master electronic device receives the first voice sent by the first slave electronic device and the second voice sent by the second slave electronic device, and mixes the first voice, the second voice, and the first multimedia data to generate and play second multimedia data.
In this scheme, the master electronic device can connect to a plurality of slave electronic devices, enabling multimedia data interaction between the master electronic device and the slave electronic devices. A user can send a first playing instruction to the master electronic device through the first slave electronic device. In response, the master electronic device plays the first multimedia data. At the same time, the master electronic device can transmit at least a part of the first multimedia data to the slave electronic devices and use a synchronization algorithm to monitor, in real time, the data synchronization between the master electronic device and the slave electronic devices, so that the master and slave electronic devices play the first multimedia data synchronously. The slave electronic devices can then capture their respective users' voices simultaneously; for example, the first slave electronic device receives the first voice while the second slave electronic device receives the second voice. Each slave electronic device sends the received voice to the master electronic device, which mixes the voices with the first multimedia data, performs anti-howling and mixing processing, and generates and plays the second multimedia data.
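The mixing step is not specified in detail in the disclosure. A minimal sketch, assuming 16-bit PCM sample streams and simple per-source gains (the function and parameter names are illustrative, and anti-howling processing is omitted):

```python
# Hypothetical mixing sketch: the master device sums the two received vocal
# streams with the accompaniment, applies a gain to each source, and clamps
# the result to the 16-bit PCM range. Frame alignment, resampling, codecs,
# and echo/howling suppression are all omitted.

PCM_MIN, PCM_MAX = -32768, 32767

def mix_frames(vocal1, vocal2, accompaniment,
               vocal_gain=1.0, accomp_gain=0.6):
    """Mix three equal-length lists of 16-bit PCM samples into one."""
    mixed = []
    for v1, v2, acc in zip(vocal1, vocal2, accompaniment):
        s = vocal_gain * v1 + vocal_gain * v2 + accomp_gain * acc
        mixed.append(max(PCM_MIN, min(PCM_MAX, int(s))))  # clamp to int16
    return mixed
```

In practice the accompaniment gain would track the volume adjustments the users make on their devices, and the sum would be fed through the anti-howling stage before playback.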
With reference to the first aspect, in a first embodiment of the first aspect, the synchronously playing, by the master electronic device, the first slave electronic device, and the second slave electronic device, the first multimedia data specifically includes: the master electronic device determining a first clock offset between the master electronic device and the first slave electronic device; the master electronic device determining a second clock offset between the master electronic device and the second slave electronic device; the master electronic device determining, according to the first clock offset, a first start-play time at which the first slave electronic device plays the first multimedia data, where the first start-play time represents the time at which the first slave electronic device starts playing the first multimedia data; and the master electronic device determining, according to the second clock offset, a second start-play time at which the second slave electronic device plays the first multimedia data, where the second start-play time represents the time at which the second slave electronic device starts playing the first multimedia data.
In this scheme, to enable the plurality of terminal devices to play the first multimedia data synchronously, the master electronic device determines a first clock offset between itself and the first slave electronic device and uses it to determine the first start-play time, which represents the time at which the first slave electronic device starts playing the first multimedia data. Likewise, the master electronic device determines the second start-play time for the second slave electronic device from a second clock offset between itself and the second slave electronic device. The master electronic device can therefore adjust the start-play times of the first and second slave electronic devices based on its own start time for playing the first multimedia data together with the first and second clock offsets, so that all three devices play the first multimedia data synchronously.
It should be noted that the clock skew may also be determined by the slave electronic device. At this time, the master electronic device sends the starting time of playing the first multimedia data to the first slave electronic device and the second slave electronic device. The first slave electronic device determines a first clock offset between the master electronic device and the first slave electronic device, and determines a start time of the first slave electronic device for playing the first multimedia data according to the start time of the master electronic device for playing the first multimedia data and the first clock offset. Similarly, the second slave electronic device determines a second clock offset between the master electronic device and the second slave electronic device, and determines a start time of the second slave electronic device for playing the first multimedia data according to the start time of the master electronic device for playing the first multimedia data and the second clock offset.
In yet another possible case, the master electronic device determines the clock offsets and sends its start-play time together with each offset to the corresponding slave electronic device, which then determines its own start-play time. The master electronic device determines a first clock offset between the master electronic device and the first slave electronic device, and a second clock offset between the master electronic device and the second slave electronic device. The master electronic device sends its start time for playing the first multimedia data and the first clock offset to the first slave electronic device, which determines its own start time for playing the first multimedia data from these two values; similarly, the master electronic device sends its start time and the second clock offset to the second slave electronic device, which determines its own start time for playing the first multimedia data.
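The disclosure does not name the algorithm used to measure the clock offset. A common choice consistent with the description is an NTP-style four-timestamp exchange; a sketch under that assumption (the timestamp names t1..t4 and function names are illustrative):

```python
# Hedged sketch of the clock-offset step, assuming an NTP-style exchange:
# t1 = master send time, t2 = slave receive time,
# t3 = slave reply time,  t4 = master receive time.
# The estimate assumes symmetric network delay in both directions.

def estimate_offset(t1, t2, t3, t4):
    """Estimated slave clock minus master clock."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

def slave_start_time(master_start, offset):
    """Translate the master's start-play timestamp into the slave's clock."""
    return master_start + offset
```

Whichever side computes the offset (master, per the first embodiment, or slave, per the alternatives above), the arithmetic is the same: the slave's start-play time is the master's start-play time shifted by the measured offset.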
With reference to the first aspect or the first embodiment of the first aspect, in a second embodiment of the first aspect, the establishing, by the master electronic device, connections with the first slave electronic device and the second slave electronic device respectively includes: the master electronic device and the first slave electronic device are located in the same wireless local area network, the master electronic device displays a WiFi connection identifier, and the first slave electronic device establishes connection with the master electronic device by recognizing the WiFi connection identifier.
In this scheme, the master electronic device and the first slave electronic device are in the same wireless local area network, that is, they are at the same location, and they establish a connection through WiFi. The master electronic device displays a WiFi connection identifier that carries WiFi networking information, including the WiFi name, the WiFi password, and the address and port number of the master electronic device. The first slave electronic device can establish a connection with the master electronic device by recognizing the WiFi connection identifier.
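The encoding of the WiFi connection identifier (for example, a QR code payload) is not given in the disclosure. A sketch assuming a JSON payload whose field names are invented here for illustration, carrying the four items the text lists:

```python
import json

# Hypothetical payload format for the WiFi connection identifier. The field
# names ("ssid", "password", "host", "port") are assumptions; the text only
# says the identifier carries the WiFi name, WiFi password, and the master
# device's address and port number.
def parse_wifi_identifier(payload: str) -> dict:
    info = json.loads(payload)
    required = {"ssid", "password", "host", "port"}
    missing = required - info.keys()
    if missing:
        raise ValueError(f"identifier missing fields: {sorted(missing)}")
    return info

# Example payload as the master device might encode it:
payload = json.dumps({"ssid": "HomeWifi", "password": "secret",
                      "host": "192.168.1.10", "port": 5000})
```

After scanning and parsing the identifier, the slave device would join the named WiFi network and then connect to the master at the given address and port.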
With reference to the first aspect or the first embodiment of the first aspect, in a third embodiment of the first aspect, the establishing, by the master electronic device, connections with the first slave electronic device and the second slave electronic device respectively includes: the master electronic device and the second slave electronic device are not in the same wireless local area network, the master electronic device sends networking information to the second slave electronic device, and the second slave electronic device establishes connection with the master electronic device by analyzing the networking information.
In this scheme, the master electronic device and the second slave electronic device may be at different geographic locations and thus not in the same local area network. The master electronic device can send networking information to the second slave electronic device, which establishes a connection with the master electronic device by parsing the networking information. The networking information may include the IP address and port number of the master electronic device. The master electronic device and the second slave electronic device may establish a connection through WiFi and a cellular data network.
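The transport is not specified beyond "IP address and port number"; assuming a plain TCP connection, the slave's side of this path can be sketched as follows (a loopback listener stands in for the remote master device):

```python
import socket
import threading

# Sketch of the second connection path: the slave receives the master's
# networking information (IP address and port) and opens a TCP connection.
# A loopback listener stands in for the master device here; the greeting
# message is purely illustrative.

def serve_once(server_sock, greeting=b"hello-from-master"):
    conn, _ = server_sock.accept()
    conn.sendall(greeting)
    conn.close()

def connect_to_master(networking_info):
    """Connect to the address/port carried in the networking information."""
    sock = socket.create_connection(
        (networking_info["ip"], networking_info["port"]), timeout=5)
    data = sock.recv(64)
    sock.close()
    return data

server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port stands in for the master's port
server.listen(1)
info = {"ip": "127.0.0.1", "port": server.getsockname()[1]}
threading.Thread(target=serve_once, args=(server,)).start()
received = connect_to_master(info)
server.close()
```

In the cross-network case the master would of course need a publicly reachable address (or a relay), a detail the disclosure leaves open.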
With reference to the third embodiment of the first aspect, in a fourth embodiment of the first aspect, after the master electronic device mixes the first voice, the second voice, and the first multimedia data, generates the second multimedia data, and plays the second multimedia data, the method further includes: the master electronic device sends the second multimedia data to the second slave electronic device; and the master electronic device and the second slave electronic device play the second multimedia data synchronously.
In this embodiment, since the second slave electronic device and the master electronic device are not located at the same location, after the master electronic device receives the first voice transmitted from the first slave electronic device and receives the second voice transmitted from the second slave electronic device, the first voice, the second voice, and the first multimedia data are mixed. After the second multimedia data is generated, in order to share the second multimedia data with a second slave electronic device that is not at the same location, the master electronic device transmits the second multimedia data to the second slave electronic device so that the second slave electronic device can play the second multimedia data. In this process, the master electronic device needs to monitor synchronization problems between the master electronic device and the second slave electronic device in real time, so that the master electronic device and the second slave electronic device play the second multimedia data synchronously.
With reference to the first aspect, in a fifth embodiment of the first aspect, the at least a portion of the first multimedia data comprises audio, video, or lyrics of the first multimedia data.
In this scheme, the at least a part of the first multimedia data may be any non-empty combination of the audio, video, and lyrics of the first multimedia data: audio alone, video alone, lyrics alone, audio and video, audio and lyrics, video and lyrics, or audio, video, and lyrics together.
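The seven combinations enumerated above are exactly the non-empty subsets of {audio, video, lyrics}, which can be checked mechanically:

```python
from itertools import combinations

# Enumerate every non-empty subset of the three component types that the
# master device may send to a slave device.
parts = ["audio", "video", "lyrics"]
subsets = [set(c) for r in range(1, len(parts) + 1)
           for c in combinations(parts, r)]
```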
With reference to the first aspect, in a sixth embodiment of the first aspect, the method further includes: the master electronic device receives a second playing instruction input by a user. In response to the second playing instruction, the master electronic device plays third multimedia data and simultaneously sends at least a part of the third multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the third multimedia data synchronously. The first slave electronic device receives the first voice while the second slave electronic device receives the second voice. The master electronic device receives the first voice sent by the first slave electronic device and the second voice sent by the second slave electronic device, and mixes the first voice, the second voice, and the third multimedia data to generate and play fourth multimedia data.
In this scheme, the user can issue a playing instruction directly to the master electronic device, which plays the third multimedia data in response. The third multimedia data may be the same as or different from the first multimedia data.
With reference to the first aspect or the sixth embodiment of the first aspect, in a seventh embodiment of the first aspect, the method further comprises: the first slave electronic device receives the first voice, the second slave electronic device receives the second voice, and the master electronic device receives the third voice. The master electronic device receives a first voice sent by the first slave electronic device and a second voice sent by the second slave electronic device, and the master electronic device mixes the first voice, the second voice, the third voice and the fourth multimedia data to generate and play a fifth multimedia file.
In this scheme, the master electronic device may also receive its user's voice. In that case the first slave electronic device, the second slave electronic device, and the master electronic device can receive their respective users' voices at the same time; for example, the first slave electronic device receives the first voice, the second slave electronic device receives the second voice, and the master electronic device receives the third voice. The first slave electronic device sends the received first voice to the master electronic device, and the second slave electronic device sends the received second voice to the master electronic device. The master electronic device mixes the first voice, the second voice, the third voice, and the fourth multimedia data to generate and play a fifth multimedia file.
In a second aspect, an embodiment of this application provides a multi-terminal multimedia data communication system including a master electronic device, a first slave electronic device, and a second slave electronic device. The master electronic device is configured to establish a connection with each of the first slave electronic device and the second slave electronic device. The first slave electronic device is configured to receive a first playing instruction and to send it to the master electronic device. The master electronic device is further configured to play first multimedia data in response to the first playing instruction and simultaneously send at least a part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the first multimedia data synchronously. The first slave electronic device is further configured to receive a first voice while the second slave electronic device is configured to receive a second voice. The master electronic device is configured to receive the first voice sent by the first slave electronic device and the second voice sent by the second slave electronic device, and to mix the first voice, the second voice, and the first multimedia data to generate and play second multimedia data.
In this scheme, the master electronic device can connect to a plurality of slave electronic devices, enabling multimedia data interaction between the master electronic device and the slave electronic devices. A user can send a first playing instruction to the master electronic device through the first slave electronic device. In response, the master electronic device plays the first multimedia data. At the same time, the master electronic device can transmit at least a part of the first multimedia data to the slave electronic devices and use a synchronization algorithm to monitor, in real time, the data synchronization between the master electronic device and the slave electronic devices, so that the master and slave electronic devices play the first multimedia data synchronously. The slave electronic devices can then capture their respective users' voices simultaneously; for example, the first slave electronic device receives the first voice while the second slave electronic device receives the second voice. Each slave electronic device sends the received voice to the master electronic device, which mixes the voices with the first multimedia data, performs anti-howling and mixing processing, and generates and plays the second multimedia data.
With reference to the second aspect, in a first embodiment of the second aspect, the synchronously playing the first multimedia data by the master electronic device, the first slave electronic device, and the second slave electronic device specifically includes: the master electronic device is used for determining a first clock deviation between the master electronic device and the first slave electronic device; the master electronic device is further configured to determine a second clock offset between the master electronic device and a second slave electronic device; the master electronic device is further used for determining a first starting playing time when the first slave electronic device plays the first multimedia data according to the first clock deviation; the first starting playing time is used for representing the starting time of the first slave electronic equipment for playing the first multimedia data; the master electronic device is further used for determining a second starting playing time of the second slave electronic device for playing the first multimedia data according to the second clock deviation; the second starting playing time is used for representing the starting time of the second slave electronic equipment for playing the first multimedia data.
In this scheme, to enable the plurality of terminal devices to play the first multimedia data synchronously, the master electronic device determines a first clock offset between itself and the first slave electronic device and uses it to determine the first start-play time, which represents the time at which the first slave electronic device starts playing the first multimedia data. Likewise, the master electronic device determines the second start-play time for the second slave electronic device from a second clock offset between itself and the second slave electronic device. The master electronic device can therefore adjust the start-play times of the first and second slave electronic devices based on its own start time for playing the first multimedia data together with the first and second clock offsets, so that all three devices play the first multimedia data synchronously.
It should be noted that the clock skew may also be determined by the slave electronic device. At this time, the master electronic device sends the starting time of playing the first multimedia data to the first slave electronic device and the second slave electronic device. The first slave electronic device determines a first clock offset between the master electronic device and the first slave electronic device, and determines a start time of the first slave electronic device for playing the first multimedia data according to the start time of the master electronic device for playing the first multimedia data and the first clock offset. Similarly, the second slave electronic device determines a second clock offset between the master electronic device and the second slave electronic device, and determines a start time of the second slave electronic device for playing the first multimedia data according to the start time of the master electronic device for playing the first multimedia data and the second clock offset.
In yet another possible case, the master electronic device determines the clock offsets and sends its start-play time together with each offset to the corresponding slave electronic device, which then determines its own start-play time. The master electronic device determines a first clock offset between the master electronic device and the first slave electronic device, and a second clock offset between the master electronic device and the second slave electronic device. The master electronic device sends its start time for playing the first multimedia data and the first clock offset to the first slave electronic device, which determines its own start time for playing the first multimedia data from these two values; similarly, the master electronic device sends its start time and the second clock offset to the second slave electronic device, which determines its own start time for playing the first multimedia data.
With reference to the second aspect or the first embodiment of the second aspect, in a second embodiment of the second aspect, the establishing, by the master electronic device, connections with the first slave electronic device and the second slave electronic device respectively includes: the master electronic device and the first slave electronic device are located in the same wireless local area network, the master electronic device is configured to display a WiFi connection identifier, and the first slave electronic device is configured to establish a connection with the master electronic device by recognizing the WiFi connection identifier.
In this scheme, the master electronic device and the first slave electronic device are in the same wireless local area network, that is, they are at the same location, and they establish a connection through WiFi. The master electronic device displays a WiFi connection identifier that carries the WiFi networking information, including: the WiFi name, the WiFi password, and the address and port number of the master electronic device. The first slave electronic device may establish a connection with the master electronic device by recognizing the WiFi connection identifier.
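The embodiment does not specify how the WiFi connection identifier encodes the networking information. As an illustration only, the four fields it is said to carry could be serialized as a JSON payload and rendered, for example, as a QR code for the first slave electronic device to scan (the function names and JSON format are assumptions):

```python
import json

def build_connection_identifier(ssid, password, master_ip, master_port):
    """Serialize the networking information the identifier carries:
    WiFi name, WiFi password, and the master device's address and
    port number. The resulting payload could be shown as a QR code."""
    return json.dumps({
        "ssid": ssid,
        "password": password,
        "ip": master_ip,
        "port": master_port,
    })

def parse_connection_identifier(payload):
    """Recover the networking information on the slave side."""
    info = json.loads(payload)
    return info["ssid"], info["password"], info["ip"], info["port"]
```

The slave would join the WLAN with the SSID and password, then contact the master at the given address and port.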
With reference to the second aspect or the first embodiment of the second aspect, in a third embodiment of the second aspect, the establishing, by the master electronic device, connections with the first slave electronic device and the second slave electronic device respectively includes: the master electronic device and the second slave electronic device are not in the same wireless local area network, the master electronic device is used for sending networking information to the second slave electronic device, and the second slave electronic device is used for establishing connection with the master electronic device by analyzing the networking information.
In this scheme, the master electronic device and the second slave electronic device may be in different geographical locations, in which case they are not in the same local area network. The master electronic device may send networking information to the second slave electronic device, and the second slave electronic device establishes a connection with the master electronic device by parsing the networking information. The networking information may include the IP address and port number of the master electronic device. The master electronic device and the second slave electronic device may establish the connection over WiFi and a data communication network.
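A minimal sketch of this connection step, assuming a plain TCP transport (the embodiment only states that the devices connect over WiFi and a data communication network, so the socket API and function name here are illustrative):

```python
import socket

def connect_to_master(ip, port, timeout=5.0):
    """Open a TCP connection to the master electronic device using the
    networking information (IP address and port number) it sent.
    Returns a connected socket."""
    return socket.create_connection((ip, port), timeout=timeout)
```

The second slave electronic device would parse the received networking information, call this with the master's address, and then exchange multimedia data over the returned socket.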
With reference to the third embodiment of the second aspect, in a fourth embodiment of the second aspect, after the master electronic device mixes the first voice, the second voice, and the first multimedia data, generates the second multimedia data, and plays the second multimedia data, the system further includes: the master electronic device is further configured to send the second multimedia data to the second slave electronic device; and the master electronic device and the second slave electronic device play the second multimedia data synchronously.
In this embodiment, since the second slave electronic device and the master electronic device are not at the same location, after the master electronic device receives the first voice sent by the first slave electronic device and the second voice sent by the second slave electronic device, it mixes the first voice, the second voice, and the first multimedia data. After generating the second multimedia data, in order to share it with the second slave electronic device at the other location, the master electronic device sends the second multimedia data to the second slave electronic device so that the second slave electronic device can play it. In this process, the master electronic device needs to monitor synchronization between itself and the second slave electronic device in real time, so that the two devices play the second multimedia data synchronously.
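The real-time synchronization monitoring mentioned above is not detailed in the embodiment. One simple policy, sketched here under the assumption that master and slave periodically exchange playback positions (the threshold value and function name are illustrative), is to correct only when the drift exceeds a tolerance:

```python
def playback_adjustment(master_position_ms, slave_position_ms, threshold_ms=40):
    """Return the seek correction (in ms) a slave should apply to stay
    in sync with the master. Within the threshold, playback is left
    untouched to avoid audible glitches; beyond it, the slave realigns."""
    drift = slave_position_ms - master_position_ms
    if abs(drift) <= threshold_ms:
        return 0        # small drift: no correction
    return -drift       # large drift: seek back (or forward) to realign

# 10 ms of drift is tolerated; 100 ms triggers a seek of -100 ms.
```

Real systems typically also account for clock offset and network delay when comparing positions, as in the offset-estimation step described earlier in this document.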
With reference to the second aspect, in a fifth embodiment of the second aspect, the at least a portion of the first multimedia data comprises the audio, the video, or the lyrics of the first multimedia data.
In this scheme, the at least a portion of the first multimedia data may be any non-empty combination of the audio, the video, and the lyrics, that is: the audio alone; the video alone; the lyrics alone; the audio and the video; the audio and the lyrics; the video and the lyrics; or the audio, the video, and the lyrics together.
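The seven cases enumerated above are exactly the non-empty combinations of the three streams, which can be checked mechanically (the stream names and function name below are illustrative):

```python
from itertools import combinations

# The three constituent streams of the first multimedia data.
STREAMS = ("audio", "video", "lyrics")

def portion_options():
    """Enumerate every non-empty combination of the streams; these are
    the seven cases listed in the scheme."""
    options = []
    for r in range(1, len(STREAMS) + 1):
        options.extend(combinations(STREAMS, r))
    return options
```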
With reference to the second aspect, in a sixth embodiment of the second aspect, the system further comprises: the master electronic device is configured to receive a second playing instruction input by the user. The master electronic device is further configured to, in response to the second playing instruction, play the third multimedia data and simultaneously send at least a part of the third multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device, and the second slave electronic device play the third multimedia data synchronously. The first slave electronic device is configured to receive the first voice, and the second slave electronic device is configured to receive the second voice. The master electronic device is further configured to receive the first voice sent by the first slave electronic device and the second voice sent by the second slave electronic device, and to mix the first voice, the second voice, and the third multimedia data to generate fourth multimedia data and play the fourth multimedia data.
In this scheme, the user can directly issue a play instruction to the master electronic device, and the master electronic device plays the third multimedia data in response to the play instruction. The third multimedia data may be the same as or different from the first multimedia data.
With reference to the second aspect or the sixth embodiment of the second aspect, in a seventh embodiment of the second aspect, the system further comprises: the first slave electronic device is further configured to receive the first voice, the second slave electronic device is further configured to receive the second voice, and the master electronic device is further configured to receive the third voice. The master electronic device is further configured to receive the first voice sent by the first slave electronic device and the second voice sent by the second slave electronic device, and to mix the first voice, the second voice, the third voice, and the fourth multimedia data to generate a fifth multimedia file and play the fifth multimedia file.
In this scheme, the master electronic device may also receive the voice of its user. In this case, the first slave electronic device, the second slave electronic device, and the master electronic device may each receive the voice of their respective users at the same time: the first slave electronic device receives the first voice, the second slave electronic device receives the second voice, and the master electronic device receives the third voice. The first slave electronic device sends the received first voice to the master electronic device, and the second slave electronic device sends the received second voice to the master electronic device. The master electronic device mixes the first voice, the second voice, the third voice, and the fourth multimedia data to generate a fifth multimedia file and plays the fifth multimedia file.
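The mixing operation itself is not specified by the embodiment. A minimal additive-mixing sketch over 16-bit PCM sample lists (an assumed representation; real mixers typically also apply per-track gain and resampling) would be:

```python
def mix_tracks(tracks):
    """Additively mix several tracks of 16-bit PCM samples (plain lists
    of ints) and clip each mixed sample to the signed 16-bit range.
    Tracks shorter than the longest one simply stop contributing."""
    length = max(len(t) for t in tracks)
    mixed = []
    for i in range(length):
        sample = sum(t[i] for t in tracks if i < len(t))
        mixed.append(max(-32768, min(32767, sample)))
    return mixed

# First voice, second voice, third voice, accompaniment:
fifth = mix_tracks([[100, 200], [50, -50], [10, 10], [1000, 1000]])  # [1160, 1160]
```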
In another aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium includes computer instructions, and when the computer instructions are executed on a computer, the computer is caused to execute a multi-terminal multimedia data communication method in any possible implementation of the foregoing aspects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a same-location application of multi-terminal multimedia data communication according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a different-location application of multi-terminal multimedia data communication according to an embodiment of the present application;
FIG. 5 is a hardware and software system architecture of a master of a multi-terminal multimedia data communication system according to an embodiment of the present application;
FIG. 6 is a hardware and software system architecture of a slave of a multi-terminal multimedia data communication system according to an embodiment of the present application;
FIG. 7 is a flowchart of data interaction between a master and a slave according to an embodiment of the present application;
FIG. 8 is a flowchart of another data interaction between a master and a slave according to an embodiment of the present application;
FIG. 9 is a flowchart of another data interaction between a master and a slave according to an embodiment of the present application;
FIG. 10 is a flowchart of another set of data interactions between a master and a slave according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a same-location application of multi-person karaoke according to an embodiment of the present application;
FIG. 12A is a schematic view of a display interface provided by an embodiment of the present application;
FIG. 12B is a schematic view of another display interface provided by an embodiment of the present application;
FIG. 13 is a schematic view of another display interface provided by an embodiment of the present application;
FIG. 14 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 15 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 16 is a schematic view of another display interface provided by an embodiment of the present application;
FIG. 17 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 18 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 19 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 20 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 21 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 22 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 23 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 24 is a schematic diagram of a different-location application of multi-person karaoke according to an embodiment of the present application;
FIG. 25 is a schematic view of a display interface provided by an embodiment of the present application;
FIG. 26 is a schematic view of another display interface provided by an embodiment of the present application;
FIG. 27 is a schematic view of another display interface provided by an embodiment of the present application;
FIG. 28 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 29 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 30 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 31 is a schematic view of another set of display interfaces provided by an embodiment of the present application;
FIG. 32 is a diagram of a multi-terminal multimedia data communication system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Fig. 1 shows a schematic structural diagram of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, so as to implement a function of answering a call through a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit the audio signal to the wireless communication module 160 through the PCM interface, so as to implement the function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
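As a generic illustration of the sampling/quantizing/encoding role described for the PCM interface above (not its actual wire protocol; the function name and normalized-sample convention are assumptions), analog samples normalized to [-1.0, 1.0] can be quantized to 16-bit PCM codes like this:

```python
def quantize_to_pcm16(samples):
    """Quantize normalized analog samples in [-1.0, 1.0] to signed
    16-bit PCM codes: clamp to full scale, then scale and round."""
    codes = []
    for x in samples:
        x = max(-1.0, min(1.0, x))           # clamp to full scale
        codes.append(int(round(x * 32767)))  # map to signed 16-bit range
    return codes
```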
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, or to transmit data between the electronic device 100 and a peripheral device. It may also be used to connect earphones and play audio through them, or to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 141 may be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including wireless local area networks (WLANs) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves through the antenna 2 for radiation.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include Global System for Mobile Communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, and the like. The GNSS may include the Global Positioning System (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite-based augmentation systems (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process digital image signals as well as other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in a variety of encoding formats, for example, moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The NPU is a neural-network (NN) computing processor, which processes input information quickly by referring to a biological neural network structure, for example, by referring to a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function and an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 performs various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The electronic device 100 can play music or carry a hands-free call through the speaker 170A.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking close to it. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of various types, such as a resistive pressure sensor, an inductive pressure sensor, or a capacitive pressure sensor. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations applied to the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
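The intensity-to-instruction dispatch described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the threshold value, intensity scale, and action names are all assumptions made for demonstration.

```python
# Hypothetical sketch of pressure-threshold dispatch: the same touch position
# maps to different instructions depending on touch intensity.
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized touch intensity

def dispatch_message_icon_touch(intensity: float) -> str:
    """Map a touch on the short message icon to an instruction by intensity."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_message"        # light press: view the short message
    return "create_new_message"      # firm press: create a new short message
```

With this sketch, a light tap (intensity 0.2) yields the viewing instruction, while a firm press (intensity 0.8) yields the new-message instruction.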
The gyroscope sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyroscope sensor 180B. The gyroscope sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyroscope sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to that angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion, thereby implementing anti-shake. The gyroscope sensor 180B may also be used in navigation and motion-sensing gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates altitude from the barometric pressure value measured by the air pressure sensor 180C, to assist in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flip opening can then be set according to the detected open or closed state of the holster or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor may also be used to identify the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a photographing scenario, the electronic device 100 may use the distance sensor 180F to measure distance to implement fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the electronic device 100 can determine that there is an object nearby; when insufficient reflected light is detected, it can determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint-based photographing, fingerprint-based call answering, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold, to prevent a low temperature from causing the electronic device 100 to shut down abnormally. In still other embodiments, when the temperature is lower than a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
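The layered temperature strategy above can be summarized in a small sketch. The specific threshold values and action names below are assumptions for illustration only; the patent does not specify them.

```python
# Illustrative sketch of the temperature processing strategy: each threshold
# triggers a different protective action. All numeric thresholds are assumed.
OVERHEAT_THRESHOLD = 45.0    # degrees C: throttle the nearby processor
HEAT_BATTERY_BELOW = 0.0     # degrees C: heat the battery 142
BOOST_VOLTAGE_BELOW = -10.0  # degrees C: boost the battery output voltage

def thermal_actions(temp_c: float) -> list:
    """Return the list of protective actions for a reported temperature."""
    actions = []
    if temp_c > OVERHEAT_THRESHOLD:
        actions.append("throttle_processor")
    if temp_c < HEAT_BATTERY_BELOW:
        actions.append("heat_battery")
    if temp_c < BOOST_VOLTAGE_BELOW:
        actions.append("boost_battery_voltage")
    return actions
```

For instance, at an assumed 50 °C only throttling applies, while at -20 °C both battery heating and voltage boosting apply.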
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of a bone vibrated by the human vocal part. The bone conduction sensor 180M may also be in contact with the human pulse to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal, acquired by the bone conduction sensor 180M, of the bone vibrated by the vocal part, so as to implement a voice function. The application processor may parse heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input and generate a key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, and game) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into the SIM card interface 195 or pulled out of it to connect to or disconnect from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano-SIM card, a Micro-SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and so on. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The Android runtime comprises a core library and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
The core library comprises two parts: one part is a function which needs to be called by java language, and the other part is a core library of android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing the functions of object life cycle management, stack management, thread management, safety and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording in a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary workflow of the software and hardware of the electronic device 100 in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including touch coordinates, a timestamp of the touch operation, and the like). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking a touch click operation whose corresponding control is the camera application icon as an example: an interface of the application framework layer is called to start the camera application, the camera driver is then started by calling the kernel layer, and a still image or video is captured through the camera 193.
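As a rough illustration of the raw-input-event dispatch just described, the sketch below wraps touch coordinates and a timestamp into a raw event and resolves it to a control by hit-testing. All names, the event structure, and the hit-testing scheme are hypothetical simplifications, not the Android implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RawInputEvent:
    """Simplified raw input event: touch coordinates plus a timestamp."""
    x: int
    y: int
    timestamp_ms: int

# Hypothetical control layout: control name -> (x0, y0, x1, y1) screen region.
CONTROL_REGIONS = {"camera_icon": (0, 0, 100, 100)}

def resolve_control(event: RawInputEvent) -> Optional[str]:
    """Framework-layer step: map a raw event to the control it landed on."""
    for name, (x0, y0, x1, y1) in CONTROL_REGIONS.items():
        if x0 <= event.x < x1 and y0 <= event.y < y1:
            return name
    return None
```

A touch at (50, 50) resolves to the camera icon, after which the corresponding application would be started; a touch outside any registered region resolves to no control.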
The embodiment of the application provides a multi-terminal multimedia data communication method and system, which can realize low-delay synchronous multi-point transmission of multimedia data streams among a plurality of terminal devices over wireless connections (such as Wi-Fi, Bluetooth, and a mobile communication system). The type of mobile terminal device in the embodiment of the present application is not limited, and may be an electronic device such as a mobile phone, a portable computer, a personal digital assistant (PDA), a tablet computer, a smart television, a smart sound box, a PC, a smart home device, a wireless terminal device, or a communication device. The multi-terminal multimedia data communication method in the embodiment of the application can be used in application scenarios such as multi-machine playing, multi-machine karaoke, multi-machine calling, and multi-machine recording. Multimedia applications based on the multi-terminal multimedia data communication method, such as multi-machine playing, multi-machine karaoke, multi-machine calling, and multi-machine recording, include two modes: local application and remote application. The embodiment of the application has no special requirement on the device type or the number of networked devices, and can be supported by any intelligent device with WiFi networking capability.
Illustratively, fig. 3 shows a schematic diagram of a local application of multi-terminal multimedia data communication. As shown in fig. 3, three slaves are connected to one master to implement multimedia sharing. The three slaves in this embodiment are only illustrative; in actual use, four or more slaves may be connected to the master. The slaves can collect voice information of their respective users; for example, microphone slave 1 can collect the voice information of user 1, microphone slave 2 that of user 2, and microphone slave 3 that of user 3. To implement multimedia data sharing, the master and slaves first need to be networked. In the local application, the master's interface can display a two-dimensional code that provides the connection parameters, or the master can provide the connection parameters by enabling NFC or another short-range communication function (such as WiFi or Bluetooth) to initiate networking. The two-dimensional code may carry the WiFi name, the WiFi password, and the address and port number of the master electronic device. A slave acquires the connection parameters by scanning the two-dimensional code, through an NFC touch, or through another short-range communication mode, joins the network, and establishes a connection.
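The connection parameters said to be carried by the two-dimensional code (WiFi name, WiFi password, master address, port number) could be packed and unpacked as in the sketch below. The JSON encoding and the field names are assumptions for illustration; the patent does not specify the payload format.

```python
import json

# Hypothetical payload format for the networking two-dimensional code.
def build_qr_payload(ssid: str, password: str, host_ip: str, port: int) -> str:
    """Master side: serialize the connection parameters for the QR code."""
    return json.dumps({"ssid": ssid, "pwd": password, "ip": host_ip, "port": port})

def parse_qr_payload(payload: str) -> dict:
    """Slave side: recover the parameters after scanning the QR code."""
    # The slave would then join the WiFi network and connect to ip:port.
    return json.loads(payload)
```

For example, a payload built with an assumed master address 192.168.1.10 and port 8000 parses back to the same address and port on the slave side.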
The smart phone, the smart television, and the smart sound box can all serve as the host. In the collaborative karaoke application, the host downloads the karaoke accompaniment and video picture from the cloud, mixes the human voices sent by the slaves with the accompaniment, eliminates howling, and plays the result. In a multi-person call or multi-person conference application, the host can download background music from the cloud, mix the voice sent by each slave with the background music, eliminate howling, and play the result.
Mobile terminals such as smart phones, tablets, smart watches, and wearable devices can all serve as slaves. In the collaborative karaoke application, a slave can be used as a microphone to pick up the user's singing or speaking voice and send it to the host through a short-range communication mode such as WiFi. The slave also receives a downstream data stream from the host, so as to display the karaoke pictures and lyrics. In a possible implementation, the slave can display the karaoke pictures and lyrics, and can also play the audio sent by the host in which the voice is mixed with the accompaniment; this function is turned on by default when an earphone is connected to the slave.
The master computer and the slave computer can be connected with various peripheral devices. Such as a speaker (wired or bluetooth) or a public address device for amplifying sound, or an earphone.
Illustratively, fig. 4 shows a schematic diagram of a remote application of multi-terminal multimedia data communication. As shown in fig. 4, four slaves are connected to a master to implement multimedia sharing, where the master, user 1, and user 2 are at location A, user 3 is at location B, and user 4 is at location C; the master and the slaves may thus be located in the same or different regions. To implement multimedia file sharing, the master and slaves first need to be networked. A local slave can connect to the master via WiFi, Bluetooth, or NFC: the master can display a two-dimensional code that provides the connection parameters, or provide them by enabling a short-range communication function such as NFC to initiate networking, and the slave acquires the networking information by scanning the two-dimensional code or through another short-range communication mode such as NFC, and joins the network. A remote slave can connect to the master through WiFi or a data communication network (2G/3G/4G/5G). The master can send the two-dimensional code or the networking information to the remote slave, where the two-dimensional code or networking information carries the IP address and port number of the master. The remote slave joins the network by identifying the two-dimensional code or parsing the networking information, and is thereby connected to the master.
The smart phone, the smart television and the smart sound box can be used as hosts. The host downloads the K song accompaniment and the video picture or the background sound from the cloud, mixes the human voice sent by each slave computer with the accompaniment or the background sound, eliminates the howling and plays the voice locally. Meanwhile, the host sends the audio stream and the video stream after sound mixing to the local slave machine through short-distance communication modes such as WiFi. And the host sends the audio stream and the video stream after the audio mixing to the remote slave machine through the WiFi or the data communication network.
The mobile device is used as a slave, and can be used as a microphone to pick up voice information of a user and send the voice information to the host. On the other hand, the slave machine also receives a downstream data stream from the host machine, such as pictures, lyrics of K songs or accompaniment when a conference speaks.
The local microphone slave machine can be used as a microphone, picks up singing voice or speaking voice of a user and sends the singing voice or the speaking voice to the host machine through short-distance communication modes such as WiFi. On the other hand, a downstream data stream from the host computer is received, so that the pictures and lyrics of the K songs are displayed.
The remote microphone slave can serve as a microphone, picking up the user's voice information and sending it to the master through a data communication network. It also receives the downstream data stream from the master, so as to play audio containing the accompaniment and the voice information of the local user and other users, display the karaoke pictures and lyrics, or let the user hear the voices of other users during a call or conference.
The master computer and the slave computer can be connected with various peripheral devices. Such as a speaker (wired or bluetooth) or a public address device for amplifying sound, or an earphone.
In order to ensure a good user experience, the master needs to keep the audio and video playing of the master and each slave synchronized, and to do so the master determines the initial playing time of each slave by determining the clock deviation between the master and that slave. For example: the master determines, through a first clock deviation between the master and the slave 1, a first starting playing time at which the slave 1 plays the multimedia data, where the first starting playing time represents the time at which the slave 1 starts playing the multimedia data. The master determines, through a second clock deviation between the master and the slave 2, a second starting playing time at which the slave 2 plays the multimedia data, where the second starting playing time represents the time at which the slave 2 starts playing the multimedia data. In this way, the master can adjust the starting playing times of the slave 1 and the slave 2 according to the time at which the master starts playing the multimedia data, the first clock deviation, and the second clock deviation, so that the master, the slave 1, and the slave 2 play the multimedia data synchronously.
It should be noted that the clock deviation may instead be determined by the slave. In this case, the master sends the time at which it starts playing the multimedia data to the slave 1 and the slave 2. The slave 1 determines the first clock deviation between the master and the slave 1, and determines its own starting time for playing the multimedia data according to the master's starting time and the first clock deviation. Similarly, the slave 2 determines the second clock deviation between the master and the slave 2, and determines its own starting time for playing the multimedia data according to the master's starting time and the second clock deviation.
In another possible case, the master determines the clock deviations and sends its starting playing time together with the corresponding clock deviation to each slave, and each slave determines its own starting playing time. That is: the master determines the first clock deviation between the master and the slave 1, and the second clock deviation between the master and the slave 2; the master sends its starting time for playing the multimedia data and the first clock deviation to the slave 1, and the slave 1 determines its own starting playing time accordingly; the master sends its starting time for playing the multimedia data and the second clock deviation to the slave 2, and the slave 2 determines its own starting playing time accordingly.
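Whichever side measures the clock deviation, all of the above reduces to the same arithmetic: a slave's local starting playing time is the master's starting time corrected by the deviation between the two clocks. The sketch below illustrates this; the function name and the sign convention for the deviation are assumptions chosen for illustration, not specified by the patent.

```python
def slave_start_time(master_start: float, clock_deviation: float) -> float:
    """Starting playing time in the slave's own clock domain.

    clock_deviation is defined here as (slave clock - master clock); the
    sign convention is an assumption for illustration.
    """
    return master_start + clock_deviation

# Master starts playback at t = 1000.0 in its own clock domain.
# Assume slave 1's clock runs 2.5 units ahead and slave 2's runs 1.0 behind:
t1 = slave_start_time(1000.0, 2.5)   # slave 1 starts at 1002.5 local time
t2 = slave_start_time(1000.0, -1.0)  # slave 2 starts at 999.0 local time
```

Because each slave applies its own measured deviation to the same master timestamp, all three devices begin playing at the same physical instant despite their unsynchronized clocks.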
Next, the software and hardware system structure of the multi-terminal multimedia data communication system provided in the present application is described. Applications such as multi-machine playing, multi-machine karaoke, multi-machine calling, and multi-machine recording can be implemented through this software and hardware system structure. The software and hardware system architecture of the multi-terminal collaborative play application system is unified, so that a given device can serve either as a master to initiate networking or as a slave to join the network; the only difference between acting as a master and acting as a slave is whether particular functional modules are on or off.
Fig. 5 schematically illustrates a software and hardware system architecture 500 of a multi-terminal multimedia data communication system host, by way of example. The host system architecture 500 may include a multi-machine cooperative slave data receiving module 510, a multi-machine cooperative downlink data receiving module 520, an audio decoding module 530, an anti-howling module 540, a sound mixing module 550, a sound effect processing module 551, a cooperative audio output control module 560, a multi-machine cooperative data sending module 570, a Bluetooth protocol stack 571, a USB interface 572, a recording algorithm module 573, a codec 574, a display interface 575, a display 580, a cooperative APP 581, a Modem interface 590, a WiFi protocol stack/chip 591, a Bluetooth chip/antenna 592, a Type-C digital interface 593, a 3.5 mm headphone/Type-C analog interface 594, a headphone or speaker 595, and a MIC array 596. The cooperative audio output control module 560 may include sample rate and bit width conversion 560A, volume control 560B, channel selection 560C, channel combination 560D, and the like. The cooperative APP 581 may include multi-machine playing 581A, multi-machine karaoke 581B, multi-machine calling 581C, multi-machine recording 581D, and the like.
It is understood that the exemplary architecture in the embodiments of the present application does not constitute a specific limitation on the software and hardware system architecture 500 of the multi-terminal multimedia data communication system host. In other embodiments of the present application, the architecture 500 may include more or fewer modules or components than those shown. The illustrated components may be implemented in hardware, software, or a combination of the two.
Specifically, the functions among the modules are as follows:
The multi-machine cooperative slave data receiving module 510 is configured to receive, with extremely low delay, the uplink data sent by each slave.
The multi-machine cooperative downlink data receiving module 520 is configured to receive the karaoke accompaniment and picture/video from the cloud with extremely low latency, and to read the karaoke accompaniment and picture/video from local storage.
The audio decoding module 530 is used to decode the audio data stream. If the stream uses a channel-based coding format, it outputs the PCM stream of each channel; if it uses an audio-object coding format, it outputs the audio object information together with the channel PCM streams.
The anti-howling module 540 is configured to eliminate the echo mixed into the master MIC and the slave MICs and thereby avoid howling; the anti-howling reference signal is provided by the audio decoding module.
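The patent does not specify which cancellation algorithm the anti-howling module uses; a common realization is an adaptive echo canceller such as NLMS, which uses the decoder's accompaniment output as the reference signal. The sketch below is illustrative only, and all names in it are hypothetical.

```python
import numpy as np

def nlms_echo_cancel(mic, reference, taps=128, mu=0.5, eps=1e-8):
    """Remove the accompaniment (reference) that leaked into the MIC signal.

    mic       -- microphone samples (voice + leaked accompaniment)
    reference -- accompaniment samples provided by the audio decoder
    Returns the residual, ideally the clean human voice.
    """
    w = np.zeros(taps)                        # adaptive filter coefficients
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = reference[n - taps:n][::-1]       # most recent reference samples
        echo_est = w @ x                      # estimated leaked accompaniment
        e = mic[n] - echo_est                 # error = voice + residual echo
        w += mu * e * x / (x @ x + eps)       # NLMS coefficient update
        out[n] = e
    return out
```

In a real pipeline this would run per frame with a double-talk detector, but the core idea — subtract an adaptively estimated copy of the reference from the MIC signal — is the same.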
The mixing module 550 is used for mixing the uplink human voice with the accompaniment.
The sound effect processing module 551 is used for rendering effects such as reverberation and 3D surround.
It should be noted that the channel PCM stream processed by the mixing module 550 may be transmitted directly to the cooperative audio output control module 560; alternatively, the user may choose to apply further sound effect processing 551 to the mixed stream, so that the rendered channel PCM stream is transmitted to the cooperative audio output control module.
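As a rough illustration of what the mixing module 550 does, the sketch below sums one or more uplink voice streams with the accompaniment and clips the result to the valid PCM range. The gain values and function name are hypothetical, not taken from the patent.

```python
import numpy as np

def mix_voice_with_accompaniment(voices, accompaniment,
                                 voice_gain=1.0, acc_gain=0.8):
    """Mix uplink voice PCM streams with the accompaniment stream.

    All inputs are float arrays scaled to [-1.0, 1.0]; the output is
    clipped to the same range to avoid wrap-around distortion.
    """
    mixed = acc_gain * accompaniment.astype(np.float64)
    for v in voices:
        mixed = mixed + voice_gain * v        # add each singer's voice
    return np.clip(mixed, -1.0, 1.0)          # hard limit to PCM range
```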
The cooperative audio output control module 560 has a plurality of inputs for receiving output from modules such as the multi-machine cooperative downlink data receiving module 520, the mixing module 550, and the sound effect processing module 551. It also contains the following low-delay sub-functional modules:
sample rate and bit width conversion 560A: converting the sampling rate to ensure that the sampling rate and the bit width meet the output requirement;
volume control 560B: volume for controlling output;
channel selection 560C: for selecting which channel to output;
channel merging 560D: combines multiple channels into a single channel for playback in order to increase the volume.
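The volume control, channel selection, and channel merging sub-functions (560B–560D) can be sketched on a (channels × samples) PCM block as follows. This is a simplified illustration — sample rate and bit width conversion 560A is omitted — and the function name is hypothetical.

```python
import numpy as np

def output_control(pcm, volume=1.0, select=None, merge=False):
    """Apply the 560B/560C/560D sub-functions to a (channels, samples) block.

    volume -- linear output gain (560B)
    select -- index of the single channel to output, or None (560C)
    merge  -- sum all channels into one to increase volume (560D)
    """
    out = pcm * volume                        # 560B: volume control
    if select is not None:
        out = out[select:select + 1]          # 560C: channel selection
    if merge:
        out = out.sum(axis=0, keepdims=True)  # 560D: channel merging
    return np.clip(out, -1.0, 1.0)
```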
The cooperative audio output control module 560 can also output audio streams to the multi-machine cooperative data sending module 570, the Bluetooth protocol stack 571, the USB interface 572, the recording algorithm 573, and the Codec 574. In addition, to keep media images and sound synchronized, the cooperative audio output control module 560 transmits a delay value to the display interface 575, so that accompaniment and lyrics can be synchronized by compensating for that delay.
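The delay compensation described above can be illustrated with a toy lyric lookup: the display subtracts the reported audio-path delay from its media clock before choosing which lyric line to highlight. The data layout and names below are hypothetical.

```python
def current_lyric(lyrics, media_time_ms, audio_delay_ms):
    """Pick the lyric line that matches what the listener hears right now.

    lyrics         -- list of (start_ms, text) tuples sorted by start_ms
    media_time_ms  -- the display's media clock
    audio_delay_ms -- delay value reported by the audio output control
    """
    # The audio heard now entered the pipeline audio_delay_ms ago,
    # so the display must lag its clock by the same amount.
    effective = media_time_ms - audio_delay_ms
    line = None
    for start, text in lyrics:
        if start <= effective:
            line = text                      # latest line already started
        else:
            break
    return line
```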
The cooperative APP 581 is used to display the cooperative functions on the screen and receive user input. Through the operation interface of the cooperative APP, the user controls the multi-machine cooperative data sending module 570 and the cooperative audio output control module 560.
The multi-machine cooperative data sending module 570 interfaces with the Modem interface 590 and the WiFi protocol stack/chip 591, and can send the audio and video streams to each slave with extremely low delay.
In other embodiments, the USB interface 572 may interface with the Bluetooth chip/antenna 592 through the Bluetooth protocol stack 571, or with the TypeC digital interface 593, to send the audio and video streams to the slaves.
The Codec 574 interfaces with the earphone or loudspeaker 595 and the 3.5mm headphone jack/TypeC analog interface 594, and is used for playing the PCM stream produced by the mixing module 550 or the sound effect processing 551. In some embodiments, a host such as a smartphone may also serve as a microphone to receive the user's singing; in that case the Codec 574 interfaces with the MIC array 596 to receive the input captured by the host microphone.
Fig. 6 is a schematic diagram illustrating an exemplary software and hardware system architecture 600 of a multi-terminal multimedia data communication system slave.
The multi-machine cooperative host data receiving module is used to receive, with extremely low delay, the media data stream sent by the host.
The slave MIC 610 is connected to the Codec 620 (or another interface chip); its data passes through the cooperative audio output control into the multi-machine cooperative data sending module, which sends it to the host.
For ease of understanding, the data interaction between the host and the slaves of the multi-terminal multimedia data communication system provided in this embodiment is described taking cooperative karaoke as an example. The host may be a smart television or a smartphone, the slaves are smartphones, both the host and the slaves may be connected to a peripheral speaker for amplification, and a slave may also be connected to an earphone. Cooperative karaoke can therefore be divided into four modes: a TV as host with a plurality of local slaves; a TV as host with a plurality of local and remote slaves; a smartphone as host and microphone with a plurality of local slaves; and a smartphone as host and microphone with a plurality of local and remote slaves.
For example, fig. 7 shows a data interaction flow between a master and a slave when a smart Television (TV) is used as the master and a plurality of smart phones are used as the slaves.
Step 701: the slave machine selects songs, and transmits song information to the host machine through short-distance communication modes such as WiFi or Bluetooth.
Step 702: the multi-machine collaborative downlink data receiving module receives K song accompaniment and picture video from the cloud end with extremely low time delay, reads the K song accompaniment and the picture video in the local storage, sends an audio data stream to the audio decoding module, sends an audio data stream capable of being directly played to the collaborative audio output control module, and sends a video screen data stream to the display interface.
Step 703: the host sends the K song pictures and the lyrics to the slave through the multi-computer cooperative data sending module, and the slave acquires the lyrics, the video and other information of the song.
Step 704: the audio decoding module decodes the audio data stream, and the display interface sends the video data stream to the display for display. The accompaniment audio processed by the audio decoding module is used as a howling-resistant reference signal to be sent to the howling-resistant module and simultaneously sent to the sound mixing module and the sound effect processing module.
Step 705: the user sings, the slave MIC acquires the voice information of the user and transmits the singing voice to the multi-computer cooperative slave data receiving module.
Step 706: and the data receiving module of the multi-machine cooperative slave machine sends the MIC data stream of the slave machine voice to the host machine in short-distance communication modes such as WiFi or Bluetooth. Since the voice received from the MIC is mixed with tv accompaniment sounds, a howling prevention module is required to perform howling prevention processing.
Step 707: and performing anti-howling treatment for eliminating echo mixed into the MIC of the slave and avoiding howling.
Step 708: and sound mixing processing, namely obtaining clean human voice after anti-howling processing, and mixing the clean human voice and the accompanying sound through a sound mixing module. And sending the sound channel PCM code stream after sound mixing processing to a cooperative audio output control module.
Optionally, step 709: the user can render the mixed sound channel PCM code stream to generate reverberation, 3D surround listening feeling and the like.
Step 710: the cooperative audio output control module can perform sampling rate and bit width conversion, volume control, sound channel selection or sound channel combination and other processing on the sound channel PCM code stream after sound mixing processing or the sound channel PCM code after sound effect processing, and the processed sound channel PCM code stream is sent to the Codec. In addition, in order to keep the synchronization of the lyrics and the accompaniment music, the 'cooperative audio output control' sends a delay value to the 'display interface' to judge the relative delay of the lyrics and the music and adjust.
Step 711: the singing voice of the user is played through a receiver or a loudspeaker of the host machine.
For another example, fig. 8 shows an interaction flow between the host and the slave when the smart television is used as the host and the mobile device is used as the local slave and the remote slave.
Unlike fig. 7, when a remote slave joins the karaoke:
step 801: the master and the slave can both select songs, taking the local slave selecting songs as an example.
Step 802: the host WiFi or the data communication network (2G/3G/4G/5G) sends the K song pictures and the lyrics to the slave computers at different places, and the slave computers at different places acquire the lyrics, the video and other information of the songs.
Step 803: and the multi-machine cooperation slave data receiving module sends the MIC data stream of the remote slave voice to the host through WiFi or a data communication network (4G/5G) to perform howling resistance treatment.
Step 804: and the multi-machine cooperative data sending module of the host machine sends the processed PCM code stream to the different-place slave machines.
Step 805: the remote slave telephone receiver or the loudspeaker plays the accompaniment sound transmitted from the host computer, and plays the singing sound of the local user and the remote user.
For example, fig. 9 shows the interaction flow between the master and the slaves when a mobile device serves as both the master and a microphone and a plurality of mobile devices serve as local slaves.
The difference from fig. 7 is that, in this case, a handset serves as both the host and a microphone.
step 901: the host MIC receives voice information of the user.
Step 902: and sending the host MIC data stream to the howling resistant processing module through the host Codec.
Step 903: because the sound of the loudspeaker of the mobile phone is small, when the mobile phone is used as a host, the mobile phone can be connected with external equipment such as an external sound box, and the like, so that the sound of the host is enlarged and the KTV effect is created.
For example, as shown in fig. 10, the mobile device serves as both a master and a microphone, and when a plurality of mobile devices serve as a local slave and a remote slave, an interaction flow between the master and the slave can be obtained by combining fig. 8, 9 and 10.
Next, the UI operation interface and operation flow of the multi-machine karaoke local application are described. As an example, a smart television serves as the master and a plurality of smartphones serve as slaves. It should be noted that there may be any number of slaves; this embodiment uses two slaves as an example.
Fig. 11 shows a local application system for multiple songs, which includes a master smart tv, a microphone slave 1 and a microphone slave 2, where the two slaves are both smart phones, and the master and the two slaves are in the same lan.
Fig. 12A shows a Graphical User Interface (GUI) on the television side, which configures the interface 1201 for a network of a television. Fig. 12B shows a handset side graphical user interface that is a wireless lan configuration interface 1202 for a handset. As shown in fig. 12A and 12B, at this time, the master tv and the two local mobile phones and slaves are both connected to the same lan.
Illustratively, fig. 13 shows a GUI of the television, which may be referred to as the television desktop 1301. When the television detects that the user operates the remote controller to select the icon 1302 of the karaoke Application (APP) on the television desktop 1301, the television starts the karaoke application and displays another GUI, shown in fig. 14, which may be referred to as the recommendation interface 1401. The recommendation interface 1401 may include a jukebox control 1402 for entering the song-ordering interface. When the television detects that the user clicks the jukebox control 1402 with the remote controller, a GUI shown in fig. 15, which may be referred to as the jukebox interface 1501, is displayed. The jukebox interface 1501 can include a control 1502 for entering the mobile code-scanning song-ordering interface. When the television detects that the user clicks the control 1502 with the remote controller, another GUI shown in fig. 16 is displayed, which may be referred to as the mobile code-scanning song-ordering interface 1601; it may include a song-ordering two-dimensional code 1602. By scanning and identifying the two-dimensional code 1602 with the mobile karaoke software, songs can be ordered from the phone, and the phone can simultaneously serve as a microphone to receive the user's voice. Several local phones can each join the karaoke by scanning the two-dimensional code 1602, so that the television and the phones together create a KTV effect.
The operation flow for a mobile-phone slave to join the karaoke is briefly described below.
Illustratively, fig. 17 (a) shows a GUI of a mobile phone, namely the phone desktop 1701. At this point the same karaoke software is installed on both the phone and the television. When the phone detects that the user clicks the icon 1702 of the karaoke application on the desktop 1701, the karaoke application starts; fig. 17 (b) shows the start interface 1703 of the karaoke application. After the application opens successfully, another GUI shown in fig. 17 (c) is displayed; it may include a home interface 1704 on which recommendation information is shown. The GUI may also include a menu bar 1705 containing a control 1706 for entering the My interface. When the phone detects that the user clicks the control 1706, another GUI shown in fig. 18 (a), which may be referred to as the My interface 1801, is displayed. This GUI may include a control 1802. When the phone detects that the user clicks the control 1802, another GUI shown in fig. 18 (b) is displayed, which may include a control 1803 for scanning a two-dimensional code. If the phone then detects that the user clicks the control 1803, an interface 1804 for scanning the two-dimensional code may be displayed, as shown in fig. 18 (c); there the scanning interface 1804 is being aimed at the song-ordering two-dimensional code on the television. After the two-dimensional code is successfully identified, the GUI 1901 shown in fig. 19 (a) is displayed. This GUI includes a song-ordering interface 1902 and a menu bar 1903; the menu bar 1903 includes a control 1904 for ordering songs, a control 1905 for the user to sing, and a control 1906 for displaying the songs already ordered, and at this time the control 1904 for ordering songs is highlighted. The song-ordering interface 1902 includes a disconnection control 1908 that the user can click to disconnect the phone from the television. It may further include a control 1909 for ordering a song to sing and a control 1910 for adding a song to the list of ordered songs. The interface 1902 also includes a search bar 1907 in which the user can search for songs by song title or artist name. As shown in fig. 19 (a), when the user searches for a song named "song 123", versions of that song appear under the search bar 1907, and the user may, for example, click the control 1909 to sing the selected song. The host television then caches the song from the cloud or locally; after caching succeeds, the host displays the selected song "song 123" as shown in fig. 19 (b).
At this point the user can click the controller control for singing on the phone to enter the TV controller interface and sing. As shown in fig. 20 (a), when the phone detects that the user clicks the controller control for singing, a GUI shown in fig. 20 (b) is displayed; it includes a microphone control 2001 for singing, a control 2002 for turning on the original vocal, a control 2003 for pausing, a control 2004 for skipping the song, a control 2005 for restarting the song, a control 2006 for more options, and a control 2007 for sending an emoticon bullet screen. The user may open the microphone control 2001 by a click or slide operation. As shown in fig. 20 (c), in this singing GUI the phone's microphone receives the user's singing, which is played through the speaker at the television end.
In another embodiment, after the phone detects that the user clicks the controller control for singing, it enters the TV controller interface and displays the GUI shown in fig. 21; the TV console interface then shows the song's video picture and the lyrics at the same time, which is convenient for the user. The GUI includes a control 2101 for opening and closing the bullet screen. When the control 2101 is on and the user clicks the emoticon bullet screen, the emoticon is displayed on both the television and phone video pictures; when the control 2101 is off, the emoticon is displayed only on the television video picture, not on the phone. Similarly, the GUI may also include a control 2102 for turning on the original vocal, a control 2103 for pausing, a microphone control 2104 for singing, a control 2105 for skipping the song, a control 2106 for restarting the song, a control 2107 for more options, and a control 2108 for sending an emoticon bullet screen.
Illustratively, as shown in fig. 22 (a), when the mobile phone detects that the user clicks the control for sending the expression bullet screen, the expression bullet screen is displayed on the television host, as shown in fig. 22 (b). For example, the expression bullet screen may be slid out from the top, bottom, left and right of the television screen, or faded in and out, which is not limited herein.
Illustratively, as shown in fig. 23 (a), when the phone detects that the user clicks the control for more options, a GUI shown in fig. 23 (b) is displayed; it may include a code-scanning control 2301, a control 2302 for sharing, and a control 2303 for editing the bullet screen. For example, when the phone detects that the user clicks the sharing control 2302, the interface shown in fig. 23 (c) is displayed, and the karaoke audio can be shared to other third-party applications.
The control 2303 for editing the bullet screen can open and close the emoticon bullet screen; it also lets the user edit text and send a text bullet screen.
Next, a UI operation interface and an operation flow of the multi-machine K-song remote application are briefly described. For example, a smart television is taken as a master, and a plurality of smart phones are taken as slaves for example. It should be noted that the number of the slaves may be multiple, and the present embodiment is described by taking two slaves as an example.
Fig. 24 shows a remote application system for multiple songs, which includes a main smart tv, a local microphone slave 1, a remote microphone slave 2, and a remote microphone slave 3, all of which are smart phones. In this case, the master and the local slave 1 are in the same wireless lan, and the remote slave 2 and the remote slave 3 are not in the same location as the master.
Local slaves join the karaoke by scanning the two-dimensional code or through other short-range communication such as NFC, as in the local application system; this embodiment therefore focuses on the flow by which remote slaves join. For example, as shown in fig. 25, the code-scanning song-ordering interface now includes, in addition to the song-ordering two-dimensional code, a sharing control 2501, through which the karaoke networking information can be sent to remote slaves to realize remote karaoke. When the television detects that the user operates the remote controller to select the sharing control, a GUI shown in fig. 26 is displayed, on which the buddy list 2601 of user 1 appears. To show the details of the buddy list 2601, the list is enlarged in fig. 27. As can be seen from fig. 27 (a), the list includes a control 2701 for closing sharing, a control 2702 for searching for friends, a multi-selection control 2703 for selecting several friends, the buddy list of user 1, and a control 2704 for scrolling up and down. Illustratively, when the television detects that the user selects the multi-selection control 2703 with the remote controller, a GUI shown in fig. 27 (b) is displayed, in which a selection control 2705 appears in front of each friend's avatar; the GUI also includes a control 2706 for completing the selection and a control 2707 for canceling the multi-selection. Illustratively, user 1 sends a karaoke invitation to the remote users 2 and 3.
Illustratively, after user 1 sends the karaoke invitation, users 2 and 3 receive the invitation message, shown in fig. 28 (a). After a phone detects that the user clicks the invitation message, the interface shown in fig. 28 (b) is displayed, and the host's code-scanning song-ordering two-dimensional code is sent to the remote slaves. Illustratively, as shown in fig. 29, users 2 and 3 can join the karaoke by long-pressing or otherwise recognizing the two-dimensional code. It should be noted that the karaoke invitation the host shares with the slaves may be a two-dimensional code or a link, and may be sent to the remote user as a message inside the karaoke application or as a short message.
As another example, a mobile phone may also act as the host and initiate the karaoke collaboration. As shown in fig. 30 (a), when the phone detects that the user clicks the icon of the karaoke application on the desktop, the application starts; fig. 30 (b) shows its start interface. After the application opens successfully and the phone detects that the user clicks the song-selection control, the GUI shown in fig. 30 (c) is displayed; it includes a control 3001 for multi-user cooperative karaoke. When the phone detects that the user clicks the control 3001, the GUI shown in fig. 31 is displayed and the phone enters the song-ordering interface, whose bottom menu bar contains a control 3101 for code-scanning song ordering. When the phone detects that the user clicks the control 3101, the GUI shown in fig. 31 (b) is displayed; it includes the song-ordering two-dimensional code and a sharing control. Local users can join the karaoke by scanning the two-dimensional code; for remote users, the host can share the two-dimensional code through the sharing control.
With the multi-terminal multimedia data communication method of the embodiments of the present application, low-delay, synchronized, multi-point transmission of multimedia data streams among a plurality of terminal devices can be achieved over wireless connections (such as Wi-Fi, Bluetooth, and mobile communication systems). Multiple terminals can cooperate to form new applications and provide new experiences; devices can conveniently interconnect and share multimedia content, making full use of each device's strengths. The method can be used in application scenarios such as multi-machine playing, multi-machine karaoke, multi-machine conversation, and multi-machine recording. Multimedia applications based on multi-terminal cooperative playing, such as multi-machine playing, multi-machine karaoke, multi-machine conversation, and multi-machine recording, include both local and remote modes.
Fig. 32 is a schematic diagram of a multimedia data communication system of a terminal according to an embodiment of the present application, and referring to fig. 32, the system includes: a master electronic device 11, a first slave electronic device 12, and a second slave electronic device 13, wherein:
the master electronic equipment is used for establishing connection with the first slave electronic equipment and the second slave electronic equipment respectively;
the first slave electronic device is used for receiving a first playing instruction, and the first slave electronic device is also used for sending the first playing instruction to the master electronic device;
the master electronic device is also used for responding to the first playing instruction, playing the first multimedia data, and simultaneously sending at least one part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device and the second slave electronic device synchronously play the first multimedia data;
the first slave electronic equipment is also used for receiving the first voice, and meanwhile, the second slave electronic equipment is used for receiving the second voice;
the master electronic device is further used for receiving the first voice sent by the first slave electronic device and receiving the second voice sent by the second slave electronic device, and the master electronic device is further used for mixing the first voice, the second voice and the first multimedia data to generate and play second multimedia data.
The present application also provides a storage medium comprising: a readable storage medium and a computer program for implementing the multi-terminal multimedia data communication method provided by any of the foregoing embodiments.
All or part of the steps of the method embodiments above may be performed by hardware driven by program instructions. The program may be stored in a readable memory; when executed, it performs the steps of the method embodiments above. The aforementioned memory (storage medium) includes: read-only memory (ROM), RAM, flash memory, hard disk, solid state disk, magnetic tape, floppy disk, optical disk, and any combination thereof.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A multi-terminal multimedia data communication method applied to a plurality of electronic devices including a master electronic device, a first slave electronic device and a second slave electronic device, the method comprising:
the master electronic device is connected with the first slave electronic device and the second slave electronic device respectively;
the first slave electronic equipment receives a first playing instruction, and the first slave electronic equipment sends the first playing instruction to the master electronic equipment;
the master electronic device responds to the first playing instruction, plays first multimedia data, and simultaneously sends at least one part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device and the second slave electronic device synchronously play the first multimedia data;
the first slave electronic equipment receives a first voice of a person, and meanwhile, the second slave electronic equipment receives a second voice of the person;
the master electronic device receives the first voice sent by the first slave electronic device and the second voice sent by the second slave electronic device, and the master electronic device mixes the first voice, the second voice and the first multimedia data to generate and play second multimedia data;
the method for synchronously playing the first multimedia data by the master electronic device, the first slave electronic device and the second slave electronic device specifically includes:
the master electronic device determining a first clock offset between the master electronic device and the first slave electronic device;
the master electronic device determining a second clock offset between the master electronic device and the second slave electronic device;
the master electronic device determines, according to the first clock offset, a first starting playing time at which the first slave electronic device plays the first multimedia data; the first starting playing time is used for representing the starting time of the first slave electronic device for playing the first multimedia data;
the master electronic device determines, according to the second clock offset, a second starting playing time at which the second slave electronic device plays the first multimedia data; the second starting playing time is used for representing the starting time of the second slave electronic device for playing the first multimedia data;
alternatively,
the master electronic device sends the starting time of the master electronic device for playing the first multimedia data to the first slave electronic device and the second slave electronic device; the first slave electronic device determines a first clock offset between the master electronic device and the first slave electronic device, and determines a starting time of the first slave electronic device for playing the first multimedia data according to the starting time of the master electronic device for playing the first multimedia data and the first clock offset; the second slave electronic device determines a second clock offset between the master electronic device and the second slave electronic device, and determines a starting time of the second slave electronic device for playing the first multimedia data according to the starting time of the master electronic device for playing the first multimedia data and the second clock offset;
alternatively,
the master electronic device determining a first clock offset between the master electronic device and the first slave electronic device; the master electronic device determining a second clock offset between the master electronic device and the second slave electronic device; the master electronic device sends the starting time of the master electronic device for playing the first multimedia data and the first clock offset to the first slave electronic device, and the first slave electronic device determines the starting time of the first slave electronic device for playing the first multimedia data according to the starting time of the master electronic device for playing the first multimedia data and the first clock offset; the master electronic device sends the starting time of the master electronic device for playing the first multimedia data and the second clock offset to the second slave electronic device, and the second slave electronic device determines the starting time of the second slave electronic device for playing the first multimedia data according to the starting time of the master electronic device for playing the first multimedia data and the second clock offset;
after the master electronic device mixes the first voice, the second voice and the first multimedia data to generate and play the second multimedia data, the method further comprises:
the master electronic device sends the second multimedia data to the second slave electronic device;
the master electronic device and the second slave electronic device synchronously play the second multimedia data;
the master electronic device establishing connections with the first slave electronic device and the second slave electronic device respectively comprises:
the master electronic device and the first slave electronic device are located in the same wireless local area network, the master electronic device displays a WiFi connection identifier, and the first slave electronic device establishes a connection with the master electronic device by identifying the WiFi connection identifier;
the master electronic device and the second slave electronic device are not in the same wireless local area network, the master electronic device sends networking information to the second slave electronic device, and the second slave electronic device establishes a connection with the master electronic device by parsing the networking information.
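The clock-offset synchronization recited above can be illustrated with an NTP-style timestamp exchange. This is a minimal sketch, not part of the claims: the four-timestamp probe, the offset formula, and all function names are assumptions introduced here.

```python
def clock_offset(t1, t2, t3, t4):
    # NTP-style estimate of (slave clock - master clock).
    # t1: master sends probe (master clock), t2: slave receives it (slave clock),
    # t3: slave sends reply (slave clock), t4: master receives reply (master clock).
    return ((t2 - t1) + (t3 - t4)) / 2.0

def round_trip_delay(t1, t2, t3, t4):
    # Network round-trip time, excluding the slave's processing time.
    return (t4 - t1) - (t3 - t2)

def slave_start_time(master_start, offset):
    # Map the master's chosen start-playing time into the slave's clock
    # domain, so both devices begin playback at the same real instant.
    return master_start + offset

# Example: the slave clock runs 50 ms ahead of the master, one-way delay 10 ms.
t1 = 1000.000   # master clock at probe send
t2 = 1000.060   # slave clock at probe receipt (10 ms delay + 50 ms skew)
t3 = 1000.061   # slave clock at reply send
t4 = 1000.021   # master clock at reply receipt

offset = clock_offset(t1, t2, t3, t4)        # ~0.050 s
start = slave_start_time(1000.500, offset)   # ~1000.550 on the slave clock
```

Either side can run this computation, which mirrors the claim's alternatives: the master may compute the offset and push a slave-local start time, or push its own start time together with the offset and let each slave perform the addition.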
2. The method of claim 1, wherein at least a portion of the first multimedia data comprises audio, video, or lyrics of the first multimedia data.
3. The method of claim 1, further comprising:
the master electronic device receives a second playing instruction input by a user;
the master electronic device responds to the second playing instruction, plays third multimedia data, and simultaneously sends at least one part of the third multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device and the second slave electronic device synchronously play the third multimedia data;
the first slave electronic device receives a first voice of a person, and meanwhile the second slave electronic device receives a second voice of a person;
the master electronic device receives the first voice sent by the first slave electronic device and receives the second voice sent by the second slave electronic device, and the master electronic device mixes the first voice, the second voice and the third multimedia data to generate and play fourth multimedia data.
4. The method of claim 3, further comprising:
the first slave electronic device receives a first voice, the second slave electronic device receives a second voice, and the master electronic device receives a third voice;
the master electronic device receives the first voice sent by the first slave electronic device and receives the second voice sent by the second slave electronic device, and the master electronic device mixes the first voice, the second voice, the third voice and the fourth multimedia data to generate and play a fifth multimedia file.
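The mixing step that produces the new multimedia data in the claims above — combining the received voices with the backing audio — can be sketched over raw 16-bit PCM sample lists. This is a simplified illustration only: equal-length, same-rate tracks and the equal-gain default are assumptions, and `mix_tracks` is a name invented here.

```python
def mix_tracks(tracks, gains=None):
    # Mix equal-length lists of signed 16-bit PCM samples into one track.
    # With no explicit gains, average the tracks to avoid clipping.
    if gains is None:
        gains = [1.0 / len(tracks)] * len(tracks)
    mixed = []
    for samples in zip(*tracks):
        s = sum(g * v for g, v in zip(gains, samples))
        # Clamp the mixed sample to the signed 16-bit range.
        mixed.append(max(-32768, min(32767, int(round(s)))))
    return mixed

backing = [1000, -2000, 3000]   # backing multimedia audio
voice1  = [500, 500, 500]       # first voice (from the first slave device)
voice2  = [-500, 1500, -500]    # second voice (from the second slave device)
mixed = mix_tracks([backing, voice1, voice2])
```

The same routine extends to the three-voice case of claim 4 by simply appending the master's own third voice as a fourth input track.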
5. A multi-terminal multimedia data communication system, the system comprising a master electronic device, a first slave electronic device and a second slave electronic device, wherein:
the master electronic device is used for establishing connection with the first slave electronic device and the second slave electronic device respectively;
the first slave electronic device is used for receiving a first playing instruction, and the first slave electronic device is also used for sending the first playing instruction to the master electronic device;
the master electronic device is further used for responding to the first playing instruction, playing first multimedia data, and simultaneously sending at least one part of the first multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device and the second slave electronic device synchronously play the first multimedia data;
the first slave electronic device is further used for receiving a first voice, and meanwhile the second slave electronic device is used for receiving a second voice;
the master electronic device is further configured to receive the first voice sent by the first slave electronic device and receive the second voice sent by the second slave electronic device, and the master electronic device is further configured to mix the first voice, the second voice and the first multimedia data to generate second multimedia data and play the second multimedia data;
the synchronous playing of the first multimedia data by the master electronic device, the first slave electronic device and the second slave electronic device specifically includes:
the master electronic device is to determine a first clock offset between the master electronic device and the first slave electronic device;
the master electronic device is further to determine a second clock offset between the master electronic device and the second slave electronic device;
the master electronic device is further configured to determine a first starting playing time at which the first slave electronic device plays the first multimedia data according to the first clock offset; the first starting playing time is used for representing the starting time of the first slave electronic equipment for playing the first multimedia data;
the master electronic device is further configured to determine a second starting playing time at which the second slave electronic device plays the first multimedia data according to the second clock offset; the second starting playing time is used for representing the starting time of the second slave electronic equipment for playing the first multimedia data;
alternatively,
the master electronic device is used for sending the starting time of the master electronic device for playing the first multimedia data to the first slave electronic device and the second slave electronic device; the first slave electronic device is used for determining a first clock offset between the master electronic device and the first slave electronic device, and determining a starting time of the first slave electronic device for playing the first multimedia data according to the starting time of the master electronic device for playing the first multimedia data and the first clock offset; the second slave electronic device is used for determining a second clock deviation between the master electronic device and the second slave electronic device, and determining a starting time of the second slave electronic device for playing the first multimedia data according to the starting time of the master electronic device for playing the first multimedia data and the second clock deviation;
alternatively,
the master electronic device is to determine a first clock offset between the master electronic device and the first slave electronic device; the master electronic device is further to determine a second clock offset between the master electronic device and the second slave electronic device; the master electronic device is further configured to send a start time of the master electronic device playing the first multimedia data and the first clock offset to the first slave electronic device, and the first slave electronic device is configured to determine a start time of the first slave electronic device playing the first multimedia data according to the start time of the master electronic device playing the first multimedia data and the first clock offset; the master electronic device is further configured to send a start time of the master electronic device playing the first multimedia data and the second clock offset to the second slave electronic device, and the second slave electronic device is configured to determine a start time of the second slave electronic device playing the first multimedia data according to the start time of the master electronic device playing the first multimedia data and the second clock offset;
after the master electronic device mixes the first voice, the second voice and the first multimedia data to generate second multimedia data and plays the second multimedia data, the system further includes:
the master electronic device is further configured to send the second multimedia data to the second slave electronic device;
the master electronic device and the second slave electronic device synchronously play the second multimedia data;
the master electronic device being configured to establish connections with the first slave electronic device and the second slave electronic device respectively comprises:
the master electronic device and the first slave electronic device are located in the same wireless local area network, the master electronic device is used for displaying a WiFi connection identifier, and the first slave electronic device is used for establishing a connection with the master electronic device by identifying the WiFi connection identifier;
the master electronic device and the second slave electronic device are not in the same wireless local area network, the master electronic device is used for sending networking information to the second slave electronic device, and the second slave electronic device is used for establishing a connection with the master electronic device by parsing the networking information.
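For the out-of-network path described above, the claims do not specify the format of the "networking information" the master sends and the second slave parses; a plausible sketch is a small serialized payload carrying the master's WLAN credentials and reachable address. All field names, the JSON encoding, and the sample values below are assumptions for illustration.

```python
import json

def make_networking_info(ssid, password, host, port):
    # Hypothetical payload a master could transmit to a slave outside its
    # WLAN (or embed in a scannable WiFi connection identifier).
    return json.dumps({"ssid": ssid, "pwd": password, "host": host, "port": port})

def parse_networking_info(payload):
    # The slave parses the payload to learn how to reach the master.
    info = json.loads(payload)
    return info["host"], info["port"]

payload = make_networking_info("LivingRoomAP", "secret", "192.168.1.10", 9000)
host, port = parse_networking_info(payload)   # ("192.168.1.10", 9000)
```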
6. The system of claim 5, wherein the at least a portion of the first multimedia data comprises audio, video, or lyrics of the first multimedia data.
7. The system of claim 5, further comprising:
the master electronic device is used for receiving a second playing instruction input by a user;
the master electronic device is further used for responding to the second playing instruction, playing third multimedia data, and simultaneously sending at least one part of the third multimedia data to the first slave electronic device and the second slave electronic device, so that the master electronic device, the first slave electronic device and the second slave electronic device synchronously play the third multimedia data;
the first slave electronic device is used for receiving a first voice, and meanwhile the second slave electronic device is used for receiving a second voice;
the master electronic device is further configured to receive the first voice sent by the first slave electronic device and receive the second voice sent by the second slave electronic device, and the master electronic device is further configured to mix the first voice, the second voice and the third multimedia data to generate fourth multimedia data and play the fourth multimedia data.
8. The system of claim 7, further comprising:
the first slave electronic device is further used for receiving the first voice, the second slave electronic device is further used for receiving the second voice, and meanwhile the master electronic device is further used for receiving a third voice;
the master electronic device is further configured to receive the first voice sent by the first slave electronic device and the second voice sent by the second slave electronic device, and the master electronic device is further configured to mix the first voice, the second voice, the third voice and the fourth multimedia data to generate a fifth multimedia file and play the fifth multimedia file.
9. A computer-readable storage medium, characterized in that it stores computer instructions which, when run on a computer, cause the computer to perform the method of multi-terminal multimedia data communication according to any of claims 1-4.
CN201910533498.0A 2019-06-19 2019-06-19 Multi-terminal multimedia data communication method and system Active CN112118062B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201910533498.0A CN112118062B (en) 2019-06-19 2019-06-19 Multi-terminal multimedia data communication method and system
JP2021566505A JP7416519B2 (en) 2019-06-19 2020-06-18 Multi-terminal multimedia data communication method and system
PCT/CN2020/096679 WO2020253754A1 (en) 2019-06-19 2020-06-18 Multi-terminal multimedia data communication method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910533498.0A CN112118062B (en) 2019-06-19 2019-06-19 Multi-terminal multimedia data communication method and system

Publications (2)

Publication Number Publication Date
CN112118062A CN112118062A (en) 2020-12-22
CN112118062B true CN112118062B (en) 2022-12-30

Family

ID=73795677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910533498.0A Active CN112118062B (en) 2019-06-19 2019-06-19 Multi-terminal multimedia data communication method and system

Country Status (3)

Country Link
JP (1) JP7416519B2 (en)
CN (1) CN112118062B (en)
WO (1) WO2020253754A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11250825B2 (en) * 2018-05-21 2022-02-15 Smule, Inc. Audiovisual collaboration system and method with seed/join mechanic
US11120782B1 (en) * 2020-04-20 2021-09-14 Mixed In Key Llc System, method, and non-transitory computer-readable storage medium for collaborating on a musical composition over a communication network
CN112927666B (en) * 2021-01-26 2023-11-28 北京达佳互联信息技术有限公司 Audio processing method, device, electronic equipment and storage medium
CN117041302B (en) * 2023-10-08 2024-01-30 深圳墨影科技有限公司 Operation system and method for multi-equipment collaborative planning

Citations (1)

Publication number Priority date Publication date Assignee Title
CN109819306A (en) * 2018-12-29 2019-05-28 华为技术有限公司 Media file clipping method, electronic device and server

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
JP4219234B2 (en) 2003-08-28 2009-02-04 独立行政法人産業技術総合研究所 Karaoke system, music performance communication device, and performance synchronization method
FR2864399B1 (en) * 2003-12-22 2006-05-05 Patrick Cichostepski METHOD FOR DIFFUSION OF SONGS AND SYSTEM FOR THE PRACTICE OF REMOTE KARAOKE, IN PARTICULAR BY TELEPHONE
JP2013231951A (en) 2012-04-06 2013-11-14 Yamaha Corp Acoustic data processing device and acoustic data communication system
CN103198848B (en) * 2013-01-31 2016-02-10 广东欧珀移动通信有限公司 Synchronous broadcast method and system
KR20150118972A (en) * 2013-03-22 2015-10-23 야마하 가부시키가이샤 Audio data processing device and audio data communications system
CN105808710A (en) * 2016-03-05 2016-07-27 上海斐讯数据通信技术有限公司 Remote karaoke terminal, remote karaoke system and remote karaoke method
CN106057222B (en) * 2016-05-20 2020-10-27 联想(北京)有限公司 Multimedia file playing method and electronic equipment
CN107396137B (en) * 2017-07-14 2020-06-30 腾讯音乐娱乐(深圳)有限公司 Online interaction method, device and system
CN107665703A (en) * 2017-09-11 2018-02-06 上海与德科技有限公司 The audio synthetic method and system and remote server of a kind of multi-user
CN113504851A (en) * 2018-11-14 2021-10-15 华为技术有限公司 Method for playing multimedia data and electronic equipment


Also Published As

Publication number Publication date
JP2022537012A (en) 2022-08-23
WO2020253754A1 (en) 2020-12-24
CN112118062A (en) 2020-12-22
JP7416519B2 (en) 2024-01-17

Similar Documents

Publication Publication Date Title
CN111345010B (en) Multimedia content synchronization method, electronic equipment and storage medium
CN110381197B (en) Method, device and system for processing audio data in many-to-one screen projection
CN111316598B (en) Multi-screen interaction method and equipment
CN112118062B (en) Multi-terminal multimedia data communication method and system
CN113497909B (en) Equipment interaction method and electronic equipment
CN111666119A (en) UI component display method and electronic equipment
CN111628916B (en) Method for cooperation of intelligent sound box and electronic equipment
CN110198362B (en) Method and system for adding intelligent household equipment into contact
CN113542839A (en) Screen projection method of electronic equipment and electronic equipment
CN113923230B (en) Data synchronization method, electronic device, and computer-readable storage medium
CN114173000B (en) Method, electronic equipment and system for replying message and storage medium
CN113961157B (en) Display interaction system, display method and equipment
CN113496426A (en) Service recommendation method, electronic device and system
CN114040242A (en) Screen projection method and electronic equipment
CN112543447A (en) Device discovery method based on address list, audio and video communication method and electronic device
CN114827581A (en) Synchronization delay measuring method, content synchronization method, terminal device, and storage medium
CN115426521A (en) Method, electronic device, medium, and program product for screen capture
CN114185503A (en) Multi-screen interaction system, method, device and medium
CN114356195A (en) File transmission method and related equipment
CN114173184A (en) Screen projection method and electronic equipment
CN114064160A (en) Application icon layout method and related device
CN115242994A (en) Video call system, method and device
CN115883893A (en) Cross-device flow control method and device for large-screen service
CN115529487A (en) Video sharing method, electronic device and storage medium
WO2022252980A1 (en) Method for screen sharing, related electronic device, and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210426

Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Applicant after: Honor Device Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

GR01 Patent grant