CN111092898B - Message transmission method and related equipment

Message transmission method and related equipment

Info

Publication number
CN111092898B
CN111092898B (application CN201911345024.XA)
Authority
CN
China
Prior art keywords
message
packet
field
cloud server
media data
Prior art date
Legal status
Active
Application number
CN201911345024.XA
Other languages
Chinese (zh)
Other versions
CN111092898A (en)
Inventor
Hu Jun (胡军)
Current Assignee
Huawei Device Co Ltd
Original Assignee
Huawei Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Device Co Ltd
Priority to CN201911345024.XA
Publication of CN111092898A
Application granted
Publication of CN111092898B
Legal status: Active (current)
Anticipated expiration (date not listed)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/025: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP] for remote control or remote monitoring of applications
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0876: Network architectures or network communication protocols for network security for authentication of entities based on the identity of the terminal or configuration, e.g. MAC address, hardware or software configuration or device fingerprint
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/06: Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/14: Session management
    • H04L67/146: Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22: Parsing or analysis of headers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Power Engineering (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application relates to the field of communications technologies and discloses a message transmission method, an electronic device, and a communication system. In the disclosed method, after receiving a sending instruction, a first device determines first media data, generates a first packet from the first media data, and sends the first packet to a second device. The first media data includes first video data and first audio data. The first packet may include only a function field, a version number field, a sequence number field, a timestamp field, a field of the first audio data, and a field of the first video data. Because the first device places only necessary data in the packet, the packet format is simplified, transmission resources are saved, and transmission performance is improved.

Description

Message transmission method and related equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a message transmission method and a related device.
Background
After the first device and the second device establish a communication connection, the first device may send media data packets to the second device, and the second device plays the corresponding media content in real time. The media data packets transmitted between the first device and the second device are carried over the Real-time Transport Protocol (RTP).
RTP is designed for media data transmission in many implementation scenarios, for example projection, live broadcast, on-demand, and video calls, so its packet header contains fields for all of these scenarios. If, for example, the first device sends media data packets to the second device for projection, the header then carries many fields irrelevant to projection. The packet content is therefore redundant: it occupies more transmission resources than necessary and degrades real-time transmission performance.
Disclosure of Invention
The present application provides a message transmission method and related devices, which address the content redundancy of existing packets.
In a first aspect, the present application provides a packet transmission method, including: the first device receives a sending instruction; the first device determines first media data, the first media data comprising first video data and first audio data; the first device generates a first packet according to the first media data, where the first packet comprises a function field, a version number field, a sequence number field, a timestamp field, a field of the first audio data, and a field of the first video data; the function field indicates the purpose of the data carried by the first packet, the version number field indicates the version number of the transmission protocol supported by the first packet, the sequence number field indicates the position of the first media data within all the media data in playing order, and the timestamp field indicates the sending time of the first packet; and the first device sends the first packet to the second device.
In this embodiment, the first device is, for example, a sending device, and the second device is, for example, a receiving device. The first device may determine the first audio data and the first video data after receiving an instruction to send a packet to the second device; the audio content corresponding to the first audio data matches the video content corresponding to the first video data. The first device then generates a first packet. In some embodiments, the first packet may include only a function field, a version number field, a sequence number field, a timestamp field, a field of the first audio data, and a field of the first video data, with no other fields. With this implementation, the first device can customize the format of the first packet so that it contains only necessary fields. The packet is thereby simplified and carries less data, which reduces the transmission resources it occupies and greatly shortens transmission delay. In addition, because the first device packages the first audio data and the first video data together for transmission, audio and video cannot fall out of sync at the second device.
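For concreteness, the following Python sketch packs a first packet containing only the fields listed above. The field widths chosen here (one byte each for the function and version fields, four bytes for the sequence number, eight bytes for a millisecond timestamp, and length-prefixed audio and video fields) are assumptions made for illustration; the actual frame layout is the one shown in fig. 5A.

    import struct
    import time

    def build_first_packet(function: int, version: int, seq: int,
                           audio: bytes, video: bytes) -> bytes:
        # Header: function (1 byte), version (1 byte), sequence number
        # (4 bytes), sending timestamp in milliseconds (8 bytes); all
        # widths are assumed, not the patented layout.
        timestamp_ms = int(time.time() * 1000)
        header = struct.pack("!BBIQ", function, version, seq, timestamp_ms)
        # Length-prefix the audio and video fields so the receiver can
        # split the matched audio/video pair packaged together.
        body = (struct.pack("!I", len(audio)) + audio
                + struct.pack("!I", len(video)) + video)
        return header + body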
In a possible implementation manner, the first packet further includes a binding sequence number field and a device identifier field of the first device, where the binding sequence number corresponds to the identifier of the first device and the identifier of the second device. In some embodiments, the first device cannot send the first packet to the second device directly; the first packet must be forwarded to the second device through the cloud server, and the cloud server needs to know that the target device is the second device. For this case, the first packet may further include the binding sequence number field and the device identifier field of the first device, so that the cloud server can determine the second device from the binding sequence number and the first device's identifier. Even so, the first packet still contains only necessary fields, which reduces the transmission resources it occupies and greatly shortens transmission delay.
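When the packet must be relayed, the two extra fields can simply be placed in front of the packet. The continuation below follows the same assumed widths and encodings as the sketch above and is not the patented frame layout:

    import struct

    def add_relay_fields(packet: bytes, binding_seq: int, sender_id: str) -> bytes:
        # Prepend the binding sequence number (4 bytes, assumed) and a
        # length-prefixed device identifier of the sending device, so
        # the cloud server can look up the target device.
        dev = sender_id.encode("utf-8")
        return struct.pack("!IB", binding_seq, len(dev)) + dev + packet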
In a possible implementation manner, after the first device sends the first packet to the second device, the method further includes: the first device receives a first response packet from the second device, where the first response packet is a response sent by the second device after receiving the first packet and comprises a function field, a version number field, a sequence number field, and a timestamp field, the timestamp field indicating the time when the second device received the first packet; the first device calculates the transmission delay of the first packet from the timestamp field in the first response packet and the timestamp field in the first packet; the first device generates second media data according to the transmission delay of the first packet, where, if the transmission delay is greater than or equal to a first preset threshold and smaller than a second preset threshold, the second media data comprises second audio data and second video data, the resolution of the video corresponding to the second video data being lower than that of the video corresponding to the first video data in the first packet; or, if the transmission delay is greater than or equal to the second preset threshold, the second media data comprises second audio data and no video data; the first device generates a second packet according to the second media data; and the first device sends the second packet to the second device.
In some embodiments, after the first device sends the first packet to the second device, the second device feeds back to the first device a response confirming receipt of the first packet. For example, in a projection scenario, the first device may receive the first response packet. Its function field indicates the function of the data it carries, for example a response to the first packet. Its version number field indicates the version number of the transmission protocol it supports; if this is the same protocol and version as the first packet, the field matches the version number field in the first packet. Its sequence number field indicates the sequence number of the packet received by the second device and, in this embodiment, matches the sequence number field in the first packet. Its timestamp field indicates the time at which the second device received the first packet. The first response packet therefore also contains only necessary fields, which reduces the transmission resources it occupies and shortens transmission delay. Moreover, the transmission delay of a packet indicates the transmission rate of the current network. After receiving the first response packet, the first device may therefore calculate the transmission delay of the first packet, determine the second media data according to that delay, and generate the second packet from the second media data. The second packet may be the packet that the first device sends to the second device after the first packet. With this implementation, the first device can promptly determine the current network's transmission rate and adaptively adjust the amount of media data in the packets to be transmitted, avoiding excessive transmission delay and keeping real-time playback on the second device optimal.
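The adaptive behavior can be summarized in a few lines. Below, T1_MS and T2_MS stand for the first and second preset thresholds; their values are placeholders, since the application does not state concrete numbers:

    T1_MS = 100   # first preset threshold (placeholder value)
    T2_MS = 300   # second preset threshold (placeholder value)

    def choose_second_media(send_ts_ms: int, recv_ts_ms: int, audio: bytes,
                            video_high: bytes, video_low: bytes):
        # Transmission delay: timestamp carried in the first response
        # packet minus timestamp carried in the first packet.
        delay = recv_ts_ms - send_ts_ms
        if delay < T1_MS:
            return audio, video_high   # network is fast enough: keep resolution
        if delay < T2_MS:
            return audio, video_low    # degrade to lower-resolution video
        return audio, None             # heavy delay: send audio only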
In a possible implementation manner, the first response packet further includes the binding sequence number field and the device identification field of the second device. Correspondingly, in a scenario where the cloud server forwards the first response packet to the first device, the cloud server needs to know that the target device is the first device. Based on this, in this embodiment, the first response packet may further include a binding sequence number field and a device identification field of the second device. Therefore, in this embodiment, the first response packet still only includes necessary fields, so that transmission resources occupied by the packet can be reduced, and transmission delay can be greatly shortened.
In a possible implementation manner, before the first device receives the sending instruction, the method further includes: the first device sends a login request to a cloud server; after receiving a login response from the cloud server, the first device sends a first binding request to the cloud server; after receiving a first binding response from the cloud server, the first device sends the user name corresponding to the second device to the cloud server; after receiving response information from the cloud server, the first device receives a verification code input by a user and sends an acquisition request to the cloud server, where the acquisition request instructs the cloud server to return the device identifiers corresponding to the user name (which include the device identifier of the second device) and carries the verification code, the verification code having been generated by the cloud server; after receiving the device identifiers corresponding to the user name from the cloud server, the first device sends a second binding request to the cloud server, where this second binding request includes the device identifier of the second device; and the first device receives the binding sequence number from the cloud server.
In some embodiments, the first device and the second device transmit packets over a P2P transport mechanism, and the two devices initially have no correspondence with each other. The first device therefore needs to establish a binding relationship with the second device before sending it the first packet. Establishing a "binding relationship" means enabling the two devices to invoke each other. In the embodiments of this application, the first device and the second device establish the binding relationship through device identifiers, which makes the procedure simpler and saves storage resources.
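The binding flow of this implementation can be walked through as client-side code. Everything below (the HTTP transport, endpoint names, and cloud-server address) is hypothetical; only the ordering of the exchanges follows the description above:

    import requests  # assumed HTTP transport between device and cloud server

    BASE = "https://cloud.example.com"  # placeholder cloud-server address

    def bind_to_peer(s: requests.Session, user: str, pwd: str, peer: str) -> int:
        s.post(f"{BASE}/login", json={"user": user, "pwd": pwd})   # login request
        s.post(f"{BASE}/bind")                                     # first binding request
        s.post(f"{BASE}/peer", json={"user": peer})                # peer's user name
        code = input("verification code: ")      # code generated by the cloud server
        ids = s.post(f"{BASE}/devices",
                     json={"user": peer, "code": code}).json()     # acquisition request
        r = s.post(f"{BASE}/bind2", json={"device_id": ids[0]})    # second binding request
        return r.json()["binding_seq"]           # binding sequence number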
In a possible implementation manner, after the first device receives the binding sequence number from the cloud server, the method further includes: the first device establishes an end-to-end (P2P) connection with the second device. In some embodiments, when the distance between the two devices is within a certain range, they may establish a P2P connection, which facilitates transmitting packets to each other directly.
In a possible implementation manner, the sending, by the first device, of the first packet to the second device includes: the first device sends the first packet to the second device through the P2P channel; or the first device forwards the first packet to the second device via the cloud server. In some embodiments, the first device is directly connected to the second device through a communication channel and may send the first packet over that channel; for example, in a projection or video-call scenario, the first device establishes a P2P connection with the second device and sends the first packet through the P2P channel. In other embodiments, the two devices are not directly connected, and the first device forwards the first packet to the second device through the cloud server. The implementation is therefore flexible and widely applicable.
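The choice between the two delivery paths reduces to a simple dispatch; the channel objects and method names below are illustrative only:

    def send_first_packet(packet: bytes, p2p_channel=None, cloud=None, binding_seq=0):
        if p2p_channel is not None:
            p2p_channel.send(packet)           # direct delivery over the P2P channel
        else:
            # No direct connection: the cloud server resolves the target
            # device from the binding sequence number and forwards the packet.
            cloud.forward(packet, binding_seq)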
In a second aspect, the present application provides a packet transmission method applied to a communication system, the communication system including a first device and a second device, the method including: the first device receives a sending instruction, determines first media data, generates a first packet according to the first media data, and sends the first packet to the second device, where the first media data comprises first video data and first audio data, the first packet comprises a function field, a version number field, a sequence number field, a timestamp field, a field of the first audio data, and a field of the first video data, the function field indicates the purpose of the data carried by the first packet, the version number field indicates the version number of the transmission protocol supported by the first packet, the sequence number field indicates the position of the first media data within all the media data in playing order, and the timestamp field indicates the sending time of the first packet; the second device receives the first packet, generates a first response packet in response to it, and sends the first response packet to the first device, where the first response packet comprises a function field, a version number field, a sequence number field, and a timestamp field, the timestamp field indicating the time when the second device received the first packet.
In the embodiments of this application, the first device and the second device can establish a wireless connection to form a communication system, over which the first device may send media data packets to the second device and the second device plays the corresponding media. Because the first packet sent by the first device contains only necessary fields, the packet is simplified and carries less data, which reduces the transmission resources it occupies and greatly shortens transmission delay. In addition, because the first device packages the first audio data and the first video data together for transmission, audio and video cannot fall out of sync at the second device.
In a possible implementation manner, after the second device receives the first packet, the method further includes: the second device generates a first response packet in response to the first packet and sends it to the first device, where the first response packet comprises a function field, a version number field, a sequence number field, and a timestamp field, the timestamp field indicating the time when the second device received the first packet; the first device calculates the transmission delay of the first packet from the timestamp field in the first response packet and the timestamp field in the first packet, generates second media data according to that delay, generates a second packet from the second media data, and sends the second packet to the second device; if the transmission delay of the first packet is greater than or equal to a first preset threshold and smaller than a second preset threshold, the second media data comprises second audio data and second video data, the resolution of the video corresponding to the second video data being lower than that of the video corresponding to the first video data in the first packet; or, if the transmission delay is greater than or equal to the second preset threshold, the second media data comprises second audio data and no video data.
In this embodiment, the second device may feed back to the first device a response confirming receipt of the first packet. The first response packet likewise contains only necessary fields, reducing the transmission resources it occupies and shortening transmission delay. Moreover, the first device may calculate the transmission delay of the first packet from the first response packet and thereby promptly determine the current network's transmission rate. The first device then determines the second media data according to that delay, adaptively adjusting the amount of media data in the packets to be transmitted, and generates the second packet from the second media data, avoiding excessive transmission delay and keeping real-time playback on the second device optimal.
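On the receiving side, generating the first response packet is symmetric to the sender sketch given earlier, under the same assumed field widths:

    import struct
    import time

    RESPONSE = 0x02   # assumed function-code value meaning "response"

    def on_first_packet(data: bytes) -> bytes:
        # Parse the header under the same assumed layout as the sender sketch.
        function, version, seq, send_ts = struct.unpack_from("!BBIQ", data, 0)
        recv_ts = int(time.time() * 1000)      # time the first packet arrived
        # The response keeps only the necessary fields: function, version
        # number, the same sequence number, and the receive timestamp.
        return struct.pack("!BBIQ", RESPONSE, version, seq, recv_ts)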
In a possible implementation manner, the communication system further includes a cloud server, and before the first device receives the sending instruction, the method further includes: the first device sends a login request to the cloud server; the cloud server sends a login response to the first device; the first device sends the user name corresponding to the second device to the cloud server; the cloud server sends a verification code to the mobile phone number or mailbox corresponding to the user name; the first device receives the verification code input by a user and sends an acquisition request to the cloud server, where the acquisition request instructs the cloud server to return the device identifiers corresponding to the user name, which include the device identifier of the second device; the cloud server sends the device identifiers corresponding to the user name to the first device; the first device sends a second binding request to the cloud server, where this second binding request contains the device identifier of the second device; and the cloud server, in response to the second binding request, sends a binding sequence number to the first device and the second device.
In this embodiment, the cloud server maintains the correspondence between each logged-in user name and its device identifiers, so the first device (or the second device) may obtain the device identifier it wishes to bind to through a user name. The cloud server then allocates a binding sequence number to the devices establishing the binding relationship and uses it to maintain the binding between the first device and the second device. Compared with the existing approach, in which third-party calling numbers must be allocated separately to the first device and the second device, this procedure is simpler and saves storage resources.
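The server-side bookkeeping implied here amounts to two mappings, sketched below with made-up data; the application does not specify how the cloud server stores them:

    user_devices = {"alice": ["dev-tv-01"]}    # user name -> device identifiers
    bindings: dict[int, tuple[str, str]] = {}  # binding sequence number -> device pair
    _next_seq = 1

    def allocate_binding(dev_a: str, dev_b: str) -> int:
        # Record the pair and hand the same binding sequence number to both
        # devices, so later packets can be routed by this number alone.
        global _next_seq
        seq = _next_seq
        _next_seq += 1
        bindings[seq] = (dev_a, dev_b)
        return seq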
In a third aspect, the present application provides an electronic device having the functions of the first device in the method described above. The functions may be implemented in hardware, or in hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the functions. In one possible design, the electronic device includes a processor and a transceiver: the processor is configured to perform the first device's functions in the above method, and the transceiver is used to receive and send packets. The electronic device may also include a memory, coupled to the processor, that stores the program instructions and data necessary for the electronic device.
In a fourth aspect, the present application provides a communication system comprising a first device and a second device. The first device is as described in the third aspect. The second device has the functions of the second device in the first aspect, the second aspect, and their various possible implementations. In one possible design, the communication system further includes a cloud server, which has the functions of the cloud server in the first aspect, the second aspect, and their various possible implementations. Details are not repeated here.
In a fifth aspect, the present application provides a computer storage medium having instructions stored therein, where the instructions, when executed on a computer, cause the computer to perform some or all of the steps of the message transmission method in the first aspect, the second aspect, various possible implementations of the first aspect, and various possible implementations of the second aspect.
In a sixth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform some or all of the steps of the message transmission method in the first aspect, the second aspect, and their various possible implementations.
To address the problems of existing message transmission methods, in the embodiments of this application, a first device acting as a sending device determines first media data after receiving a sending instruction, generates a first packet according to the first media data, and sends the first packet to a second device. The first media data includes first video data and first audio data. The first packet may include only a function field, a version number field, a sequence number field, a timestamp field, a field of the first audio data, and a field of the first video data. The first device may thus customize the format of the first packet so that it contains only necessary fields, which simplifies the packet, reduces the amount of data it carries, reduces the transmission resources it occupies, and greatly shortens transmission delay. In addition, because the first device packages the first audio data and the first video data together for transmission, audio and video cannot fall out of sync at the second device, improving the user experience.
Drawings
fig. 1 is a schematic diagram of an exemplary implementation scenario provided in the present application;
fig. 2A is an exemplary architecture diagram of a first device 100 provided in the present application;
fig. 2B is an exemplary architecture diagram of a second device 200 provided in the present application;
fig. 3A-1 is a schematic diagram of a first exemplary user interface for entering a projection setup scenario provided in the present application;
fig. 3A-2 is a schematic diagram of a second exemplary user interface for entering a projection setup scenario provided in the present application;
fig. 3A-3 is a schematic diagram of a third exemplary user interface for entering a projection setup scenario provided in the present application;
fig. 3A-4 is a schematic diagram of a fourth exemplary user interface for entering a projection setup scenario provided in the present application;
fig. 3B-1 is a schematic diagram of a first exemplary user interface in a device-binding scenario provided in the present application;
fig. 3B-2 is a schematic diagram of a second exemplary user interface in a device-binding scenario provided in the present application;
fig. 3B-3 is a schematic diagram of a third exemplary user interface in a device-binding scenario provided in the present application;
fig. 3B-4 is a schematic diagram of a fourth exemplary user interface in a device-binding scenario provided in the present application;
fig. 3C-1 is a schematic diagram of a first exemplary user interface in a projection-triggering scenario provided in the present application;
fig. 3C-2 is a schematic diagram of a second exemplary user interface in a projection-triggering scenario provided in the present application;
fig. 3C-3 is a schematic diagram of a third exemplary user interface in a projection-triggering scenario provided in the present application;
fig. 3D is a schematic diagram of an exemplary user interface in a projection-ending scenario provided in the present application;
fig. 4 is a flowchart of a message transmission method 100 provided in the present application;
fig. 5A is a schematic diagram of the data frame structure of a first packet provided in the present application;
fig. 5B is a schematic diagram of the data frame structure of a first response packet provided in the present application;
fig. 6 is a signaling interaction diagram of a method 1001 for establishing a binding relationship provided in the present application;
fig. 7 is a flowchart of a message transmission method 200 provided in the present application;
fig. 8A is an exemplary structural diagram of an electronic device 80 provided in the present application;
fig. 8B is an exemplary structural diagram of an electronic device 81 provided in the present application;
fig. 9A is an exemplary structural diagram of a communication system 90 provided in the present application;
fig. 9B is an exemplary structural diagram of a communication system 91 provided in the present application.
Detailed Description
The technical solution of the present application will be clearly described below with reference to the accompanying drawings in the present application.
The terminology used in the following embodiments of this application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that although the terms first, second, etc. may be used in the following embodiments to describe a class of objects, the objects should not be limited by these terms; the terms are only used to distinguish particular objects of that class from one another. For example, the following embodiments may use the terms first, second, etc. to describe packets, but the packets should not be limited by these terms, which only distinguish different media data packets. The following embodiments use the terms first, second, etc. for other classes of objects in the same way, and this is not repeated below.
The following describes an implementation scenario of the present application.
As shown in fig. 1, the present application relates to a first device and a second device; after the first device establishes a connection with the second device, the first device may send media data packets to the second device. This specification refers to a media data packet simply as a packet (or message). In some embodiments, the second device plays the media content corresponding to a received packet in real time. In other embodiments, the second device plays the media content within a first time period after the first device sends the packet, the first time period being the maximum allowed delay. The media data in a packet includes, for example, video data and audio data; the media content includes, for example, the video corresponding to the video data and the audio corresponding to the audio data in the packet.
In some embodiments, the first device to which the present application relates may be a media server. In this application, a media server is a device that acquires and maintains audio and video resources and that controls and manages the audio and video resources sent to media clients. In other embodiments, the first device may be any electronic device with a data transmission function, for example a device running
[operating-system logos shown as images in the original]
or another operating system, such as a smart phone, a tablet computer, a camera device (camera), a monitoring device, an in-vehicle device, or another electronic device.
The second device related to the present application may be a display device with audio playing, video playing, and image display functions, for example a display, a smart television, a smart phone, a tablet computer, an augmented reality (AR) device, an in-vehicle device, or a similar electronic device. In some embodiments, the second device may act as a media client. In this case, the media client requests audio and video resources from the media server and then plays the audio and video corresponding to the resources it acquires.
The present application relates to a user interface (UI) for human-computer interaction; at least one of the first device and the second device includes a UI. The UI is the medium for interaction and information exchange between an application or operating system and the user: it converts between the internal form of information and a form acceptable to the user. The user interface of an application is source code written in a specific computer language such as Java or the extensible markup language (XML); the interface source code is parsed and rendered on the terminal device and finally presented as content the user can recognize, such as pictures, text, and buttons. Controls, also called widgets, are the basic elements of a user interface; typical controls include toolbars (toolbar), menu bars (menu bar), text boxes (text box), buttons (button), scroll bars (scrollbar), pictures, and text. The properties and contents of the controls in an interface are defined by tags or nodes; for example, XML defines the controls contained in an interface by nodes such as <Textview>, <ImgView>, and <VideoView>. A node corresponds to a control or attribute in the interface, and after parsing and rendering it is presented as user-visible content.
A commonly used presentation form of the user interface is a Graphical User Interface (GUI), which refers to a user interface related to operations and displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
Application scenarios include projection, video calls, live broadcast, on-demand, and the like. Projection and video calls are implemented on a peer-to-peer (P2P) transmission mechanism; with reference to fig. 1, in these scenarios the first device may act as the sending-end electronic device. Live broadcast and on-demand are implemented on a server-to-client transmission mechanism; with reference to fig. 1, in these scenarios the first device may act as the server.
Illustratively, projection means that the first device transmits the audio and video it is playing to the second device for playback; in the projection scenario, only the second device plays the audio and video. A video call means that the first device and the second device each capture audio and video in real time and synchronize them to the peer device, so that both devices synchronously play the video captured by both sides and the audio of the peer. Live broadcast means that the first device, acting as a server, sends the audio and video it captures to clients for real-time playback; in this scenario the second device may act as a client, and when there are multiple clients, all of them play the audio and video obtained by the server in real time. On-demand means that the first device, acting as a server, may pre-store multiple audio and video resources and, on receiving a client's request, send the corresponding resource to that client; in this scenario the second device may act as a client, and when there are multiple clients, different clients may play different audio and video.
In the implementation scenario illustrated in fig. 1, the packets transmitted between the first device and the second device are carried over the Real-time Transport Protocol (RTP) and the Real-time Transport Control Protocol (RTCP). RTP and RTCP apply to many implementation scenarios, such as live broadcast, on-demand, video calls, and projection; accordingly, the headers they define contain fields suitable for all of these scenarios, such as a synchronization source (SSRC) identifier field, a contributing source (CSRC) identifier field, and a payload type field. If the scenario illustrated in fig. 1 is any single one of these, the packets carry fields irrelevant to that scenario, making the packet content redundant: it occupies more transmission resources than necessary, degrades real-time transmission performance, and reduces the user experience.
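For contrast with the simplified format, the fixed RTP header defined in RFC 3550 is 12 bytes plus optional contributing-source entries. The parser below follows that public specification and marks the scenario-generic fields that a single-scenario transfer such as projection does not need:

    import struct

    def parse_rtp_fixed_header(data: bytes) -> dict:
        b0, b1, seq, ts, ssrc = struct.unpack_from("!BBHII", data, 0)
        return {
            "version":      b0 >> 6,
            "padding":      (b0 >> 5) & 1,   # generic field
            "extension":    (b0 >> 4) & 1,   # generic field
            "csrc_count":   b0 & 0x0F,       # contributing sources: generic
            "marker":       b1 >> 7,
            "payload_type": b1 & 0x7F,       # generic payload type field
            "sequence":     seq,
            "timestamp":    ts,
            "ssrc":         ssrc,            # synchronization source: generic
        }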
The present application provides a message transmission method and related devices. The packets involved contain only necessary fields, which simplifies the packet format, saves transmission resources, and improves transmission performance.
First, an exemplary first device 100 provided in the following embodiments of the present application will be described.
Fig. 2A shows a schematic structural diagram of the first device 100.
The first device 100 may include a processor 110, a memory 120, a Universal Serial Bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, a communication module 150, an audio module 160, a speaker 160A, a microphone 160B, an earphone interface 160C, a camera 170, and the like.
It is to be understood that the illustrated structure of the present application does not constitute a specific limitation of the first apparatus 100. In other embodiments of the present application, the first device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphic Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, a neural-Network Processing Unit (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors. In some embodiments, the first device 100 may also include one or more processors 110.
The controller may be the neural center and command center of the first device 100. The controller can generate an operation control signal according to the instruction operation code and a timing signal, to complete control of instruction fetching and the like.
In some embodiments, processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and the like.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 160 via an I2S bus to enable communication between the processor 110 and the audio module 160. In some embodiments, the audio module 160 may pass audio signals to the communication module 150 through an I2S interface.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, audio module 160 and communication module 150 may be coupled by a PCM bus interface. In some embodiments, audio module 160 may also pass audio signals to communication module 150 through a PCM interface. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal asynchronous serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus; it converts the data to be transmitted between serial and parallel form. In some embodiments, a UART interface is used to connect the processor 110 with the communication module 150. For example, in some embodiments the audio module 160 may pass an audio signal to the communication module 150 through the UART interface.
A MIPI interface may be used to connect processor 110 with peripheral devices such as camera 170. The MIPI interface includes a Camera Serial Interface (CSI) and the like. In some embodiments, processor 110 and camera 170 communicate over a CSI interface, enabling the capture functionality of first device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 170, the communication module 150, the audio module 160, and the like. The GPIO interface may also be configured as an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the first device 100, and may also be used to transmit data between the first device 100 and a peripheral device. It can also be used to connect a headset and play audio through the headset. The interface may further be used to connect other electronic devices, such as AR devices.
It should be understood that the interfacing relationship between the modules illustrated in the present application is only for illustrative purposes and does not constitute a structural limitation of the first device 100. In other embodiments, the first device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the first device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the memory 120, the camera 170, and the communication module 150, among other things. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The communication function of the first device 100 may be implemented by the communication module 150, a modem processor, a baseband processor, and the like.
The modem processor may include a modulator and a demodulator. The modulator modulates a low-frequency baseband signal to be transmitted into a medium- or high-frequency signal; the demodulator demodulates a received electromagnetic-wave signal into a low-frequency baseband signal and passes it to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs a sound signal through audio devices (not limited to the speaker 160A and the microphone 160B). In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the communication module 150 or other functional modules.
The communication module 150 may provide solutions for communication applied on the first device 100, including wireless local area networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), optical fiber, and the like. The communication module 150 may be one or more devices integrating at least one communication processing module. The communication module 150 receives electromagnetic waves via an antenna, performs frequency modulation and filtering on the electromagnetic-wave signals, and passes the processed signals to the processor 110. The communication module 150 may also receive a signal to be transmitted from the processor 110, frequency-modulate and amplify it, and convert it into electromagnetic waves via the antenna for radiation.
In some embodiments, the communication solution provided by the communication module 150 enables the first device 100 to communicate with the second device 200. In other embodiments, it may also enable the first device 100 to communicate with a device in the network (e.g., a cloud server), so that the first device 100 can forward packets to the second device 200 via the cloud server.
The first device 100 may implement a photographing function through the ISP, the camera 170, the video codec, the GPU, the application processor, and the like.
The ISP is used to process the data fed back by the camera 170. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 170.
The camera 170 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the first device 100 may include 1 or N cameras 170, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals.
Video codecs are used to compress or decompress digital video. The first device 100 may support one or more video codecs, so that it can play or record video in a variety of encoding formats, such as Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor that processes input information quickly by drawing on the structure of biological neural networks, for example the transfer mode between neurons in a human brain, and can also continuously learn by itself. The NPU can implement intelligent applications of the first device 100, such as image recognition, face recognition, speech recognition, and text understanding.
Memory 120 may be used to store one or more computer programs, including instructions. The processor 110 may execute the instructions stored in the memory 120 so that the first device 100 performs the message transmission method provided in some embodiments of the present application, as well as various functional applications and data processing. The memory 120 may include a program storage area and a data storage area: the program storage area may store an operating system and the like, and the data storage area may store data to be transmitted by the first device 100 (such as video data, audio data, and timestamp data). Further, the memory 120 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The first device 100 can implement audio functions, such as music playing and recording, through the audio module 160, the speaker 160A, the microphone 160B, the earphone interface 160C, and the application processor.
The audio module 160 is used to convert digital audio information into analog audio signal output and also used to convert analog audio input into digital audio signal. The audio module 160 may also be used to encode and decode audio signals. In this embodiment, the audio module 160 may be configured to convert the analog audio input collected by the first device 100 into a digital audio signal, and then encode the digital audio signal. In some embodiments, the audio module 160 may be disposed in the processor 110, or some functional modules of the audio module 160 may be disposed in the processor 110.
The speaker 160A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The first device 100 can play music through the speaker 160A.
The microphone 160B, also referred to as a "mic", is used to collect sound signals and convert them into electrical signals. The first device 100 may be provided with at least one microphone 160B. In other embodiments, the first device 100 may be provided with two microphones 160B to implement a noise reduction function in addition to capturing audio signals. In still other embodiments, the first device 100 may be provided with three, four, or more microphones 160B to implement audio signal acquisition, noise reduction, audio source identification, directional recording, and the like.
The earphone interface 160C is used to connect a wired earphone. The earphone interface 160C may be the USB interface 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The first device 100 exemplarily shown in fig. 2A may capture video data through the camera 170 and audio data through the microphone 160B. The first device 100 may also generate a first message through the processor 110, then transmit the first message to the second device 200 through the communication module 150, and so on.
Fig. 2B shows an exemplary architectural diagram of the second device 200.
The second device 200 may include a processor 210, a memory 220, a communication module 230, an audio module 240, a speaker 240A, a headphone interface 240B, a display screen 250, a power supply 260, and the like.
It is to be understood that the illustrated structure of the present application does not constitute a specific limitation of the second device 200. In other embodiments of the present application, the second device 200 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
In this embodiment, the hardware and software of the processor 210, and their functions, are similar to those of the processor 110 and are not described in detail here. The processor 210 may be configured to invoke the display screen 250 to display video content and to invoke the speaker 240A or the headphone interface 240B to play audio content, which enables the second device 200 to play media content.
In some embodiments, the processor 210 may be further configured to generate a response message, where the response message carries the second device 200's feedback on a received message, for example, a timestamp of when the second device 200 received the message.
The communication solution provided by the communication module 230 is similar to that of the communication module 150 and is not described in detail here. The communication solution provided by the communication module 230 may enable the second device 200 to communicate with the first device 100. In other embodiments, the communication solution provided by the communication module 230 may enable the second device 200 to communicate with a device in the network (e.g., a cloud server), so that the second device 200 can send messages to the first device 100 via the cloud server.
The audio module 240 functions similarly to the audio module 160 and is not described in detail here. In this embodiment, the audio module 240 is configured to decode the audio data packet in a message received by the second device 200 to obtain an audio electrical signal, and then send the audio electrical signal to the speaker 240A or the headphone interface 240B.
The speaker 240A functions similarly to the speaker 160A and is not described in detail here. In this embodiment, the speaker 240A may be used to convert the audio electrical signal decoded by the audio module 240 into an acoustic signal. The function of the headphone interface 240B is similar to that of the earphone interface 160C and is not described in detail here.
The memory 220 may be used to store one or more computer programs, including instructions. By executing the instructions stored in the memory 220, the processor 210 may cause the second device 200 to perform the message transmission method provided in some embodiments of the present application. For example, the memory 220 may be used to buffer messages received by the second device 200 from the first device 100.
The display screen 250 is used to display controls, information, images, videos, and the like. The display screen 250 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like.
The power supply 260 may be used to power the processor 210, the memory 220, the display screen 250, and the like.
The second device 200 exemplarily shown in fig. 2B can receive the first message and transmit the second message through the communication module 230. The second device 200 may generate the second message via the processor 210. The second device 200 may display video or images via the display screen 250, may play audio via the speaker 240A or the headphone interface 240B, and so on.
It should be noted that, in some embodiments, before the first device transmits a message to the second device, the first device and the second device need to establish a binding relationship, and the second device needs to be determined as the target receiving device. For example, in the projection and video call implementation scenarios, a binding relationship needs to be established between the first device and the second device, and the second device needs to be determined as the target receiving device. Based on this, the embodiment also involves a cloud server, which is used to establish and maintain the binding relationship between devices. Of course, in the live broadcast and on-demand implementation scenarios, the first device and the second device may not be bound.
The establishment of the binding relationship refers to enabling the two devices to call each other. For example, in the case of a projection implementation scenario, a device that has a "binding relationship" with a first device refers to a device that is allowed to receive a projection of the first device. A device that has a "binding relationship" with a second device refers to a device that is allowed to project on the second device.
The cloud server referred to in the present application, also called a cloud host, is a supercomputer obtained by virtualizing a large number of server entities. The cloud server can store a large amount of data and programs, as well as the resources that need to be called to execute the corresponding programs. In some embodiments, the data stored by the cloud server may include an Internet Protocol (IP) address of the first device 100, a unique identifier of the first device 100, an IP address of the second device 200, a unique identifier of the second device 200, a binding serial number of the first device 100 and the second device 200, and the like. In some embodiments, when a program stored in the cloud server runs, the cloud server may perform the operation of binding the first device 100 and the second device 200 and the operation of assigning a binding serial number. In other embodiments, when a program stored in the cloud server runs, the cloud server may further receive and send information and messages, and the like. The programs carried by the cloud server may be source code written in a specific computer programming language, such as hypertext markup language (HTML), hypertext preprocessor (PHP), or JavaScript (JS). The cloud server may store the data and the programs in a distributed manner across the plurality of server entities, and may call the resources of any one of the server entities to execute a program. The cloud server may allocate and call the resources of the server entities through an application programming interface (API). The binding serial number corresponds to the identifier of the first device 100 and the identifier of the second device 200, indicating that the first device 100 and the second device 200 have a binding relationship. The unique identifier of a device may include, for example, a media access control (MAC) address, an international mobile equipment identity (IMEI) number, a device identity, an international mobile subscriber identity (IMSI), a serial number, etc.
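To make the stored data items concrete, the following is a minimal sketch of one possible binding record; every field name here is an assumption, since the embodiment states what the cloud server stores but not how it is organized.

```python
from dataclasses import dataclass

@dataclass
class BindingRecord:
    binding_serial: str   # random string assigned when the devices are bound
    device_a_id: str      # unique identifier of the first device (MAC, IMEI, IMSI, ...)
    device_a_ip: str      # IP address of the first device
    device_b_id: str      # unique identifier of the second device
    device_b_ip: str      # IP address of the second device
```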
It can be understood that the cloud servers corresponding to different implementation scenarios are different. For example, in the projection implementation scenario, the cloud server is a projection server; in the video call implementation scenario, the cloud server is the server corresponding to the APP performing the video call, for example, a WeChat server.
The following describes an exemplary human-computer interaction process related to the present application in conjunction with an interface diagram of a UI.
In some implementation scenarios, when the first device serves as a media server and the second device serves as a media client, the user does not need to operate the first device and the second device to establish a binding relationship. When a user wants to watch the audio and video through the second device, the user can log in the corresponding APP through the user name and the password on the second device, and then the audio and video to be watched is played through the APP.
Illustratively, taking a live broadcast implementation scenario as an example, the first device serves as, for example, an H live broadcast server, and is capturing the audio and video of the scene in which it is located. The user can open the corresponding H live broadcast APP on the second device, log in with a pre-registered username and password, and access the H live broadcast server on the first device side through the APP. The audio and video captured by the first device in real time are then played in real time on the interface of the second device. The display effect of the second device in this embodiment is not described in detail here.
It should be noted that, in the on-demand implementation scenario, the operation process of the user is similar to that in the live broadcast scenario, and is not repeated here.
In other implementation scenarios, when the first device and the second device transmit messages based on the P2P transmission mechanism, the user needs to operate the first device and the second device to establish a binding relationship. Illustratively, the implementation scenario of this embodiment is projection from a mobile phone onto a smart television: the first device is, for example, a smartphone M, and the second device is, for example, a smart television N. The user needs to control the smartphone M and the smart television N to establish a binding relationship. Then, the user can select the smart television N as the target device for receiving the projection from among the devices having a binding relationship with the smartphone M. Thereafter, the smartphone M may project video and audio to the smart television N.
It should be noted that, in practical applications, the smartphone M may establish a binding relationship with multiple display devices, as shown in table 1. Similarly, the smart television N may establish a binding relationship with a plurality of projectable devices, as shown in table 2. Furthermore, before the video and audio of the smart phone M are projected onto the smart television N, the user can determine the smart television N through the smart phone M.
TABLE 1
Projecting device: smartphone M; devices having a binding relationship with it: display A, smart television N, display B
For example, in the binding relationship illustrated in table 1, the smartphone M establishes a binding relationship with, for example, the display a, the smart television N, and the display B. Based on this, the user can for example trigger the smartphone M to project on the display a, the smart tv N and the display B. In some embodiments, the user may trigger smartphone M to project on any of display a, smart tv N, and display B. In other embodiments, the user may trigger the smartphone M to project on at least two of the display a, the smart tv N, and the display B simultaneously.
TABLE 2
Display device: smart television N; devices having a binding relationship with it: smartphone M, smartphone S, tablet computer X
For example, in the binding relationship illustrated in table 2, the smart television N establishes a binding relationship with the smart phone M, the smart phone S, and the tablet computer X, for example. Based on this, the user can trigger any one of the smart phone M, the smart phone S, and the tablet computer X to project on the smart television N, for example.
The man-machine interaction process of the application is introduced according to three stages of establishing a binding relationship, determining target equipment and projecting.
Establishing a binding relationship
By way of example, the following describes a man-machine interaction operation involved in establishing a binding relationship, taking the smart television N as an example.
In some embodiments, as shown in fig. 3A-1, the user can control the smart television N to enter a "settings" interface through a remote controller. On the "settings" interface shown in fig. 3A-2, the user can see a list of setting items, which may include, for example, "basic settings", "network settings", "time settings", "display settings", "application management", and "system settings". The user can click the "system settings" item illustrated in fig. 3A-2 and see the specific setting items contained in "system settings" on the right side of the screen. As shown in fig. 3A-3, the specific setting items of "system settings" include, for example, "project to this device". The user may click "project to this device" to enter the login interface illustrated in fig. 3A-4, then enter a username in the username input control, enter a password in the password input control, and click the "login" button to enter the main interface of the projection service illustrated in fig. 3B-1. In this embodiment, the username input by the user is, for example, a first username, and the password input by the user is, for example, a first password.
It should be noted that, in the embodiment of the present application, a user may log in to the projection service on at least two devices using the same set of username and password. When the user does so, the username and the device identifiers of the at least two devices establish a correspondence, and the at least two devices automatically establish a binding relationship based on the same username. Based on this, if the user also uses the first username and the first password to log in to the projection service on the smartphone M, the smartphone M and the smart television N automatically establish a binding relationship.
If the user logs in to the projection service on the smartphone M using a second username and a second password, the user may click the "add new device" button illustrated in fig. 3B-1 to enter the "add new device" interface.
As shown in FIG. 3B-2, the interface for "Add New device" includes an input box for "enter username for New device". The user may enter a second username in the input box and then click the "OK" button.
Further, the user may see the verification interface illustrated in fig. 3B-3. The verification interface includes the text "enter the verification code received by the mobile phone number 131……6060", a verification code input box, and a countdown control that counts down from 60 seconds, decreasing by 1 each second. Thereafter, the user may enter the verification code received by the mobile phone 131……6060 into the verification code input box and click the "ok" button to enter the device list interface. The mobile phone number 131……6060 is only an example of the present application and does not limit the present application. In addition, the mobile phone number 131……6060 may be the number corresponding to the smartphone M, or may be the number corresponding to another phone, which is not limited herein.
It is noted that the second username can be the mobile phone number 131……6060, a mailbox address, or a combination of letters and numbers. When the second username is a mailbox address or a combination of letters and numbers, the second username can correspond to the mobile phone number 131……6060. In other embodiments, when the second username is a mobile phone number or a combination of letters and numbers, the second username may correspond to a mailbox address. Correspondingly, the mailbox receives the verification code, and the interface of the smart television N may present, for example, "enter the verification code received by mailbox abc@163.com".
As shown in fig. 3B-4, the device list interface presents, for example, the text "the devices corresponding to the second username include:" followed by at least one device identifier. A "select all" button is presented to the left of the text, and a radio button is presented to the left of each device identifier. The at least one device identifier includes the device identifier of the smartphone M. In some embodiments, the user may click the "select all" button and then click the "add" button, so that the smart television N establishes a binding relationship with all devices corresponding to the second username. In other embodiments, the user may click the radio button corresponding to the device identifier of the smartphone M and then click the "add" button, so that the smartphone M and the smart television N establish a binding relationship.
It is to be understood that fig. 3A-1 through 3B-4 are merely schematic depictions and do not limit the scope of the present application. In other embodiments, the user may also establish the binding relationship between the smartphone M and the smart television N through a "projection" APP. In this embodiment, the user may open the "projection" APP on the smart television N and then input the first username and the first password. The subsequent interaction between the user and the smart television N is similar to that of fig. 3B-1 to 3B-4 and is not described in detail here. In other embodiments, the user may also establish the binding relationship between the smartphone M and the smart television N by operating the smartphone M. The interaction between the user and the smartphone M is similar to that of fig. 3A-1 to 3B-4 and is not described in detail here.
In addition, the display effects presented in figs. 3A-1 through 3B-4 do not limit the present application. In other embodiments, the display effect presented by the smart television N may differ depending on the system carried by the smart television N, and the information content in the notification dialog boxes may differ as well; these are not described in detail here. It should be understood that although the user operation steps and the interface display effects may differ, any binding operation embodiment based on the same technical concept falls within the protection scope of the present application.
Determining a target device
In some embodiments, the user may determine the target device to receive the projection at the smartphone M.
For example, as shown in fig. 3C-1, a user may open a camera APP at the M end of the smartphone to enter a shooting interface.
As shown in fig. 3C-2, the shooting interface includes, for example, capture mode options such as "slow motion", "video", "photo", and "projection". The user may click the "projection" option. Alternatively, the "projection" option may be included in a settings interface, a pull-up menu, a pull-down menu, or another application interface.
Furthermore, as shown in fig. 3C-3, after the user clicks the "projection" option, a pull-down menu is displayed on the interface of the smartphone M. The title of the pull-down menu may be, for example, "select the device to project", and the device identifiers having a binding relationship with the smartphone M are listed under the title. The user may select, for example, the device identifier of the smart television N in the pull-down menu to use the smart television N as the target device for receiving the projection of the smartphone M.
It is understood that fig. 3C-1 to 3C-3 are only schematic illustrations and do not limit the embodiments of the present application. In other embodiments, the user may install the smart tv APP on the smart phone M, and then check the device identifier having the binding relationship with the smart phone M in the smart tv APP, and further select the device identifier of the smart tv N to use the smart tv N as a target device for receiving the projection of the smart phone M. And will not be described in detail herein.
It is to be understood that the above embodiments related to determining a target device are only schematic descriptions, and do not limit the embodiments of the present application. In other embodiments, the display effect presented by the smartphone M and the content of interaction with the user may be different according to the difference of the system, the equipment brand, the equipment model, and the like carried by the smartphone M. And will not be described in detail herein.
In an existing projection operation, the user needs to dial the smart television N from the smartphone M and then trigger the smart television N to answer the call before the smartphone M and the smart television N can establish a communication connection. In contrast, with the embodiment of the present application, the user does not need to perform the dialing operation or the answering operation: after the user's click trigger, the smartphone M and the smart television N automatically establish a communication connection, which improves the convenience of operation and the user experience.
Projection (projector)
Further, after determining that the smart television N is the device for receiving the projection, the user can see connection-establishment status information on the interfaces of the smartphone M and the smart television N. Then, the user can see the audio and video shot by the smartphone M on the interface of the smart television N, for example.
In addition, as shown in fig. 3D, an icon control of "end projection" is presented on the interface of the smartphone M, and after the projection is finished, the user may click the icon control of "end projection" to finish the projection. Correspondingly, an icon control for "ending projection" may also be included in the interface of the smart television N, and the user may also perform an operation of ending projection on the smart television N.
It is noted that figs. 3A-1 to 3D illustrate embodiments in a projection implementation scenario. In other embodiments, for example in a video call scenario, the operations by which the user triggers the two devices to establish a binding relationship may differ. For example, the APP carrying the video call function is WeChat, the first username is the WeChat username logged in on the smart television N, and the second username is the WeChat username logged in on the smartphone M. The user can add the two WeChat usernames as friends, thereby establishing the binding relationship between the smartphone M and the smart television N. Then, the user can initiate a video call in the WeChat APP on the smartphone M to the friend corresponding to the first username. After the user accepts the video call invitation in the WeChat APP on the smart television N, the smartphone M and the smart television N can implement the video call function. This is not described in detail here.
It should be understood that the above human-computer interaction process is only an exemplary description and does not limit the message transmission method of the embodiments of the present application. In some other embodiments of the present application, the smartphone M and the smart television N may present other display effects in response to the user's operations, and the setting process performed by the user may differ from the above description. It should be understood that although different operation embodiments may involve different user operation steps and different interface display effects, embodiments based on the same technical concept all fall within the protection scope of the present application.
The following describes an exemplary message transmission method according to an embodiment of the present application from the perspective of a device.
Referring to fig. 4, fig. 4 is a flowchart of a method of a message transmission method 100 provided in this embodiment, where the message transmission method 100 (hereinafter referred to as the method 100) is applied to a first device, and the first device is the first device 100 illustrated in fig. 2A. The method 100 comprises the steps of:
in step S101, the first device receives a sending instruction.
The sending instruction is used to instruct the first device to send a message to the second device. In different implementation scenarios, the first device receives the sending instruction in different ways.
In the projection implementation scenario, the first device may receive a sending instruction input by the user. The sending instruction contains the device identifier of the second device. In this embodiment, the human-computer interaction between the first device and the user is described in the embodiments illustrated in figs. 3C-1 to 3C-3 and is not repeated here. In addition, in this embodiment, before step S101, the first device may establish a binding relationship with the second device; the corresponding embodiment is described in detail below and is not repeated here.
In the video call implementation scenario, in some embodiments, the first device may receive a sending instruction input by the user through the logged-in video call APP. The sending instruction contains the username with which the second device logged in to the video call APP. In other embodiments, the first device may receive a video call request sent by the video call APP logged in on the second device. The video call request includes the IP address of the second device. Further, the first device may receive an "accept" instruction entered by the user; the "accept" instruction may serve as the sending instruction described in this embodiment. It should be understood that, in this embodiment, the video call APP run by the first device and the video call APP run by the second device are the same APP, for example, both are the WeChat APP. In addition, in this embodiment, before step S101, the first device may establish a binding relationship with the second device. This is not described in detail here.
In a live broadcast or on-demand implementation scenario, the first device serves as a media server, and may receive, after the second device successfully logs in, a request for obtaining a message sent by the second device. The request for obtaining the message may be used as the sending instruction described in this embodiment.
In step S102, the first device determines first media data.
The first media data is obtained by the first device by encoding media content. In some embodiments, the first device may capture first media content and then encode the captured first media content to obtain the first media data. In other embodiments, the first device may obtain the first media content from local storage or download it from the network, and then encode the first media content to obtain the first media data.
In some embodiments, the first media data comprises first audio data. In other embodiments, the first media data includes first audio data and first video data. In this embodiment, the audio content corresponding to the first audio data matches the video content corresponding to the first video data. The first audio data may support a plurality of encoding formats, such as Advanced Audio Coding (AAC) format. The first video data may support a plurality of video encoding formats, such as h.264 or h.265 formats.
Illustratively, in the video call implementation scenario, the first device may capture video via the camera 170 while capturing audio via the microphone 160B. The first device then encodes the captured video to obtain the first video data, and encodes the audio matched with the video to obtain the first audio data. For another example, in the projection implementation scenario, the first device may obtain the video being played by the first device and encode it according to a video encoding format supported by the first device to obtain the first video data. Meanwhile, the first device may obtain the audio being played and encode it according to an audio encoding format supported by the first device to obtain the first audio data.
Step S103, the first device generates a first packet according to the first media data.
The first message supports the QUIC (Quick UDP Internet Connections) protocol, which is a low-delay Internet transport layer protocol based on the User Datagram Protocol (UDP). For a detailed description of the QUIC protocol, reference may be made to the description of the UDP protocol, which is not repeated here.
Fig. 5A illustrates an exemplary data frame structure of the first packet, which includes a function field, a version number (version) field, a sequence number field, a time stamp field, a first audio data (audio data) field, and a first video data (video data) field. For example: the function field may be 8 bytes in length; the version field may be 8 bytes in length; the sequence number field may be 32 bytes in length; the time stamp field may be 32 bytes in length; the audio data field may be 8 bytes in length; and the video data field may be 8 bytes in length.
The function field indicates the purpose (which may also be described as the function) of the data transmitted in the first message; for example, the value 0x01 of the function field indicates that the data in the first message is for projection. The version field indicates the version number of the transport protocol supported by the first packet; the current version number of the transport protocol is, for example, 0x01. The sequence number field indicates the position of the first media data within the entire media data in play order. Illustratively, the sequence number may be implemented as a number. Correspondingly, according to the playing order of the media stream, if the sequence number of the media data immediately preceding the first media data is, for example, i, then the sequence number corresponding to the first media data may be i+1, where i is an integer greater than or equal to 1. The time stamp field indicates the sending time of the first packet in this embodiment. The audio data field carries the first audio data, and the video data field carries the first video data. It should be understood that if the first media data contains only audio data, the video data field may be empty.
It is understood that fig. 5A is only a schematic illustration and does not limit the data frame of the first packet described in the present application. In other embodiments, the first packet may further include a binding serial number field, which may be 32 bytes in length. In some other embodiments, the first packet may further include a device identification field of the first device, which may be 32 bytes in length.
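As a concrete illustration of the frame just described, the following minimal sketch packs a first packet with the stated field widths. It is an assumption, not the patented encoding: in particular, the embodiment does not say how the audio and video payloads are delimited, so this sketch treats the 8-byte audio data and video data fields as payload-length prefixes.

```python
import time

def build_first_packet(seq: int, audio: bytes, video: bytes) -> bytes:
    # Field widths as stated above: function 8 B, version 8 B,
    # sequence number 32 B, time stamp 32 B.
    function = (0x01).to_bytes(8, "big")        # 0x01: data is for projection
    version = (0x01).to_bytes(8, "big")         # transport protocol version 0x01
    seq_no = seq.to_bytes(32, "big")            # position in play order
    timestamp = int(time.time() * 1000).to_bytes(32, "big")  # sending time (ms)
    audio_len = len(audio).to_bytes(8, "big")   # assumed length prefix
    video_len = len(video).to_bytes(8, "big")   # audio-only media: video is b""
    return (function + version + seq_no + timestamp
            + audio_len + audio + video_len + video)
```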
In a conventional media data packet transmission method, a first device generally transmits audio data packets and video data packets through two independent transmission channels. Because the data amount contained in a video data packet is greater than that contained in an audio data packet, the audio data packets are transmitted faster than the video data packets; as a result, the sound and the picture may be out of sync when played on the second device side, and the user experience is poor. Based on this, in the embodiment of the application, the first device carries both the first audio data and the first video data in the first message, so that the first audio data and the first video data are packaged together and sent together. In this way, when the second device plays the content corresponding to the first media data, the audio and video remain synchronized, which improves the user experience.
Step S104, the first device sends a first message to the second device.
In some embodiments, the first device is directly connected to the second device through the communication channel. The first device may send the first message to the second device over the communication channel. For example, in a live or on-demand implementation scenario, a network channel is included between the first device and the second device. In this embodiment, the first device may send the first packet to the second device through a downlink network channel. As another example, in a projection or video call implementation scenario, a first device establishes a P2P connection with a second device, for example. In this embodiment, the first device may send the first packet to the second device through the P2P connection channel. In other embodiments, the first device and the second device are not directly connected, and the first device may forward the first packet to the second device through the cloud server. For example, in a projected implementation scenario, the first device and the second device do not establish a P2P connection, and the first device may forward the first packet to the second device through the cloud server. For example, in this embodiment, the first message includes a second device identification field, and after receiving the first message from the first device, the cloud server may determine the second device according to the second device identification field in the first message, and then send the first message to the second device.
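As an illustration of the cloud-forwarding path, the following is a minimal sketch that routes a packet by its second-device identification field; the extractor and the per-device send callables are assumptions, since the embodiment does not specify the server's internal interfaces.

```python
from typing import Callable, Dict

def forward_first_packet(packet: bytes,
                         device_id_of: Callable[[bytes], str],
                         routes: Dict[str, Callable[[bytes], None]]) -> None:
    # device_id_of extracts the second-device identification field from the
    # packet; routes maps each registered device identifier to a send function.
    target = device_id_of(packet)
    routes[target](packet)
```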
With this implementation, the first device can customize the format of the message so that the message contains only the necessary fields. This simplifies the data message, reduces the amount of data it contains, reduces the transmission resources it occupies, and greatly shortens the transmission delay. In addition, the first device encapsulates the audio data and the video data together for transmission, so that the sound and the picture stay synchronized on the second device side.
In practical implementation, in some embodiments, after the first device sends the first packet to the second device, the second device may not feed back any data to the first device. For example, in the on-demand implementation scenario, after receiving a data packet, the second device only plays the media content corresponding to the data packet. In this embodiment, every message sent by the first device to the second device may include audio data and video data, determined directly by the first device according to the media content. In other embodiments, after the first device sends the first packet to the second device, the second device feeds back response information indicating receipt of the first packet. In this embodiment, the first device may adjust the media data that a message to be sent should include according to the transmission rate of the network.
After step S104, the first device may receive a first response message. The first response message indicates that the second device receives response information of the first message. The first response message may also support the QUIC protocol.
Fig. 5B illustrates an exemplary data frame structure of the first response packet, which includes a function field, a version number (version) field, a sequence number field, and a time stamp field. The length of each field in the first response message is as described in the embodiment illustrated in fig. 5A. In this embodiment, the function field indicates the function of the data carried in the first response packet, for example, a response to the first packet. The version field indicates the version number of the transport protocol supported by the first response packet; since the first response packet supports the same transport protocol and version as the first packet, this field is the same as the version number field in fig. 5A. The sequence number field indicates the sequence number corresponding to the packet received by the second device; the sequence number in the first response packet is, for example, i+1. The time stamp field indicates the time when the second device received the first packet.
It is understood that fig. 5B is only a schematic illustration, and the data frame of the first response packet described in the present application is not limited thereto. In other embodiments, the first response packet may further include a binding serial number field. Further, in some other embodiments, the first response message may further include a device identification field of the second device.
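Conversely, the following is a sketch of parsing the base response frame of fig. 5B under the same assumed field widths; the optional binding serial number and device identification fields are not handled here.

```python
def parse_response_packet(pkt: bytes):
    # Base layout of fig. 5B with the widths stated for fig. 5A:
    # function 8 B, version 8 B, sequence number 32 B, time stamp 32 B.
    function = int.from_bytes(pkt[0:8], "big")
    version = int.from_bytes(pkt[8:16], "big")
    seq_no = int.from_bytes(pkt[16:48], "big")
    recv_ts_ms = int.from_bytes(pkt[48:80], "big")  # time the second device received the packet
    return function, version, seq_no, recv_ts_ms
```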
Similar to sending the first message, in some embodiments, the first device may receive the first response message directly from the second device. In other embodiments, the first device may receive the first response message from the cloud server. In this embodiment, for example, the first response message may include a first device identification field, and after receiving the first response message from the second device, the cloud server may determine the first device according to the first device identification field in the first response message, and then send the first response message to the first device.
With this implementation, the second device can likewise customize the format of the response message so that it contains only the necessary fields, which simplifies the response message, reduces the transmission resources it occupies, and shortens the transmission delay.
Further, the first device may calculate the transmission delay of the first packet according to the timestamp in the first response packet and the timestamp in the first packet, and then generate second media data according to that transmission delay. The second media data is the media data that the second message should contain. The second message is the subsequent message to be sent, determined by the first device according to the transmission delay of the first message; it may be the message sent by the first device to the second device immediately after the first message.
Illustratively, when the transmission delay of the first packet is greater than or equal to a first preset threshold and less than a second preset threshold, the first device generates second media data that includes second audio data and second video data, where the resolution of the video corresponding to the second video data is lower than that of the video corresponding to the first video data. When the transmission delay of the first message is greater than or equal to the second preset threshold, the first device generates second media data that contains second audio data and no video data. The first preset threshold is, for example, 2 milliseconds (ms), and the second preset threshold is, for example, 5 ms.
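The threshold logic above can be summarized in a short sketch; the function and parameter names are assumptions, and the below-first-threshold branch (keep the original resolution) follows the example given later in the description.

```python
FIRST_THRESHOLD_MS = 2    # first preset threshold, example value
SECOND_THRESHOLD_MS = 5   # second preset threshold, example value

def select_next_media(delay_ms, audio, full_res_video, low_res_video):
    # delay_ms: response timestamp minus sending timestamp of the previous packet
    if delay_ms < FIRST_THRESHOLD_MS:
        return audio, full_res_video   # network fast enough: keep resolution
    if delay_ms < SECOND_THRESHOLD_MS:
        return audio, low_res_video    # degrade the video resolution
    return audio, None                 # send audio only
```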
The transmission delay of the message can indicate the transmission rate of the current network. Therefore, by adopting the embodiment of the application, the first device can determine the transmission rate of the current network in time, and then adaptively adjust the data volume of the media data contained in the message to be transmitted, so that the transmission delay can be avoided, and further, the real-time playing effect of the second device can be optimal.
Further, the first device may generate a second packet according to the second media data. And then sending the second message to the second equipment.
For example, the frame format of the second message may be as shown in fig. 5A. And will not be described in detail herein. The value of the sequence number field in the second message may be, for example, i + 2. The value of the time stamp field in the second message indicates the time at which the second message is sent. The audio data field indicates the second audio data. The video data field indicates the second video data.
It is noted that after sending the second message to the second device, the first device may also receive a second response message. The first device then calculates the transmission delay of the second message according to the timestamp contained in the second response message and the timestamp in the second message, generates third media data according to that delay, and generates a third message from the third media data; and so on, until the first device has sent the entire media data to the second device, or until the first device disconnects from the second device. This is not described in detail here.
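Putting the preceding sketches together, the per-packet send/respond/adapt cycle could look as follows; media_source.next(), send, recv_response, and now_ms are hypothetical interfaces standing in for the capture, transport, and clock facilities of the first device, and build_first_packet and parse_response_packet are the sketches given above.

```python
def stream_media(media_source, send, recv_response, now_ms):
    seq, delay_ms = 1, 0.0
    while True:
        media = media_source.next(delay_ms)  # (audio, video) encoded for the observed delay
        if media is None:                    # entire media data has been sent
            break
        audio, video = media
        sent_at = now_ms()
        send(build_first_packet(seq, audio, video or b""))
        _, _, _, recv_ts = parse_response_packet(recv_response())
        delay_ms = recv_ts - sent_at         # drives the next media selection
        seq += 1
```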
In summary, in the message transmission method disclosed in the present application, the related device only includes necessary data in the message, so that the format of the message can be simplified, transmission resources can be saved, and transmission performance can be improved. Furthermore, the message transmitted by the first device contains audio data and video data which are matched, so that synchronous playing of sound and pictures at the second device end can be ensured. In addition, in the application, the first device determines the media data to be transmitted according to the transmission delay, so that the transmission delay can be avoided, and the real-time playing effect of the second device is optimal.
As can be seen from the above description of the implementation scenario, in some embodiments, before step S101 of the method 100, the first device may establish a binding relationship with the second device through the cloud server.
As shown in fig. 6, a method 1001 for establishing a binding relationship between a first device and a second device (hereinafter referred to as method 1001) includes the following steps:
in step S11, the second device sends a first login request to the cloud server.
The first login request includes, for example, a first username, a first password, and a device identifier of the second device.
The first username and the first password are entered by a user. In this step, the human-computer interaction process between the user and the second device is described with reference to the embodiments illustrated in fig. 3A-1 to fig. 3A-4, and is not described herein again.
After receiving the first username and the first password, the cloud server verifies whether the first username and the first password are registered usernames and passwords, and if so, the verification is passed. Thereafter, the cloud server may transmit login response information to the second device. Accordingly, the second device presents, for example, the interface illustrated in FIG. 3B-1.
In step S12, the second device sends a first binding request to the cloud server.
The first binding request is used to request binding with the devices corresponding to another username.
The first binding request is triggered to be sent by a user. The implementation process of triggering the second device by the user is described with reference to the embodiment illustrated in fig. 3B-1, and is not described here again.
In step S13, the cloud server sends an instruction to the second device to acquire a new device user name.
Accordingly, the second device side presents, for example, the interface illustrated in fig. 3B-2.
In step S14, the second device sends the second username to the cloud server.
The second username is, for example, the username with which the first device logs in to the cloud server. The second username may be, for example, a mobile phone number, a mailbox address, or a combination of letters and numbers. When the second username is a combination of letters and numbers, it corresponds to at least one of a mobile phone number and a mailbox address. The first device logs in to the cloud server using the second username before step S14.
In this step, the second user name is input by the user and triggered to be sent.
And step S15, the cloud server sends the verification code to the mobile phone number or the mailbox corresponding to the second user name.
Correspondingly, the cloud server also sends reminding information to the second device, wherein the reminding information is used for indicating a channel for receiving the verification code. The reminder information is shown in the exemplary embodiment of fig. 3B-3.
Step S16, the second device sends a request for obtaining a device identifier corresponding to the second username to the cloud server.
Wherein the request includes an authentication code.
Step S17, the cloud server sends the device identifier corresponding to the second username to the second device.
Referring to fig. 3B-4, the device identifier corresponding to the second username may be presented in the form of a pull-down menu.
In step S18, the second device sends a second binding request to the cloud server.
Wherein the second binding request includes at least one device identifier. The at least one device identification comprises a device identification of the first device.
In step S19, the cloud server sends the binding serial numbers to the first device and the second device, respectively.
Wherein, the binding serial number may be a random string.
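The following is a minimal sketch of step S19, assuming the cloud server keeps bindings in a simple in-memory mapping; the storage layout and serial-number length are assumptions.

```python
import secrets
import string

def assign_binding_serial(bindings: dict, first_id: str, second_id: str) -> str:
    # The binding serial number may be a random string, e.g. "a34e68rg".
    serial = "".join(secrets.choice(string.ascii_lowercase + string.digits)
                     for _ in range(8))
    bindings[serial] = (first_id, second_id)  # stored against both device identifiers
    return serial
```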
It is to be appreciated that although in the illustrated embodiment of fig. 6, the second device initiates the binding request, the application is not so limited. In other embodiments, the binding request may be initiated by the first device. In the embodiment in which the first device initiates the binding request, the cloud server interacts with the signaling of the first device, and the user interacts with the first device, similar to the embodiment illustrated in fig. 6, and will not be described in detail here.
In some embodiments, the first device and the second device may establish a P2P connection when the distance between the first device and the second device is within a certain range. Furthermore, the first device and the second device can transmit messages through a P2P connection channel. In other embodiments, when the distance between the first device and the second device is greater than a certain range, the first device and the second device cannot establish P2P connection, and accordingly, the first device and the second device may transmit a message through the cloud server.
In a conventional method, the cloud server allocates a call number to each of the first device and the second device; before invoking the second device to play media content, the first device must dial the call number of the second device, and only after the second device answers can the first device send messages to the second device. As can be seen, the conventional binding relationship establishing method has a complex operation flow, and the cloud server needs to set up and maintain third-party call numbers such as extensible messaging and presence protocol (XMPP), session initiation protocol (SIP), and voice over IP (VoIP) numbers, which occupies more storage resources. With the embodiment of the present application, the cloud server can bind the first device and the second device according to the identifier of the first device and the identifier of the second device, so that the operation process is simpler and storage resources can be saved.
The following describes a message transmission method according to an embodiment of the present application with reference to an example.
Illustratively, the first device 100 is, for example, a mobile phone. The second device 200 is, for example, an electronic device provided with a larger display, the size of the display being, for example, 1456.4 millimeters (mm) × 850.9 mm, or 1232.4 mm × 717.3 mm. For convenience of description, this specification refers to this type of electronic device as a "large screen". The large screen can play audio and video files. In this embodiment, projection from the mobile phone to the large screen is taken as an example. Accordingly, the cloud server in this embodiment is a projection server.
Fig. 7 shows a signaling interaction diagram of a message transmission method 200. The message transmission method 200 (hereinafter referred to as the method 200) includes the following steps:
step S201, the mobile phone sends a first login request to the projection server, and the large screen sends a second login request to the projection server.
The first login request comprises a first user name, a first password and a MAC address 'MAC 01' of the mobile phone. The second login request includes a second username, a second password, and a large screen MAC address "MAC 02".
And after the first user name and the first password are verified, the projection server sends login response information to the mobile phone. And after the second user name and the second password are verified, the projection server sends login response information to the large screen. Correspondingly, the mobile phone end and the large screen end can present the interface illustrated in fig. 3B-1.
Further, the user can perform an operation of triggering binding on a large screen. For example, the user performs the corresponding operations in the embodiment of FIG. 3B-1.
Step S202, the large screen sends a first binding request to the projection server.
Step S203, the projection server sends an instruction for acquiring a new device user name to a large screen.
Accordingly, the user may see the interface illustrated in FIG. 3B-2, for example, on a large screen. The user may then enter the first username on the large screen and click the "ok" button. The user can then see an interface as illustrated in fig. 3B-3.
And step S204, sending the first user name to the projection server by a large screen.
In this embodiment, the first user name is, for example, a mobile phone number "13000000000". The mobile phone number "13000000000" is, for example, a number corresponding to the mobile phone according to the present embodiment.
It is noted that the first username can also be a mailbox address or a combination of letters and numbers. When the first user name is a combination of letters and numbers, the first user name has a corresponding relationship with at least one of a mobile phone number and a mailbox address.
In step S205, the projection server sends the verification code to the cell phone number "13000000000".
The verification code is, for example, "057621".
It should be noted that, when the first username is a mailbox address, the projection server may send an authentication code to the corresponding mailbox. When the first username is a combination of letters and numbers, the projection server may send the verification code to a mobile phone number or a mailbox corresponding to the first username.
The user may then enter the verification code "057621" into the corresponding entry box on the large screen, clicking the "OK" button.
Step S206, the large screen sends a request for obtaining the device identifier corresponding to the first user name to the projection server.
Step S207, the projection server sends the device identifier corresponding to the first username to the large screen.
Accordingly, the large screen presents the interface of FIGS. 3B-4. The device identification corresponding to the first username includes "MAC 01".
Then, the user selects, for example, all the device identifiers corresponding to the first username, and further clicks the "add" button.
And step S208, the large screen sends a second binding request to the projection server.
The second binding request includes all device identifications corresponding to the first username.
Step S209, the projection server sends binding serial numbers to the mobile phone and the large screen respectively.
The binding serial number is, for example, "a 34e68 rg". And, the projection server may store the binding serial number "a 34e68 rg" in correspondence with the MAC address "MAC 01" of the handset and the MAC address "MAC 02" of the large screen.
Further, before projecting the audio and video of the mobile phone onto the large screen, the user can open the camera APP on the mobile phone, select the "projection" option on the shooting interface, and click "MAC02" in the "select the device to project" pull-down menu that pops up.
Step S210, the mobile phone sends a projection request to the projection server.
Wherein the projection request includes "MAC 02".
Step S211, the projection server sends a projection instruction to the large screen.
The projection instruction includes, for example, an IP address of a mobile phone.
Step S212, the large screen and the mobile phone establish a P2P connection.
Step S213, the mobile phone determines the media data 01.
The mobile phone acquires videos through the camera and encodes the acquired videos to obtain video data 01. Meanwhile, the mobile phone collects audio through a microphone, and then encodes the audio to obtain audio data 01. Wherein the media data 01 includes video data 01 and audio data 01.
Step S214, the mobile phone generates a message 01 according to the media data 01.
The data frame structure of message 01 is shown in fig. 5A, and will not be described in detail here.
In this embodiment, the message 01 is, for example, the first message transmitted by the mobile phone to the large screen; accordingly, the video data 01 and the audio data 01 rank first in play order, and the sequence number field in the message 01 is, for example, 0x0001. In addition, the audio data field in the message 01 carries the audio data 01, the video data field carries the video data 01, and the time stamp field indicates the sending time, for example, time t0.
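In terms of the build_first_packet sketch given earlier, step S214 roughly corresponds to the following; the payload variables are placeholders.

```python
audio_data_01 = b"..."  # AAC-encoded audio captured by the microphone (placeholder)
video_data_01 = b"..."  # H.264/H.265-encoded video from the camera (placeholder)
message_01 = build_first_packet(seq=0x0001, audio=audio_data_01, video=video_data_01)
```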
Step S215, the mobile phone sends the message 01 to the large screen through the P2P connection.
And step S216, generating a response message r01 by the large screen.
The data frame structure of the response message r01 is shown in fig. 5B, and is not described in detail here. The sequence number of the sequence number field in the response message r01 is 0x0001, and the time indicated by the time stamp field is, for example, time t 1.
In addition, after step S216, the large screen decodes the video data 01 and the audio data 01 in the message 01, respectively, and plays the corresponding audio and video.
And step S217, the large screen sends a response message r01 to the mobile phone through P2P connection.
In step S218, the mobile phone calculates a difference between the time t1 and the time t 0.
Wherein, the difference between the time t1 and the time t0 indicates the transmission delay of the message 01.
Step S219, when the difference is determined to be greater than or equal to 2 ms and less than 5 ms, the mobile phone determines the media data 02-1; when the difference is determined to be greater than or equal to 5 ms, the mobile phone determines the media data 02-2. That is, after determining the transmission delay, the mobile phone decides which media data to generate according to the delay.
Wherein the media data 02-1 comprises video data 02-1 and audio data 02. Wherein the resolution of the video corresponding to the video data 02-1 is lower than the resolution of the video corresponding to the video data 01. Media data 02-2 includes audio data 02.
In addition, if the difference is less than 2ms, the mobile phone determines the media data 02, where the media data 02 includes the video data 02 and the audio data 02, and the resolution of the video corresponding to the video data 02 is the same as the resolution of the video corresponding to the video data 01.
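Steps S218 and S219 correspond to the select_next_media sketch given earlier; with hypothetical variable names:

```python
delay_ms = t1 - t0  # transmission delay of message 01
audio, video = select_next_media(delay_ms, audio_02, video_02, video_02_1)
# delay < 2 ms     -> media data 02   (audio 02 + video 02, same resolution)
# 2 ms to < 5 ms   -> media data 02-1 (audio 02 + lower-resolution video 02-1)
# delay >= 5 ms    -> media data 02-2 (audio 02 only; video is None)
```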
Step S220, the mobile phone generates a message 02.
The mobile phone generates a corresponding message from the media data. The media data carried in the packet 02 is media data 02, media data 02-1, or media data 02-2. The sequence number of the sequence number field in the packet 02 is, for example, 0x0002, and the time indicated by the time stamp field is, for example, time t 2.
Step S221, the mobile phone sends the message 02 to the large screen through the P2P connection.
Further, the large screen decodes and plays the media data contained in the message 02, and generates a response message for the message 02, in which the time stamp field indicates, for example, time t3. The large screen then sends the response message for the message 02 to the mobile phone through the P2P connection. The mobile phone calculates the difference between the time t3 and the time t2, and then determines the media data to be carried in a message 03 according to the size of the difference. This is not described in detail here.
It is to be understood that fig. 7 is a schematic illustration and not a limitation on the technical solution of the present application. In some other embodiments, the first device may be another device, and the unique identifiers of the first device and the second device may be other device identifiers. In other embodiments, the first device may also play video and audio on the second device through other implementations, such as on-demand playback, which are not described in detail here. In addition, this specification does not show all the implementation scenarios to which the present application applies; in other implementation scenarios, other implementations based on the technical idea of the present application also fall within the protection scope of the present application.
In summary, with the implementations of the present application, the relevant device includes only the necessary data in a message, so the message format is simplified, transmission resources are saved, and transmission performance is improved. Furthermore, each message transmitted by the first device contains matched audio data and video data, which ensures synchronized playing of sound and picture at the second device. In addition, in the present application, the first device determines the media data to be transmitted according to the transmission delay, so the impact of transmission delay can be mitigated and the real-time playing effect at the second device is kept optimal.
The above embodiments introduce various aspects of the message transmission method provided in the present application from the perspective of the hardware structure, the software architecture, and the actions performed by the software and hardware of the first device, the second device, and so on. Those skilled in the art will readily appreciate that the processing steps described in connection with the embodiments disclosed herein, such as determining media data and generating messages, may be implemented not only in hardware but also in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
For example, the first device 100 may implement the corresponding functions in the form of functional modules. In some embodiments, an electronic device may include a transceiver module and a processing module. The transceiver module may be configured to perform the receiving and transmitting operations of the first device in any of the embodiments illustrated in fig. 4, 6 and 7. The processing module may be configured to perform operations of the first device other than receiving and transmitting operations in any of the embodiments illustrated in fig. 4, 6 and 7 described above. For specific content, reference may be made to descriptions related to the first device in the embodiments corresponding to fig. 4, fig. 6, and fig. 7, which are not described herein again.
It is understood that the above division into modules is only a division of logical functions. In actual implementation, the functions of the transceiver module may be implemented by an integrated transceiver, and the functions of the processing module may be implemented by an integrated processor. As shown in fig. 8A, the electronic device 80 includes a transceiver 801 and a processor 802. The transceiver 801 may perform the receiving and transmitting operations of the first device in any of the embodiments illustrated in fig. 4, 6 and 7 described above. The processor 802 may be configured to perform the operations of the first device other than receiving and transmitting in any of the embodiments illustrated in fig. 4, 6 and 7 described above.
For example, the transceiver 801 may be used to receive a sending instruction. The processor 802 may be configured to determine first media data, the first media data including first video data and first audio data. The processor 802 may be further configured to generate a first packet according to the first media data, where the first packet includes a function field, a version number field, a sequence number field, a timestamp field, a field of the first audio data, and a field of the first video data; the function field indicates the use of the data transmitted by the first packet, the version number field indicates the version number of the transmission protocol supported by the first packet, the sequence number field indicates the ordering of the first media data among all media data according to the playing order, and the timestamp field indicates the sending time of the first packet. The transceiver 801 may also be configured to send the first packet to the second device.
For specific content, reference may be made to descriptions related to the first device in the embodiments corresponding to fig. 4, fig. 6, and fig. 7, which are not described herein again.
Fig. 8A illustrates an electronic device of the present application from the perspective of separate functional entities. In another implementation scenario, the independently running functional entities may be integrated into one hardware entity, such as a chip. Accordingly, as shown in fig. 8B, in this implementation scenario, the electronic device 81 may include a processor 811, a transceiver 812, and a memory 813. The memory 813 may be used to store programs/code preinstalled in the electronic device 81, or to store code used when the processor 811 executes.
It should be understood that the electronic device 81 of the present application may correspond to the first device in the embodiments corresponding to fig. 4, 6 and 7 of the present application, wherein the transceiver 812 is configured to perform the receiving and transmitting of messages and data in any of the embodiments illustrated in fig. 4, 6 and 7, and the processor 811 is configured to perform the other processing of the first device besides the receiving and transmitting of messages and data in any of those embodiments. Details are not repeated here.
For specific content, reference may be made to descriptions related to the first device in the embodiments corresponding to fig. 4, fig. 6, and fig. 7, which are not described herein again.
In addition, the application also provides a communication system. As shown in fig. 9A, the communication system 90 provided by the present application includes a first device 01 and a second device 02. The communication system 90 may be used to implement some or all of the embodiments of the message transmission methods shown in fig. 4, 6, and 7.
For example, the first device 01 may be configured to receive a sending instruction, determine first media data, generate a first packet according to the first media data, and send the first packet to the second device, where the first media data includes first video data and first audio data, the first packet includes a function field, a version number field, a sequence number field, a timestamp field, a field of the first audio data, and a field of the first video data, the function field indicates the use of the data transmitted by the first packet, the version number field indicates the version number of the transmission protocol supported by the first packet, the sequence number field indicates the ordering of the first media data among all media data according to the playing order, and the timestamp field indicates the sending time of the first packet. The second device 02 may be configured to receive the first packet.
In other embodiments, the communication system provided by the present application may further include a cloud server. Illustratively, as shown in fig. 9B, the communication system 91 provided by the present application includes a first device 01, a second device 02, and a cloud server 03. The communication system 91 may be used to implement some or all of the embodiments of the message transmission methods shown in fig. 6 and 7. For example, in this embodiment, the first device 01 may also establish a binding relationship with the second device 02 through the cloud server 03.
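For illustration, the binding flow of this embodiment (login, a binding request, a user-name lookup gated by a verification code, and a binding sequence number issued by the cloud server) can be traced from the first device's side as below. The CloudClient transport and all message names here are assumptions for the sketch, not an API defined by the patent.

```python
class CloudClient:
    """Hypothetical transport to the cloud server; stubbed for the sketch."""

    def request(self, msg_type: str, **fields) -> dict:
        """Send one request and return the server's reply as a dict."""
        raise NotImplementedError  # a real client would use an authenticated channel

def bind_devices(cloud: CloudClient, peer_user_name: str, code: str) -> int:
    # 1. Log in, then open a binding transaction (first binding request).
    cloud.request("login")
    cloud.request("first_binding_request")
    # 2. Send the user name of the second device's owner; the cloud server
    #    sends a verification code to that user's phone number or email.
    cloud.request("user_name", name=peer_user_name)
    # 3. Request the device identifiers registered under that user name,
    #    presenting the verification code the user typed in.
    reply = cloud.request("get_device_ids", name=peer_user_name, code=code)
    second_device_id = reply["device_ids"][0]
    # 4. Send the second binding request naming the chosen device; the
    #    cloud server answers with a binding sequence number that it also
    #    delivers to the second device.
    reply = cloud.request("second_binding_request", device_id=second_device_id)
    return reply["binding_sequence_number"]
```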
For specific content, reference may be made to related descriptions in the embodiments corresponding to fig. 4, fig. 6, and fig. 7, which are not described herein again.
In a specific implementation, the present application also provides computer storage media corresponding to the first device, the second device and the cloud server, where the computer storage medium disposed in any device may store a program, and when the program is executed, part or all of the steps in each embodiment of the message transmission method provided in fig. 4, 6 and 7 may be implemented. The storage medium in any device may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
In this application, the transceiver may be a wired transceiver, a wireless transceiver, or a combination thereof. The wired transceiver may be, for example, an Ethernet interface. The Ethernet interface may be an optical interface, an electrical interface, or a combination thereof. The wireless transceiver may be, for example, a wireless local area network transceiver, a cellular network transceiver, or a combination thereof. The processor may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP. The processor may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL), or any combination thereof. The memory may include a volatile memory, such as a random-access memory (RAM); the memory may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory may also include a combination of the above kinds of memory.
A bus interface may also be included in fig. 8B. The bus architecture may include any number of interconnected buses and bridges, linking together various circuits, including one or more processors represented by the processor 811 and memories represented by the memory 813. The bus architecture may also link together various other circuits, such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The transceiver provides a means for communicating with various other apparatuses over a transmission medium. The processor is responsible for managing the bus architecture and general processing, and the memory may store data used by the processor when performing operations.
Those of skill in the art will further appreciate that the various illustrative logical blocks and steps set forth in the embodiments of the present application may be implemented in electronic hardware, computer software, or combinations of both. Whether such functionality is implemented as hardware or software depends upon the particular application and the design requirements of the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The various illustrative logical units and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may be stored in a RAM memory, a flash memory, a ROM memory, an EPROM memory, an EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be provided in a device. In the alternative, the processor and the storage medium may reside as discrete components in a device.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When software is used, the implementation may be wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)), among others.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus and system embodiments are substantially similar to the method embodiments, so their descriptions are relatively brief, and reference may be made to the descriptions of the method embodiments in the relevant places.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (17)

1. A message transmission method, comprising:
a first device receives a sending instruction;
the first device determines first media data, wherein the first media data comprises first video data and first audio data;
the first device generates a first message according to the first media data, wherein the first message comprises a function field, a version number field, a sequence number field, a timestamp field, a field of the first audio data and a field of the first video data, the function field indicates the use of the data transmitted by the first message, the version number field indicates the version number of a transmission protocol supported by the first message, the sequence number field indicates the ordering of the first media data among all media data according to the playing order, and the timestamp field of the first message indicates the sending time of the first message;
the first device sends the first message to a second device;
the first device receives a first response message from the second device, wherein the first response message is a response message sent by the second device after receiving the first message, the first response message comprises a function field, a version number field, a sequence number field and a timestamp field, and the timestamp field of the first response message indicates the time when the second device receives the first message;
the first device calculates the transmission delay of the first message according to the timestamp field contained in the first response message and the timestamp field contained in the first message;
the first device generates second media data according to the transmission delay of the first message, wherein if the transmission delay of the first message is greater than or equal to a first preset threshold and less than a second preset threshold, the second media data comprises second audio data and second video data, and the resolution of the video corresponding to the second video data is lower than that of the video corresponding to the first video data in the first message; or, if the transmission delay of the first message is greater than or equal to the second preset threshold, the second media data comprises second audio data, and the second media data does not comprise video data;
the first device generates a second message according to the second media data; and
the first device sends the second message to the second device.
2. The method of claim 1,
the first message further includes a binding sequence number field and a device identification field of the first device, and the binding sequence number corresponds to the identifier of the first device and the identifier of the second device.
3. The method of claim 2,
the first response message further includes the binding sequence number field and the device identification field of the second device.
4. The method of claim 2 or 3, wherein before the first device receives the sending instruction, the method further comprises:
the first device sends a login request to a cloud server;
after receiving a login response from the cloud server, the first device sends a first binding request to the cloud server;
after receiving a first binding response from the cloud server, the first device sends a user name corresponding to the second device to the cloud server;
after receiving response information from the cloud server, the first device receives a verification code input by a user, and sends an acquisition request to the cloud server, wherein the acquisition request is used for indicating to acquire a device identifier corresponding to the user name, the device identifier corresponding to the user name comprises a device identifier of the second device, the acquisition request comprises the verification code, and the verification code is generated by the cloud server;
after receiving the device identifier corresponding to the user name from the cloud server, the first device sends a second binding request to the cloud server, where the second binding request includes the device identifier of the second device;
the first device receives the binding sequence number from the cloud server.
5. The method of claim 4, wherein after the first device receives the binding sequence number from the cloud server, the method further comprises:
the first device establishes a peer-to-peer (P2P) connection with the second device.
6. The method of claim 5, wherein the first device sending the first message to the second device comprises:
the first device sends the first message to the second device through the P2P connection;
or,
the first device forwards the first message to the second device through the cloud server.
7. A message transmission method, applied to a communication system, wherein the communication system comprises a first device and a second device, and the method comprises:
the first device receives a sending instruction, determines first media data, generates a first message according to the first media data, and sends the first message to the second device, wherein the first media data comprises first video data and first audio data, the first message comprises a function field, a version number field, a sequence number field, a timestamp field, a field of the first audio data and a field of the first video data, the function field indicates the use of the data transmitted by the first message, the version number field indicates the version number of a transmission protocol supported by the first message, the sequence number field indicates the ordering of the first media data among all media data according to the playing order, and the timestamp field of the first message indicates the sending time of the first message;
the second device receives the first message;
the second device responds to the first message to generate a first response message, and sends the first response message to the first device, wherein the first response message is sent after the second device receives the first message, the first response message comprises a function field, a version number field, a sequence number field and a timestamp field, and the timestamp field of the first response message indicates the time when the second device receives the first message;
the first device calculates the transmission delay of the first message according to the timestamp field contained in the first response message and the timestamp field contained in the first message, generates second media data according to the transmission delay of the first message, generates a second message according to the second media data, and sends the second message to the second device;
if the transmission delay of the first message is greater than or equal to a first preset threshold and less than a second preset threshold, the second media data comprises second audio data and second video data, and the resolution of the video corresponding to the second video data is lower than the resolution of the video corresponding to the first video data in the first message;
or, if the transmission delay of the first message is greater than or equal to the second preset threshold, the second media data comprises second audio data, and the second media data does not comprise video data.
8. The method of claim 7, wherein the communication system further comprises a cloud server, and before the first device receives the sending instruction, the method further comprises:
the first device sends a login request to the cloud server;
the cloud server sends a login response to the first device;
the first device sends a user name corresponding to the second device to the cloud server;
the cloud server sends a verification code to a mobile phone number or an email address corresponding to the user name;
the first device receives the verification code input by a user and sends an acquisition request to the cloud server, wherein the acquisition request is used for indicating to acquire a device identifier corresponding to the user name, and the device identifier corresponding to the user name comprises a device identifier of the second device;
the cloud server sends the device identification corresponding to the user name to the first device;
the first device sends a second binding request to the cloud server, wherein the second binding request contains a device identifier of the second device;
the cloud server responds to the second binding request and sends a binding sequence number to the first device and the second device.
9. An electronic device, for use as a first device, comprising a processor and a transceiver, wherein,
the transceiver is configured to receive a sending instruction;
the processor is configured to determine first media data, the first media data including first video data and first audio data;
the processor is further configured to generate a first packet according to the first media data, where the first packet includes a function field, a version number field, a sequence number field, a timestamp field, a field of the first audio data, and a field of the first video data, the function field indicates the use of the data transmitted by the first packet, the version number field indicates the version number of the transmission protocol supported by the first packet, the sequence number field indicates the ordering of the first media data among all media data according to the playing order, and the timestamp field of the first packet indicates the sending time of the first packet;
the transceiver is further configured to send the first packet to a second device;
the transceiver is further configured to receive a first response packet from the second device, where the first response packet is a response packet sent by the second device after receiving the first packet, the first response packet includes a function field, a version number field, a sequence number field, and a timestamp field, and the timestamp field of the first response packet indicates a time when the second device receives the first packet;
the processor is further configured to calculate a transmission delay of the first packet according to a timestamp field included in the first response packet and a timestamp field included in the first packet;
the processor is further configured to generate second media data according to the transmission delay of the first packet, where the second media data includes second audio data and second video data if the transmission delay of the first packet is greater than or equal to a first preset threshold and smaller than a second preset threshold, and a resolution of a video corresponding to the second video data is lower than a resolution of a video corresponding to the first video data in the first packet; or, if the transmission delay of the first packet is greater than or equal to the second preset threshold, the second media data includes second audio data, and the second media data does not include video data;
the processor is further configured to generate a second packet according to the second media data;
the transceiver is further configured to send the second packet to the second device.
10. The electronic device of claim 9,
the first packet further includes a binding sequence number field and a device identification field of the first device, and the binding sequence number corresponds to the identifier of the first device and the identifier of the second device.
11. The electronic device of claim 10,
the first response packet further includes the binding sequence number field and the device identification field of the second device.
12. The electronic device of claim 10 or 11,
the transceiver is further configured to send a login request to a cloud server;
the transceiver is further configured to send a first binding request to the cloud server after receiving a login response from the cloud server;
the transceiver is further configured to send a user name corresponding to the second device to the cloud server after receiving the first binding response from the cloud server;
the transceiver is further configured to receive an authentication code input by a user after receiving response information from the cloud server, and send an acquisition request to the cloud server, where the acquisition request is used to instruct to acquire a device identifier corresponding to the user name, where the device identifier corresponding to the user name includes a device identifier of the second device, the acquisition request includes the authentication code, and the authentication code is generated by the cloud server;
the transceiver is further configured to send a second binding request to the cloud server after receiving the device identifier corresponding to the user name from the cloud server, where the second binding request includes the device identifier of the second device;
the transceiver is further configured to receive the binding sequence number from the cloud server.
13. The electronic device of claim 12,
the processor is further configured to establish a peer-to-peer (P2P) connection with the second device.
14. The electronic device of claim 13,
the transceiver is further configured to send the first packet to the second device through the P2P connection;
or,
the transceiver is further configured to forward the first packet to the second device through the cloud server.
15. A communication system comprising a first device and a second device, wherein,
the first device is configured to receive a sending instruction, determine first media data, generate a first packet according to the first media data, and send the first packet to the second device, where the first media data includes first video data and first audio data, the first packet includes a function field, a version number field, a sequence number field, a timestamp field, a field of the first audio data, and a field of the first video data, the function field indicates the use of the data transmitted by the first packet, the version number field indicates the version number of the transmission protocol supported by the first packet, the sequence number field indicates the ordering of the first media data among all media data according to the playing order, and the timestamp field of the first packet indicates the sending time of the first packet;
the second device is configured to receive the first packet, generate a first response packet in response to the first packet, and send the first response packet to the first device, where the first response packet is a response packet sent by the second device after receiving the first packet, the first response packet includes a function field, a version number field, a sequence number field, and a timestamp field, and the timestamp field of the first response packet indicates a time when the second device receives the first packet;
the first device is further configured to calculate, after receiving the first response packet, transmission delay of the first packet according to a timestamp field included in the first response packet and a timestamp field included in the first packet, generate second media data according to the transmission delay of the first packet, generate a second packet according to the second media data, and send the second packet to the second device;
if the transmission delay of the first packet is greater than or equal to a first preset threshold and less than a second preset threshold, the second media data comprises second audio data and second video data, and the resolution of the video corresponding to the second video data is lower than the resolution of the video corresponding to the first video data in the first packet;
or, if the transmission delay of the first packet is greater than or equal to the second preset threshold, the second media data comprises second audio data, and the second media data does not comprise video data.
16. The communication system of claim 15, further comprising a cloud server, wherein,
the first device is further configured to send a login request to the cloud server;
the cloud server is used for sending a login response to the first equipment;
the first device is further configured to send a user name corresponding to the second device to the cloud server;
the cloud server is further configured to send a verification code to the mobile phone number or the email address corresponding to the user name;
the first device is further configured to receive the verification code input by the user, and send an acquisition request to the cloud server, where the acquisition request is used to instruct to acquire a device identifier corresponding to the user name, and the device identifier corresponding to the user name includes a device identifier of the second device;
the cloud server is further configured to send a device identifier corresponding to the user name to the first device;
the first device is further configured to send a second binding request to the cloud server, where the second binding request includes a device identifier of the second device;
the cloud server is further configured to send a binding sequence number to the first device and the second device in response to the second binding request.
17. A computer-readable storage medium, comprising a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 8.
CN201911345024.XA 2019-12-24 2019-12-24 Message transmission method and related equipment Active CN111092898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911345024.XA CN111092898B (en) 2019-12-24 2019-12-24 Message transmission method and related equipment

Publications (2)

Publication Number Publication Date
CN111092898A CN111092898A (en) 2020-05-01
CN111092898B (en) 2022-05-10

Family

ID=70396609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911345024.XA Active CN111092898B (en) 2019-12-24 2019-12-24 Message transmission method and related equipment

Country Status (1)

Country Link
CN (1) CN111092898B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079675B (en) * 2020-08-17 2023-06-06 华为技术有限公司 Message processing method, device, terminal equipment and mobile broadband internet surfing equipment
CN112351251A (en) * 2020-10-21 2021-02-09 深圳迈瑞生物医疗电子股份有限公司 Image processing system and terminal device
CN113923488B (en) * 2021-09-15 2024-04-16 青岛海信网络科技股份有限公司 Bus, video flow control method and storage medium
CN116033235B (en) * 2022-12-13 2024-03-19 北京百度网讯科技有限公司 Data transmission method, digital person production equipment and digital person display equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1614994A (en) * 2004-11-30 2005-05-11 北京中星微电子有限公司 Audio and visual frequencies synchronizing method for IP network conference
US6970935B1 (en) * 2000-11-01 2005-11-29 International Business Machines Corporation Conversational networking via transport, coding and control conversational protocols
WO2009137972A1 (en) * 2008-05-13 2009-11-19 中兴通讯股份有限公司 A method and system for transmitting video-audio in same stream and the corresponding receiving method and device
CN103857052A (en) * 2012-11-28 2014-06-11 华为技术有限公司 Wireless scheduling method, device and base station guaranteeing time delay service quality
CN105577649A (en) * 2015-12-11 2016-05-11 中国航空工业集团公司西安航空计算技术研究所 Audio and video stream transmission method
CN105871821A (en) * 2016-03-24 2016-08-17 浙江风向标科技有限公司 Device binding method

Similar Documents

Publication Publication Date Title
CN111092898B (en) Message transmission method and related equipment
EP3562163B1 (en) Audio-video synthesis method and system
CN109981607B (en) Media stream processing method and device, electronic equipment and storage medium
US8300079B2 (en) Apparatus and method for transferring video
US8473994B2 (en) Communication system and method
US9024995B2 (en) Video calling using a remote camera device to stream video to a local endpoint host acting as a proxy
US20180077461A1 (en) Electronic device, interractive mehotd therefor, user terminal and server
WO2016090826A1 (en) Configuration method and device
WO2015117513A1 (en) Video conference control method and system
US20180310033A1 (en) Computer implemented method for providing multi-camera live broadcasting service
EP3059945A1 (en) Method and system for video surveillance content adaptation, and central server and device
WO2021155702A1 (en) Communication processing method and device, terminal, server, and storage medium
JP2014513903A (en) Networking method, server device, and client device
WO2018076358A1 (en) Multimedia information playback method and system, standardized server and broadcasting terminal
EP3316546B1 (en) Multimedia information live method and system, collecting device and standardization server
CN116566963B (en) Audio processing method and device, electronic equipment and storage medium
WO2022267640A1 (en) Video sharing method, and electronic device and storage medium
US11943492B2 (en) Method and system for adding subtitles and/or audio
CN110213531B (en) Monitoring video processing method and device
CN110290224B (en) Resource uploading and forwarding method and device, mobile terminal, gateway and storage medium
CN113726534A (en) Conference control method, conference control device, electronic equipment and storage medium
KR101440131B1 (en) Qos Image Processing System to control Multi-CCTV by Using Mobile Client Terminal and Qos Image Processing Method thereof
CN112866729A (en) Method for reducing live network broadcast time delay and live network broadcast system
WO2022199484A1 (en) Media playback method and apparatus and electronic device
WO2022174664A1 (en) Livestreaming method, apparatus and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant