CN118101669A - Screen projection method and device - Google Patents

Screen projection method and device

Info

Publication number
CN118101669A
CN118101669A
Authority
CN
China
Prior art keywords
screen
information
message
throwing
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211487985.6A
Other languages
Chinese (zh)
Inventor
李建
杨彦伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202211487985.6A priority Critical patent/CN118101669A/en
Priority to PCT/CN2023/094525 priority patent/WO2024108928A1/en
Publication of CN118101669A publication Critical patent/CN118101669A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H04L 63/0823 Network architectures or network communication protocols for network security for authentication of entities using certificates
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H04L 63/083 Network architectures or network communication protocols for network security for authentication of entities using passwords
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/40 Network security protocols

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiment of this application discloses a screen projection method and apparatus, relates to the field of computer technologies, and can implement screen projection across networks. The method includes: a cloud device receives a first message sent by a first device, and the cloud device sends the first message to a second device. The first message is used to instruct the second device to perform a screen projection operation or to obtain a device state of the second device. The first message includes a JavaScript object notation (JSON) string and uniform resource locator (URL) information; the JSON string includes application layer control information; the application layer control information includes a control information field and a state information field; and the URL information includes a keyword of a first protocol.

Description

Screen projection method and device
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a screen projection method and device.
Background
Screen projection refers to synchronizing media content played on one electronic device (such as a mobile phone, tablet, or computer) to another electronic device for playing and viewing. For example, multimedia content, game interfaces, and the like played on a small-screen device (such as a mobile phone) can be projected to a large-screen device (such as a smart television) for playing; using the display screen of the large-screen device for playback provides a better user experience.
In related screen projection methods, the casting device and the device being cast to must be in the same local area network, so screen projection across networks cannot be implemented.
Therefore, how to implement cross-network screen projection is one of the problems that those skilled in the art need to solve.
Disclosure of Invention
The embodiment of the application provides a screen projection method and a screen projection device, which can realize cross-network screen projection. In order to achieve the above purpose, the embodiment of the application adopts the following technical scheme:
In a first aspect, an embodiment of this application provides a screen projection method, where the method includes: a cloud device receives a first message sent by a first device, and the cloud device sends the first message to a second device. The first message is used to instruct the second device to perform a screen projection operation or to obtain a device state of the second device. The first message includes a JSON string and URL information; the JSON string includes application layer control information; the application layer control information includes a control information field and a state information field; and the URL information includes a keyword of the first protocol.
In related screen projection methods, the casting device and the device being cast to must be in the same local area network and cast point to point, so screen projection across networks cannot be implemented. In the method provided by the embodiment of this application, the cloud device can receive the projection message (that is, the first message) sent by the casting device (that is, the first device) and forward it to the projected-to end (that is, the second device). The two devices do not need to be in the same local area network; instead, screen projection across networks can be implemented through the cloud device.
The control information field includes a screen projection instruction field, and the screen projection instruction field includes a uniform resource identifier (URI) of the media content to be projected.
In this way, in the embodiment of this application, the cloud device can forward the URI of the to-be-projected media content sent by the casting device to the projected-to end, so that screen projection can be implemented across networks.
The status information field includes a switch status field, a screen projection switch status field, a video capability field, an audio capability field, and a playing-media-content URI field.
It can be seen that, in the embodiment of this application, the cloud device may forward the switch status field, video capability field, audio capability field, playing-media-content URI field, and other fields to obtain the status of the projected-to end, thereby implementing screen projection across networks.
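As an illustrative sketch only (the field names below are assumptions; the patent does not publish the exact message schema), a first message of the kind described above could be assembled as a JSON string plus URL information:

```python
import json

def build_first_message(media_uri: str) -> dict:
    """Assemble a hypothetical 'first message': a JSON string carrying
    application layer control information, plus URL information whose
    scheme stands in for the first protocol's keyword."""
    app_control = {
        "control": {                      # control information field
            "cast": {"uri": media_uri},   # screen projection instruction with the media URI
        },
        "status": {                       # status information field
            "power": "on",                # switch status
            "castSwitch": "on",           # screen projection switch status
            "video": True,                # video capability
            "audio": True,                # audio capability
            "playingUri": "",             # URI of currently playing media content
        },
    }
    return {
        "json": json.dumps(app_control),      # the JSON character string
        "url": "firstproto://cloud/cast/v1",  # URL information with a made-up protocol keyword
    }

msg = build_first_message("https://media.example.com/movie.mp4")
```

Under this sketch, the cloud device simply forwards `msg` unchanged to the second device; nothing in it depends on the two devices sharing a local area network.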
Optionally, the first message further includes transport layer information, where the transport layer information indicates that the transport layer of the first message uses the transmission control protocol (TCP), transport layer security (TLS), the user datagram protocol (UDP), or datagram transport layer security (DTLS).
Optionally, the first message further includes network layer information, where the network layer information indicates that the network layer of the first message uses IPv4 or IPv6.
Optionally, the first message further includes data link layer information, where the data link layer information indicates that the data link layer of the first message uses IEEE 802.11 or IEEE 802.3.
Optionally, the first message further includes physical layer information, where the physical layer information indicates that the physical layer of the first message uses wireless fidelity (Wi-Fi), Ethernet, Bluetooth, or ZigBee.
It can be understood that the digital living network alliance (DLNA) protocol supports screen projection only over Internet Protocol (IP) channels, whereas the first protocol supports IP, PLC, Bluetooth, and other transmission channels, and more transmission channels can be extended. In addition, the DLNA protocol is a local area network protocol and does not support remote control. The DLNA protocol also does not support device authentication by default, leaving a security risk of unauthorized control. The first protocol supports both remote control and device authentication, improving user experience and security compared with the DLNA protocol.
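The optional per-layer fields above each take one of a small set of enumerated values. As a rough sketch (the field names are assumptions), a receiver could validate them like this:

```python
# Allowed values for each optional layer-information field, as enumerated above.
ALLOWED_LAYERS = {
    "transport": {"TCP", "TLS", "UDP", "DTLS"},
    "network": {"IPv4", "IPv6"},
    "data_link": {"IEEE 802.11", "IEEE 802.3"},
    "physical": {"Wi-Fi", "Ethernet", "Bluetooth", "ZigBee"},
}

def validate_stack(stack: dict) -> bool:
    """Return True if every declared layer value is one the method allows."""
    return all(value in ALLOWED_LAYERS.get(layer, set())
               for layer, value in stack.items())

ok = validate_stack({"transport": "TLS", "network": "IPv6",
                     "data_link": "IEEE 802.11", "physical": "Wi-Fi"})
```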
In one possible implementation, the method may further include: the cloud device receives authentication information sent by electronic equipment, wherein the electronic equipment comprises first equipment and/or second equipment, and the authentication information comprises at least one of a certificate, a License or a personal identification number PIN of a user; the cloud device performs security authentication according to the authentication information; and the cloud device sends the cloud device authentication information to the electronic device.
It should be noted that, before the cloud device establishes connection with the electronic device, the cloud device performs the first protocol authentication through the authentication information of the electronic device, so that connection with an unauthenticated device can be prevented, and therefore connection security is ensured.
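A minimal sketch of the cloud-side authentication step follows. The credential checks are placeholders for illustration; a real cloud device would verify certificates and Licenses cryptographically rather than by set lookup.

```python
# Placeholder credential stores (assumptions, not the patent's mechanism).
TRUSTED_CERTS = {"cert-device-42"}
VALID_LICENSES = {"lic-0001"}

def authenticate(device_info: dict, expected_pin: str) -> bool:
    """Accept a device that presents at least one valid credential:
    a certificate, a License, or the user's PIN."""
    if device_info.get("pin") == expected_pin:
        return True
    if device_info.get("certificate") in TRUSTED_CERTS:
        return True
    return device_info.get("license") in VALID_LICENSES
```

On success, the cloud device would send its own authentication information back to the electronic device, so that both sides are authenticated before any first message is forwarded.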
In a second aspect, an embodiment of the present application provides another screen projection method, where the method includes: the method comprises the steps that a first message is sent to second equipment, the first message is used for indicating the second equipment to perform screen throwing operation or obtaining equipment states of the second equipment, the first message comprises a JSON character string and URL information, the JSON character string comprises application layer control information, the application layer control information comprises a control information field and a state information field, and the URL information comprises keywords of a first protocol.
The first message may be sent to the second device by way of the cloud device.
In the method provided by the embodiment of the application, the cloud device can receive the screen-throwing message (i.e. the first message) sent by the screen-throwing device (i.e. the first device) and forward the screen-throwing message to the screen-throwing end (i.e. the second device), and the screen-throwing device do not need to be in the same local area network, but can realize screen throwing across networks through the cloud device.
In one possible implementation, the method may further include: receiving status information sent by at least one electronic device, where the at least one electronic device includes the second device. The status information includes at least one of switch status information, screen projection switch status information, video capability information, audio capability information, or playing-media-content URI information. The switch status indicates whether the device is powered on; the screen projection capability information indicates whether the device supports screen projection; the video capability information indicates whether the device supports playing video; the audio capability information indicates whether the device supports playing audio; and the playing-media-content URI information indicates the URI of the media content currently played by the device.
It can be seen that the switch status information, screen projection switch status information, video capability information, audio capability information, and playing-media-content URI information of an electronic device can be obtained as references for subsequently determining the projection content and the projection device.
The projection-capable devices are then determined according to the status information, where a projection-capable device is an electronic device, among the at least one electronic device, that supports screen projection and is powered on.
It can be understood that only a device that supports screen projection and is in the on state can be projected to, so the devices capable of receiving a projection can be determined from the electronic devices through their status information.
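The selection step above amounts to a simple filter over the reported status information; a sketch with assumed field names:

```python
def castable_devices(devices: list) -> list:
    """Keep the devices that are powered on and support screen projection."""
    return [d["name"] for d in devices
            if d.get("power") == "on" and d.get("cast_capable")]

devices = [
    {"name": "smart-tv", "power": "on",  "cast_capable": True},
    {"name": "speaker",  "power": "off", "cast_capable": True},
    {"name": "fridge",   "power": "on",  "cast_capable": False},
]
```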
In one possible implementation, the method further includes: receiving a first user operation, wherein the first user operation is used for selecting a screen throwing device; and determining the second equipment from the screen throwing equipment according to the first user operation.
According to the method, required screen throwing equipment can be flexibly selected from the equipment capable of throwing the screen according to user operation, and then cross-network screen throwing is achieved through cloud equipment.
In one possible implementation, the method further includes: and determining the screen-throwing media content according to the state information, wherein the screen-throwing media content is the screen-throwing content matched with the media type supported by the second equipment in the plurality of screen-throwing contents.
It is understood that the media types supported by different electronic devices are different. For example, a smart speaker supports playing audio but not video, while a smart television supports playing audio and video. Therefore, the screen-throwing media content can be determined according to the media types supported by the screen throwing equipment, so that the user is prevented from selecting the media types which are not supported by the screen throwing equipment, and the user experience is improved.
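Matching content to device capability can be sketched the same way (field names are assumptions):

```python
def matching_content(contents: list, device_caps: dict) -> list:
    """Keep only the contents whose media type the target device supports."""
    return [c["title"] for c in contents if device_caps.get(c["type"], False)]

contents = [{"title": "movie", "type": "video"},
            {"title": "song",  "type": "audio"}]
smart_speaker = {"video": False, "audio": True}  # audio only
smart_tv = {"video": True, "audio": True}        # audio and video
```

This prevents, for example, offering a video to a smart speaker that can only play audio.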
In one possible implementation, the method further includes: receiving a second user operation, wherein the second user operation is used for selecting screen-casting content; and determining the content to be screen-projected from the screen-projected media content according to the second user operation.
According to the method, the content to be screen-projected can be flexibly selected from the screen-projected media content according to the operation of the user, and then the cross-network screen projection is realized through the cloud device.
In one possible implementation, the method further includes: receiving the URI of the to-be-projected media content sent by a third device.
It can be seen that the method provided by the embodiment of this application not only enables the second device to project media content played by the first device, but can also obtain the network address of media content played by a third device, so that the second device projects the media content played by the third device, further improving user experience.
In one possible implementation, the method further includes: receiving a play message sent by the second equipment, wherein the play message is used for indicating to play media content to be screened; establishing a media transmission channel with the second device; and sending the media content to be screened to the second equipment through the media transmission channel.
It can be seen that, in the method provided by the embodiment of this application, the second device can access the media server through the network address of the to-be-projected media content to obtain that content, and can also play a mirror image of the sending device's screen or of resources on that device (that is, mirror projection) through the established media transmission channel.
Optionally, the media transmission channel may be a media transmission channel of the real-time streaming protocol (RTSP), of quick UDP internet connections (QUIC), or of KCP (a fast, reliable ARQ-based transport protocol).
Compared with the prior art, in which Miracast (a wireless display standard) is used to establish an RTSP media transmission channel to transmit the to-be-projected media content, the screen projection method provided by the embodiment of this application can transmit the to-be-projected media content not only by establishing an RTSP media transmission channel but also by establishing a QUIC or KCP media transmission channel, thereby improving the reliability and anti-interference performance of the screen projection method.
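With three possible channel protocols, a sender could simply fall back across them. The preference order below is an assumption for illustration, not taken from the patent:

```python
PREFERRED_CHANNELS = ["QUIC", "KCP", "RTSP"]  # assumed preference order

def pick_channel(supported_by_peer: set) -> str:
    """Choose the first mutually supported media transmission channel."""
    for channel in PREFERRED_CHANNELS:
        if channel in supported_by_peer:
            return channel
    raise RuntimeError("no common media transmission channel")
```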
In one possible implementation, the method further includes: and performing compression operation on the media content to be screen-projected, wherein the compression operation comprises at least one of video compression, audio compression or picture compression.
It can be appreciated that by performing compression operation on the media content to be screened, the data volume in the transmission process of the screened media content can be reduced, thereby reducing the time delay and bandwidth cost of screened media content.
For example, video compression may be performed on the to-be-projected media content, compressing the video content into the H.264 format (a digital video compression format).
For another example, audio compression may be performed on the to-be-projected media content, compressing the audio content into advanced audio coding (AAC) format.
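The compression step maps each stream in the to-be-projected content to the format named above; a trivial sketch (picture compression is mentioned but its format is not specified, so it is omitted here):

```python
CODECS = {"video": "H.264", "audio": "AAC"}

def compression_plan(stream_types: list) -> dict:
    """Map each media stream in the to-be-projected content to its target codec."""
    return {s: CODECS[s] for s in stream_types if s in CODECS}

plan = compression_plan(["video", "audio"])
```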
In a third aspect, an embodiment of the present application provides a screen projection device, including: a receiving unit and a transmitting unit. The receiving unit is configured to receive a first packet sent by a first device, where the first packet is used to instruct a second device to perform a screen-throwing operation or obtain a device state of the second device, the first packet includes a JSON string and URL information, the JSON string includes application layer control information, the application layer control information includes a control information field and a state information field, and the URL information includes a keyword of the first protocol. The sending unit is configured to send the first packet to the second device.
In one possible implementation, the control information field includes a screen projection instruction field, where the screen projection instruction field includes a URI of the media content to be projected.
In one possible implementation, the status information field includes a switch status field, a screen projection switch status field, a video capability field, an audio capability field, and a playing-media-content URI field.
In a possible implementation manner, the first packet further includes transport layer information, where the transport layer information is used to indicate that a transport layer of the first packet is TCP, TLS, UDP or DTLS.
In a possible implementation manner, the first message further includes network layer information, where the network layer information is used to indicate that a network layer of the first message is IPv4 or IPv6.
In one possible implementation manner, the first packet further includes data link layer information, where the data link layer information is used to indicate that the data link layer of the first packet is IEEE 802.11 or 802.3.
In one possible implementation manner, the first packet further includes physical layer information, where the physical layer information is used to indicate that the physical layer of the first packet is Wi-Fi, ethernet, bluetooth or ZigBee.
In a possible implementation, the receiving unit is further configured to: receiving authentication information sent by electronic equipment, wherein the electronic equipment comprises first equipment and/or second equipment, and the authentication information comprises at least one of a certificate, a License or a personal identification number PIN of a user; and carrying out security authentication according to the authentication information.
In a possible implementation manner, the sending unit is further configured to: and sending the cloud device authentication information to the electronic device.
In a fourth aspect, an embodiment of the present application provides a screen projection device, including: and a transmitting/receiving unit. The receiving and transmitting unit is configured to send a first packet to a second device, where the first packet is used to instruct the second device to perform a screen-throwing operation or obtain a device state of the second device, the first packet includes a JSON string and URL information, the JSON string includes application layer control information, the application layer control information includes a control information field and a state information field, and the URL information includes a keyword of the first protocol.
In a possible implementation manner, the transceiver unit is further configured to: receive status information sent by at least one electronic device, where the at least one electronic device includes the second device. The status information includes at least one of switch status information, screen projection switch status information, video capability information, audio capability information, or playing-media-content URI information. The switch status indicates whether the device is powered on; the screen projection capability information indicates whether the device supports screen projection; the video capability information indicates whether the device supports playing video; the audio capability information indicates whether the device supports playing audio; and the playing-media-content URI information indicates the URI of the media content currently played by the device.
In one possible implementation, the apparatus further includes a processing unit. The processing unit is configured to determine the projection-capable devices according to the status information, where a projection-capable device is an electronic device, among the at least one electronic device, that supports screen projection and is powered on.
In a possible implementation manner, the transceiver unit is further configured to: a first user operation is received, the first user operation being for selecting a screen-casting device.
In a possible implementation, the processing unit is further configured to: and determining the second equipment from the screen throwing equipment according to the first user operation.
In a possible implementation, the processing unit is further configured to: and determining the screen-throwing media content according to the state information, wherein the screen-throwing media content is the screen-throwing content matched with the media type supported by the second equipment in the plurality of screen-throwing contents.
In a possible implementation manner, the transceiver unit is further configured to: and receiving a second user operation, wherein the second user operation is used for selecting the screen-throwing content.
In a possible implementation, the processing unit is further configured to: and determining the content to be screen-projected from the screen-projected media content according to the second user operation.
In a possible implementation manner, the transceiver unit is further configured to: and receiving the URI of the media content to be screened, which is sent by the third equipment.
In a possible implementation manner, the transceiver unit is further configured to: receiving a play message sent by the second equipment, wherein the play message is used for indicating to play media content to be screened; establishing a media transmission channel with the second device; and sending the media content to be screened to the second equipment through the media transmission channel.
In a possible implementation manner, the transceiver unit is further configured to: and performing compression operation on the media content to be screen-projected, wherein the compression operation comprises at least one of video compression, audio compression or picture compression.
In a fifth aspect, an embodiment of the present application further provides a screen projection device, where the screen projection device includes: at least one processor, when executing program code or instructions, implements the method described in the first aspect or any possible implementation thereof.
Optionally, the screening device may further comprise at least one memory for storing the program code or instructions.
In a sixth aspect, an embodiment of the present application further provides a chip, including: an input interface, an output interface, at least one processor. Optionally, the chip further comprises a memory. The at least one processor is configured to execute code in the memory, which when executed by the at least one processor, implements the method described in the first aspect or any possible implementation thereof.
Alternatively, the chip may be an integrated circuit.
In a seventh aspect, embodiments of the present application further provide a computer readable storage medium storing a computer program comprising instructions for implementing the method described in the first aspect or any possible implementation thereof.
In an eighth aspect, embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to implement the method as described in the first aspect or any possible implementation thereof.
The screen projection device, the computer storage medium, the computer program product and the chip provided in this embodiment are all used for executing the method provided above, so that the beneficial effects achieved by the screen projection device, the computer storage medium, the computer program product and the chip can refer to the beneficial effects in the method provided above, and are not repeated herein.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a communication system according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of a screen projection method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a screen projection device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of another screen projection device according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a chip according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the embodiments of the present application.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist. For example, A and/or B may represent: A exists alone, both A and B exist, or B exists alone.
The terms "first" and "second" and the like in the description of embodiments of the application and in the drawings are used for distinguishing between different objects or between different processes of the same object and not for describing a particular order of objects.
Furthermore, references to the terms "comprising" and "having" and any variations thereof in the description of embodiments of the present application are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be noted that in the description of the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more.
First, terms related to the embodiments of the present application will be explained.
Configurator: a logic entity that configures an application terminal and connects it to the local network; it may be integrated in applications on devices such as smart phones, smart televisions, smart speakers, and smart routers.
Controller: in a smart home environment, an entity that comprehensively manages or controls all home application terminals locally or remotely. It mainly converts a user's operation or control behavior into actual instruction signals, coordinates the intelligent application service resources of the cloud service platform, and issues them to the application terminals so that they execute specific operations. It may include a smart home application (app), a smart speaker, a large screen, and the like.
Application terminal: in a smart home system, an electronic, information-based product that is connected to the home network, can execute interactive instructions from the control terminal, and meets people's demand for intelligent applications in the living environment.
Access cloud: the smart home cloud platform that the application terminal actually accesses.
Device cloud: the smart home cloud platform preset in the application terminal when it leaves the factory.
The constrained application protocol (CoAP) is an application layer protocol based on a request/response interaction model. Accessible resources are uniformly located using a uniform resource locator (URL), and clients access specific server resources through the URL of a resource. The CoAP protocol is summarized as follows:
The "Ver" of the CoAP protocol is the version number, indicating the version of the CoAP protocol, similar to the version numbers of HTTP/1.0 and HTTP/1.1. The version number occupies 2 bits and takes the value 01B.
The "T" of the CoAP protocol is the message type; the CoAP protocol defines 4 different types of messages: CON, NON, ACK, and RST.
The "TKL" of the CoAP protocol is the CoAP identifier length. The CoAP protocol has two functionally similar identifiers: the Message ID (message number) and the Token (identifier). Every message contains a message number, but a Token is not mandatory in a message.
The "Code" of the CoAP protocol is the function code/response code. The Code takes different forms in CoAP request and response messages. It occupies one byte and is divided into two parts, the first 3 bits and the last 5 bits, written in the c.dd form for convenience of description. Here 0.XX represents a CoAP request method, and 2.XX, 4.XX, or 5.XX represents a specific form of CoAP response.
The "Message ID" of the CoAP protocol is the message number.
The "Token" of the CoAP protocol is the specific content of the identifier; the Token length is specified by TKL.
The "Option" of the CoAP protocol is a message option, through which the CoAP host, the CoAP URI, CoAP request parameters, the payload media type, and the like can be set.
The "11111111" (0xFF) of the CoAP protocol is the separator between the CoAP header and the specific payload.
The "Payload" of the CoAP protocol is the payload field of the message, supporting the JSON string format.
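As a purely illustrative sketch (not part of the claimed method), the fixed CoAP header described above (Ver, T, TKL, Code, Message ID) can be packed as follows:

```python
import struct

# CoAP message types (T field)
CON, NON, ACK, RST = 0, 1, 2, 3

def encode_coap_header(msg_type: int, tkl: int, code_class: int,
                       code_detail: int, message_id: int) -> bytes:
    """Pack the 4-byte fixed CoAP header: Ver(2b) | T(2b) | TKL(4b) | Code(8b) | Message ID(16b)."""
    ver = 0b01                               # CoAP version number, always 01B
    first = (ver << 6) | (msg_type << 4) | tkl
    code = (code_class << 5) | code_detail   # c.dd form: class in top 3 bits, detail in low 5
    return struct.pack("!BBH", first, code, message_id)

# A CON request with code 0.01 (GET), no token, message ID 0x1234
header = encode_coap_header(CON, 0, 0, 1, 0x1234)
print(header.hex())  # -> 40011234
```

A response code such as 2.05 packs its class (2) into the top 3 bits of the Code byte, matching the c.dd notation above.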
Message queue telemetry transport (Message Queuing Telemetry Transport, MQTT) is an application layer protocol, and the MQTT message protocol is summarized as follows:
Fixed Header: MQTT messages come in many types, such as connect, publish, subscribe, and heartbeat. The fixed header is mandatory: all types of MQTT messages must contain a fixed header.
Variable Header: "variable" does not mean optional; rather, it means that this part is present in some message types and absent in others.
The "Payload" of the MQTT message protocol is the message carrier, i.e., the message content. As with the variable header, some message types carry message content and some do not.
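For illustration only, the mandatory MQTT fixed header (a packet-type/flags byte followed by a variable-length Remaining Length) can be sketched as:

```python
def encode_mqtt_fixed_header(packet_type: int, flags: int, remaining_length: int) -> bytes:
    """Build the mandatory MQTT fixed header: type/flags byte + variable-length Remaining Length."""
    out = bytearray([(packet_type << 4) | flags])
    # Remaining Length encoding: 7 bits per byte, MSB set when more bytes follow
    while True:
        byte, remaining_length = remaining_length % 128, remaining_length // 128
        if remaining_length:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

CONNECT, PUBLISH = 1, 3   # two of the MQTT packet types
print(encode_mqtt_fixed_header(PUBLISH, 0, 321).hex())  # -> 30c102 (321 encodes as 0xC1 0x02)
```

The variable header and payload, when present for the given packet type, are appended after this fixed header.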
The object model is a digital description of a product. It defines the functions of the product, abstracting and generalizing the functions of products of different brands and categories into a standard object model, so that all parties can describe, control, and understand product functions in a unified language.
Screen projection refers to synchronizing media content played on one electronic device (such as a mobile phone, tablet, or computer) to another electronic device for playing and viewing. For example, multimedia content or a game interface played on a small-screen device (such as a mobile phone) can be projected to a large-screen device (such as a smart television) for playing; using the display screen of the large-screen device for playback can provide a better user experience.
In the related screen-throwing method, the screen-throwing device and the screen-throwing end are required to be in the same local area network, and screen throwing across networks cannot be realized.
Therefore, an embodiment of the present application provides a screen projection method that can realize cross-network screen projection. The method is applicable to a communication system. Fig. 1 shows one possible form of the communication system.
The communication system as shown in fig. 1 comprises a cloud device 10 and a plurality of electronic devices, including a first device 20 and a second device 30.
The cloud device may be communicatively connected to the electronic devices via Ethernet, a wireless local area network (WLAN), Bluetooth, ZigBee, a programmable logic controller (PLC), or other means.
The plurality of electronic devices may be communicatively connected to one another via Ethernet, a wireless local area network, Bluetooth, ZigBee, a programmable logic controller, or other means.
The cloud device may be an access cloud device or a device cloud device.
The above electronic devices may be mobile phones, tablet computers, televisions (smart televisions), set-top boxes (STBs), smart speakers, game consoles, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, or personal digital assistants (PDAs).
In one possible implementation, the plurality of electronic devices may further include a third device.
Fig. 2 shows a screen projection method provided by an embodiment of the present application, where the method is applicable to the above communication system, and the method includes:
S201, the cloud device receives a first message sent by the first device.
The first message is used to instruct the second device to perform a screen-throwing operation or to obtain the device state of the second device.
The cloud device may receive, for example, a first message of a first protocol sent by the first device through an ethernet network, a wireless local area network, a bluetooth network, a zigbee network, a programmable logic controller network, or other network.
Also for example, the cloud device may receive a first message forwarded by the first device through the local control center.
In one possible implementation, the first message includes a JavaScript object notation (JSON) string and URL information. The JSON string includes application layer control information, which includes a control information field and a status information field. The URL information includes a keyword of the first protocol.
The first message may be an application layer message superimposed on the CoAP protocol, for example. Wherein, the URL information may be filled in an Option field of the CoAP protocol. The JSON string may be populated at the Payload field of the CoAP protocol.
In another possible implementation, the first message may include a variable header (Variable Header) and a message carrier (Payload), where the variable header includes a keyword of the first protocol and the message carrier includes the control information field and the status information field.
The first message may be an application layer message superimposed on the MQTT protocol, for example. The key of the first protocol may be filled in a Variable header field of the MQTT protocol, and the control information field and the status information field may be filled in a Payload field of the MQTT protocol.
The control information field comprises a screen throwing instruction field, and the screen throwing instruction field comprises a URI of the media content to be thrown.
According to the embodiment of the application, the URI of the media content to be screened sent by the screening device can be forwarded to the screened end through the cloud device, so that screening can be realized across networks.
The status information fields include a switch status field, a screen switch status field, a video capability field, an audio capability field, and a play media content URI field.
It can be seen that, in the embodiment of the present application, the on-off status field, the video capability field, the audio capability field, the playing media content URI field and other fields sent by the screen-throwing device may be forwarded to the screen-throwing end by the cloud device to obtain the status of the screen-throwing end, so as to realize screen throwing across networks.
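As a purely illustrative sketch, the application-layer JSON string of a first message might combine the control information field and the status information field along the following lines; all field names here are assumptions for illustration, loosely echoing the data-model fields described later:

```python
import json

# Hypothetical payload of a first message: a screen-throwing instruction plus
# the status fields being queried. Every field name is an illustrative assumption.
first_message = {
    "control": {
        "screenCast": {
            # URI of the media content to be screened
            "uri": "http://media.example.com/video.mp4"
        }
    },
    "status": ["switch", "castSwitch", "videoCap", "audioCap", "playUri"],
}
payload = json.dumps(first_message)  # JSON string carried in the CoAP/MQTT Payload field
print(payload)
```

Such a string would be filled into the Payload field of the CoAP or MQTT message, with the first-protocol keyword carried in the Option field or variable header respectively, as described above.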
Optionally, the first packet further includes transport layer information, where the transport layer information is used to indicate that a transport layer of the first packet is a transmission control protocol (Transmission Control Protocol, TCP), a secure transport layer protocol (Transport Layer Security, TLS), a user datagram protocol (User Datagram Protocol, UDP), or a packet transport layer security protocol (Datagram Transport Layer Security, DTLS).
Optionally, the first message further includes network layer information, where the network layer information is used to indicate that a network layer of the first message is an internet protocol version IPv4 or IPv6.
Optionally, the first message further includes data link layer information, where the data link layer information is used to indicate that the data link layer of the first message is Institute of Electrical and Electronics Engineers (IEEE) 802.11 or 802.3.
Optionally, the first message further includes physical layer information, where the physical layer information is used to indicate that the physical layer of the first message is wireless fidelity (Wi-Fi), Ethernet, Bluetooth, or ZigBee.
It is understood that the digital living network alliance (DLNA) protocol supports screen throwing only over internet protocol (IP) channels, whereas the first protocol supports IP, PLC, Bluetooth, and other transmission channels, and more transmission channels can be extended. In addition, the DLNA protocol is a local area network protocol and does not support remote control, and by default it does not support device authentication, so there is a potential safety hazard of illegal control. The first protocol supports remote control and device authentication, improving user experience and security compared with the DLNA protocol.
S202, the cloud device sends a first message to the second device.
The cloud device may send the first message to the second device via an ethernet, a wireless local area network, a bluetooth network, a zigbee network, a programmable logic controller network, or other network, for example.
Correspondingly, after receiving the first message, the second device may perform a screen-throwing operation according to the control information field in the first message to play the screen-thrown media content, or report the corresponding state information according to the state information field in the first message.
In the related screen-throwing method, the screen-throwing device and the screen-throwing end need to be in the same local area network for point-to-point screen throwing, and screen throwing across networks cannot be realized. In the method provided by the embodiment of the application, the cloud device can receive the screen-throwing message (i.e., the first message) sent by the screen-throwing device (i.e., the first device) and forward it to the screen-throwing end (i.e., the second device); the screen-throwing device and the screen-throwing end do not need to be in the same local area network, and screen throwing across networks can be realized through the cloud device.
In one possible implementation, the method may further include:
S203, the cloud device receives authentication information sent by the electronic device.
Wherein the electronic device comprises the first device and/or the second device, and the authentication information comprises at least one of a certificate, a License, or a user personal identification number (PIN).
For example, the electronic device may establish a hotspot network (such as a Wi-Fi hotspot, a Bluetooth network, or a ZigBee network), receive, from another device accessing the hotspot network, access-cloud information and home local area network information (such as the Wi-Fi name, password, network address information, or port number), and then send a registration request to the cloud device according to the access-cloud information. The registration request is used for requesting access to the first protocol network, and may include the authentication information of the electronic device.
S204, the cloud device performs security authentication according to the authentication information.
The specific method for performing the security authentication according to the authentication information may be any method that can be conceived by those skilled in the art, and the embodiment of the present application is not limited thereto.
S205, the cloud device sends the cloud device authentication information to the electronic device.
For example, the cloud device may send a registration result carrying authentication information of the cloud device to the electronic device.
It should be noted that, before establishing a connection with the electronic device, the cloud device can prevent connections from unauthenticated devices by performing security authentication using the authentication information of the electronic device, thereby ensuring the security of the connection.
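The exchange of S203 to S205 can be sketched, under heavy assumptions (a toy certificate store and PIN check standing in for whatever real authentication scheme is used), as:

```python
# Toy cloud-side authentication for S203-S205. The certificate store, PIN,
# and returned field names are illustrative assumptions, not the actual scheme.
TRUSTED_DEVICE_CERTS = {"device-cert-001"}
EXPECTED_PIN = "123456"  # assumed to be shared with the user out of band

def cloud_authenticate(auth_info: dict) -> dict:
    """Verify the device's certificate, License, or PIN; on success, return
    a registration result carrying the cloud device's own authentication information."""
    ok = (auth_info.get("certificate") in TRUSTED_DEVICE_CERTS
          or auth_info.get("license") == "valid-license"
          or auth_info.get("pin") == EXPECTED_PIN)
    if not ok:
        return {"result": "rejected"}
    return {"result": "registered", "cloudAuth": "cloud-cert-abc"}

print(cloud_authenticate({"certificate": "device-cert-001"})["result"])  # -> registered
```

The embodiment leaves the specific authentication algorithm open, so any standard certificate or PIN verification could stand in for the placeholder checks here.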
The embodiment of the application also provides another screen projection method, which is suitable for the communication system, and comprises the following steps:
S301, the first device sends a first message to the second device.
The first message is used for indicating the second device to perform screen throwing operation or obtaining the device state of the second device, the first message comprises a JSON character string and URL information, the JSON character string comprises application layer control information, the application layer control information comprises a control information field and a state information field, and the URL information comprises a keyword of the first protocol.
Illustratively, the first device may send the first message to the second device via an ethernet, wireless local area network, bluetooth network, zigbee network, programmable logic controller network, or other network.
Also illustratively, the first device may send the first message to the second device via the cloud device.
In the method provided by the embodiment of the application, the cloud device can receive the screen-throwing message (i.e., the first message) sent by the screen-throwing device (i.e., the first device) and forward it to the screen-throwing end (i.e., the second device); the screen-throwing device and the screen-throwing end do not need to be in the same local area network, and screen throwing across networks can be realized through the cloud device.
The method may further comprise:
S302, the first device receives state information sent by at least one electronic device.
Wherein the at least one electronic device includes the second device, and the status information includes at least one of switch status information, screen-throwing switch status information, video capability information, audio capability information, or playing media content URI information.
The switch status information is used to indicate whether the device is turned on.
The screen-throwing switch status information is used to indicate whether the device supports screen throwing.
The video capability information is used to indicate whether the device supports playing video.
The audio capability information is used to indicate whether the device supports playing audio.
The playing media content URI information is used to indicate the URI of the media content currently played by the device.
It can be seen that the switch status information, screen-throwing switch status information, video capability information, audio capability information, and playing media content URI information of the electronic device can be obtained as references for subsequently determining the screen-throwing content and the screen-throwing device.
The first device then determines screen-throwing devices according to the status information, where a screen-throwing device is an electronic device, among the at least one electronic device, that supports screen throwing and is turned on.
It can be understood that the device supporting the screen projection and in the on state can be projected, so that the device capable of projecting the screen can be determined from the electronic device through the state information of the electronic device.
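This determination can be sketched as a simple filter over the reported status information; the dictionary field names below are illustrative assumptions:

```python
def screen_throwable(devices):
    """Keep devices that are turned on and whose screen-throwing switch is enabled."""
    return [d for d in devices if d["switch"] == "on" and d["castSwitch"] == "on"]

reported = [
    {"name": "smart TV",      "switch": "on",  "castSwitch": "on"},
    {"name": "smart speaker", "switch": "off", "castSwitch": "on"},   # powered off: excluded
    {"name": "tablet",        "switch": "on",  "castSwitch": "off"},  # casting disabled: excluded
]
print([d["name"] for d in screen_throwable(reported)])  # -> ['smart TV']
```

The resulting list is what the first device would present to the user for selection in the next step.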
S303, the first device receives a first user operation and determines the second device from the screen throwing device according to the first user operation.
For example, the first device may present a list of the screenable devices on the user interface, and then determine the second device from the screenable devices presented in the list of the screenable devices according to a user selection.
The first user operation is used for selecting the screen throwing device.
According to the method, required screen throwing equipment can be flexibly selected from the equipment capable of throwing the screen according to user operation, and then cross-network screen throwing is achieved through cloud equipment.
S304, the first equipment determines the screen-throwing media content according to the state information.
The screen-throwing media content is a screen throwing content matched with the media type supported by the second device in a plurality of screen throwing contents.
It is understood that the media types supported by different electronic devices are different. For example, a smart speaker supports playing audio but not video, while a smart television supports playing audio and video. Therefore, the screen-throwing media content can be determined according to the media types supported by the screen throwing equipment, so that the user is prevented from selecting the media types which are not supported by the screen throwing equipment, and the user experience is improved.
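The matching in S304 can be sketched, again with illustrative field names, as filtering the candidate screen-throwing contents by the media types the second device supports:

```python
def match_media(contents, supported_types):
    """Keep only the contents whose media type the target device supports."""
    return [c for c in contents if c["type"] in supported_types]

contents = [
    {"title": "podcast episode", "type": "audio"},
    {"title": "movie trailer",   "type": "video"},
]
# A smart speaker supports audio only; a smart television supports audio and video.
print([c["title"] for c in match_media(contents, {"audio"})])           # -> ['podcast episode']
print([c["title"] for c in match_media(contents, {"audio", "video"})])  # -> ['podcast episode', 'movie trailer']
```

This prevents the user from being offered content the selected screen-throwing device cannot play.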
S305, the first device receives a second user operation and determines the content to be screened from the screen-screened media content according to the second user operation.
Wherein the second user operation is used for selecting the screen content.
For example, if the second device supports the audio and video media types, the first device may, according to the user's search operation, match and display the audio media content and video media content that the user wants from the screenable media content, and then determine the content to be screened from the displayed content according to the user's selection.
According to the method, the content to be screen-projected can be flexibly selected from the screen-projected media content according to the operation of the user, and then the cross-network screen projection is realized through the cloud device.
S306, the first device receives the URI of the media content to be screened, which is sent by the third device.
It can be seen that, the method provided by the embodiment of the application not only can enable the second device to throw the screen of the media content played by the device, but also can obtain the network address of the media content played by the third device, so that the second device throws the screen of the media content played by the third device, and the user experience is further improved.
For example, a teacher may use an electronic device (i.e., a third device) to send a network address of an educational video to a mobile phone (i.e., a first device) of a parent, after the mobile phone of the parent receives the network address of the educational video, the parent may establish a communication connection with a smart television (i.e., a second device) at home through the mobile phone, and then may use the mobile phone to send a first message to the smart television to instruct the smart television to play the educational video, so that the television at home plays the educational video to learn for children at home, and the children are prevented from watching television at will.
S307, the first device receives the play message sent by the second device.
The play message is used for indicating to play the media content to be screened.
It will be appreciated that in some scenarios the network address of the media content to be screened may point to the first device, and thus the first device may receive the play message sent by the second device. For example, in a screen-mirroring scenario, the first device needs to mirror its current screen picture to the second device, so the first device may receive a play message sent by the second device requesting the media content to be screened.
S308, the first device establishes a media transmission channel with the second device and sends the media content to be screened to the second device through the media transmission channel.
It can be seen that, in the method provided by the embodiment of the application, the second device can access the media server through the network address of the media content to be screened to obtain the media content to be screened, and can also play the mirror image picture (i.e. mirror image screen projection) of the device or the resource in the device through establishing the media transmission channel.
Alternatively, the media transmission channel may be a media transmission channel of the real-time streaming protocol (Real Time Streaming Protocol, RTSP), of the quick UDP internet connections (QUIC) protocol, or of the KCP fast transport protocol.
Compared with the prior art, in which Miracast (a wireless display standard) is adopted to establish an RTSP media transmission channel to transmit the media content to be screened, the screen-throwing method provided by the embodiment of the application can transmit the media content to be screened not only over an RTSP media transmission channel but also over a QUIC or KCP media transmission channel, thereby improving the reliability and anti-interference performance of the screen-throwing method.
S309, the first device compresses the media content to be screened.
Wherein the compression operation includes at least one of video compression, audio compression, or picture compression.
It can be appreciated that by performing compression operation on the media content to be screened, the data volume in the transmission process of the screened media content can be reduced, thereby reducing the time delay and bandwidth cost of screened media content.
For example, video compression may be performed on the on-screen media content, compressing the video content in the on-screen media content into an H264 (a digital video compression format) format.
Also for example, the screen-throwing media content may be audio-compressed, with the audio content in the screen-throwing media content compressed into the advanced audio coding (AAC) format.
The embodiment of the application also provides a data model (namely an object model), which consists of five fields of equipment, service, attribute, method and event.
Device (device): referring to an application terminal, its functionality may be described by different sets of services.
Service (service): refers to an independent and meaningful set of functions that can be multiplexed between different types of application terminals.
Attribute (property): the state and minimum unit of functions of the application terminal are described.
Method (action): to implement a particular function of a service, such a function cannot be accomplished by reading and writing of a single attribute.
Event (event): and the application terminal actively reports the specific information.
In the data model provided by the embodiment of the application, a defined data model of a screen-throwing service is newly added, as shown in table 1:
TABLE 1
The number in an attribute/method/event item of Table 1 is used to indicate the numbering of the attribute/method/event. For example, the number 2 may be defined as beginning an attribute field and the number 3 as beginning a method field.
The property.allowancy field in table 1, i.e. the above-mentioned screen switch status field, is used to indicate whether the device supports screen casting and whether the device allows screen casting.
The property.videoCap in Table 1, i.e., the video capability field, is used to indicate whether the device supports playing video and the video formats supported by the device.
The property.audioCap in Table 1, i.e., the audio capability field, is used to indicate whether the device supports playing audio and the audio formats supported by the device.
The property.playUri in Table 1, i.e., the play media content URI field, is used to indicate the URI of the media content being played by the device.
The action.screenCast in Table 1, i.e., the screen-throwing instruction field, is used to instruct the device to perform a screen-throwing operation and includes the URI of the media content to be screened.
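Pulling the Table 1 fields together, a screen-throwing service in the data model might be described as follows; the exact spelling and structure are assumptions reconstructed from the field descriptions above, since the table body itself is not reproduced here:

```python
import json

# Illustrative object-model fragment for the screen-throwing service.
# Field names follow the property/action descriptions above; details are assumed.
screen_cast_service = {
    "service": "screenCast",
    "properties": {
        "allowCast": {"type": "bool"},    # whether the device supports/allows screen throwing
        "videoCap":  {"type": "list"},    # supported video formats (empty if video unsupported)
        "audioCap":  {"type": "list"},    # supported audio formats
        "playUri":   {"type": "string"},  # URI of the media content currently playing
    },
    "actions": {
        "screenCast": {"in": ["uri"]},    # screen-throwing instruction: URI of media to throw
    },
}
print(json.dumps(screen_cast_service, indent=2))
```

Describing the service this way lets configurators, controllers, and application terminals exchange capability and control information in the unified language the object model is meant to provide.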
A screen projection device for performing the screen projection method described above will be described with reference to fig. 3.
It will be appreciated that the screen projection device, in order to achieve the above-described functions, includes corresponding hardware and/or software modules for performing the respective functions. The various example algorithm steps described in connection with the embodiments disclosed herein may be embodied as hardware or as a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the application may divide the screen projection device into functional modules according to the above method examples. For example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware. It should be noted that the division of modules in this embodiment is schematic and is merely a logical function division; other division manners may be used in actual implementation.
In the case of dividing the respective functional modules by the respective functions, fig. 3 shows a schematic diagram of one possible composition of the screen projection device involved in the above embodiment, and as shown in fig. 3, the screen projection device 300 may include: a receiving unit 301 and a transmitting unit 302.
The receiving unit 301 is configured to receive a first message sent by the first device, where the first message is used to instruct the second device to perform a screen-throwing operation or to obtain the device state of the second device, the first message includes a JSON string and URL information, the JSON string includes application layer control information, the application layer control information includes a control information field and a state information field, and the URL information includes a keyword of the first protocol.
The sending unit 302 is configured to send the first message to the second device.
In one possible implementation, the control information field includes a screen-throwing instruction field, where the screen-throwing instruction field includes a URI of the media content to be screened.
In one possible implementation, the status information field includes a switch status field, a drop switch status field, a video capability field, an audio capability field, and a play media content URI field.
In one possible implementation manner, the first packet further includes transport layer information, where the transport layer information is used to indicate that a transport layer of the first packet is TCP, TLS, UDP or DTLS.
In one possible implementation manner, the first packet further includes network layer information, where the network layer information is used to indicate that a network layer of the first packet is IPv4 or IPv6.
In one possible implementation manner, the first packet further includes data link layer information, where the data link layer information is used to indicate that the data link layer of the first packet is IEEE 802.11 or 802.3.
In one possible implementation manner, the first packet further includes physical layer information, where the physical layer information is used to indicate that the physical layer of the first packet is Wi-Fi, ethernet, bluetooth or ZigBee.
In a possible implementation manner, the receiving unit 301 is further configured to: receiving authentication information sent by electronic equipment, wherein the electronic equipment comprises first equipment and/or second equipment, and the authentication information comprises at least one of a certificate, a License or a personal identification number PIN of a user; and carrying out security authentication according to the authentication information.
In one possible implementation manner, the sending unit 302 is further configured to: and sending the cloud device authentication information to the electronic device.
In the case of dividing the respective functional modules by the respective functions, fig. 4 shows another possible composition diagram of the screen projection device involved in the above embodiment, and as shown in fig. 4, the screen projection device 400 may include: a transceiver unit 401 and a processing unit 402.
The transceiver unit 401 is configured to send a first message to the second device, where the first message is used to instruct the second device to perform a screen-throwing operation or to obtain the device state of the second device, the first message includes a JSON string and URL information, the JSON string includes application layer control information, the application layer control information includes a control information field and a state information field, and the URL information includes a keyword of the first protocol.
In one possible implementation manner, the transceiver unit 401 is further configured to: receiving state information sent by at least one electronic device, where the at least one electronic device includes the second device, where the state information includes at least one of switch state information, screen-on switch state information, video capability information, audio capability information, or playing media content URI information, where the switch state is used to indicate whether the device is turned on, the screen-on capability information is used to indicate whether the device supports screen-on, the video capability information is used to indicate whether the device supports playing video, the audio capability information is used to indicate whether the device supports playing audio, and the playing media content URI information is used to indicate a URL of a media content currently played by the device.
In one possible implementation, the processing unit 402 is configured to determine a screen-throwing device according to the state information, where the screen-throwing device is an electronic device, in the at least one electronic device, that supports screen throwing and is turned on.
In one possible implementation manner, the transceiver unit 401 is further configured to receive a first user operation, where the first user operation is used to select the screen-throwing device.
In one possible implementation, the processing unit 402 is further configured to determine the second device from the screen-throwing devices according to the first user operation.
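The device-selection rule above (keep devices that are turned on and support screen throwing, then let a user operation pick the second device) can be sketched as follows; the field names reuse the assumed status schema and are not defined by the patent.

```python
def select_cast_devices(devices):
    """Keep only devices that are powered on and support screen throwing,
    mirroring the selection rule described above (field names assumed)."""
    return [d for d in devices if d.get("power_on") and d.get("cast_enabled")]

devices = [
    {"name": "tv",      "power_on": True,  "cast_enabled": True},
    {"name": "speaker", "power_on": True,  "cast_enabled": False},
    {"name": "tablet",  "power_on": False, "cast_enabled": True},
]
candidates = select_cast_devices(devices)
# A first user operation would then pick the "second device" from the candidates;
# here we simply take the first one.
second_device = candidates[0]
```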
In one possible implementation, the processing unit 402 is further configured to determine screen-throwing media content according to the state information, where the screen-throwing media content is the screen-throwing content, among a plurality of screen-throwing contents, that matches a media type supported by the second device.
In one possible implementation manner, the transceiver unit 401 is further configured to receive a second user operation, where the second user operation is used to select the screen-throwing content.
In one possible implementation, the processing unit 402 is further configured to determine, according to the second user operation, content to be screen-projected from the screen-throwing media content.
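The media-type matching step above can be illustrated with a short filter: content is cast-eligible only if the second device reports the corresponding capability. The `type` values and capability flags are assumptions consistent with the status fields described earlier.

```python
def match_castable_content(contents, device):
    """Return the subset of contents whose media type the second device
    supports, per the matching step above (schema is an assumption)."""
    supported = set()
    if device.get("video_supported"):
        supported.add("video")
    if device.get("audio_supported"):
        supported.add("audio")
    return [c for c in contents if c["type"] in supported]

contents = [
    {"uri": "a.mp4", "type": "video"},
    {"uri": "b.mp3", "type": "audio"},
    {"uri": "c.jpg", "type": "picture"},
]
device = {"video_supported": True, "audio_supported": False}
castable = match_castable_content(contents, device)
# A second user operation would then choose the content to be screen-projected
# from this matched subset.
```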
In one possible implementation manner, the transceiver unit 401 is further configured to receive a URI, sent by a third device, of the media content to be screen-projected.
In one possible implementation manner, the transceiver unit 401 is further configured to: receive a play message sent by the second device, where the play message is used to indicate to play the media content to be screen-projected; establish a media transmission channel with the second device; and send the media content to be screen-projected to the second device through the media transmission channel.
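The play-then-stream sequence above can be sketched over loopback TCP: after the play message, a channel is opened and the media bytes are pushed through it. This is only a toy; the patent does not specify the transport, and a real implementation would negotiate the channel with a streaming protocol rather than send raw bytes.

```python
import socket
import threading
import time

def serve_media(payload, port_holder):
    """Toy media transmission channel: after a 'play' message is received,
    open a TCP channel and stream the content to the second device."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))       # OS picks a free port
    srv.listen(1)
    port_holder.append(srv.getsockname()[1])
    conn, _ = srv.accept()
    conn.sendall(payload)            # push the media content to the peer
    conn.close()
    srv.close()

ports = []
t = threading.Thread(target=serve_media, args=(b"MEDIA-BYTES", ports))
t.start()
while not ports:                     # wait until the channel is listening
    time.sleep(0.01)

# The "second device" side: connect and read until the sender closes.
cli = socket.create_connection(("127.0.0.1", ports[0]))
received = b"".join(iter(lambda: cli.recv(4096), b""))
cli.close()
t.join()
```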
In one possible implementation manner, the transceiver unit 401 is further configured to perform a compression operation on the media content to be screen-projected, where the compression operation includes at least one of video compression, audio compression, or picture compression.
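As a stand-in for the compression step above, the sketch below dispatches on media type. Real screen-casting stacks use codec-specific compression (for example H.264 for video, AAC for audio, JPEG for pictures); `zlib` is used here only to keep the example self-contained, and is not what the patent prescribes.

```python
import zlib

def compress_for_cast(content, media_type):
    """Sketch of the compression operation above. zlib stands in for the
    codec-specific video/audio/picture compression a real system would use."""
    if media_type in ("video", "audio", "picture"):
        return zlib.compress(content)
    return content  # unknown types pass through uncompressed

raw = b"frame-data" * 100
packed = compress_for_cast(raw, "video")
```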
The embodiment of the application also provides a chip. Fig. 5 shows a schematic structure of a chip 500. Chip 500 includes one or more processors 501 and interface circuitry 502. Optionally, the chip 500 may further include a bus 503.
The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above screen projection method may be implemented by an integrated logic circuit of hardware in the processor 501 or by instructions in software form.
Alternatively, the processor 501 may be a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods and steps disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The interface circuit 502 may be used for transmitting or receiving data, instructions, or information, and the processor 501 may process using the data, instructions, or other information received by the interface circuit 502 and may transmit processing completion information through the interface circuit 502.
Optionally, the chip further comprises a memory, which may include read-only memory and random access memory, and provides operating instructions and data to the processor. A portion of the memory may also include non-volatile random access memory (NVRAM).
Optionally, the memory stores executable software modules or data structures and the processor may perform corresponding operations by invoking operational instructions stored in the memory (which may be stored in an operating system).
Optionally, the chip may be used in the screen projection device according to an embodiment of the present application. Optionally, the interface circuit 502 may be used to output an execution result of the processor 501. For the screen projection method provided in one or more embodiments of the present application, refer to the foregoing embodiments; details are not described herein again.
It should be noted that, the functions corresponding to the processor 501 and the interface circuit 502 may be implemented by a hardware design, a software design, or a combination of hardware and software, which is not limited herein.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device 100 may be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a screen-throwing device, or a chip or a functional module in the screen-throwing device.
Fig. 6 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identification module (SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a nerve center and a command center of the electronic device 100. The controller may generate an operation control signal according to an instruction operation code and a timing signal, to complete control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus, and the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface, to implement a touch function of the electronic device 100. The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a touch sensor, a video codec, a GPU, a display screen 194, an application processor, and the like.
Wherein the ISP is used to process the data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. It should be understood that in the description of the embodiments of the present application, an image in an RGB format is used as an example, and the embodiments of the present application do not limit the image format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency, the digital signal processor is used to perform a Fourier transform on the frequency energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100. The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations performed on different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. The indicator 192 may be an indicator light, and may be used to indicate a state of charge, a change in charge, a missed call, a notification, etc. The SIM card interface 195 is used to connect a SIM card.
It should be noted that the electronic device 100 may be a chip system or a device having a structure similar to that in fig. 6. The chip system may be composed of a chip or may include a chip and other discrete devices. Acts, terms, and the like in the embodiments of the present application may be mutually referenced between embodiments without limitation. The message names of interactions between the devices, or the parameter names in the messages, in the embodiments of the present application are merely examples, and other names may be used in specific implementations without limitation. Further, the constituent structure shown in fig. 6 does not constitute a limitation of the electronic device 100; the electronic device 100 may include more or fewer components than those shown in fig. 6, may combine some components, or may arrange components differently.
The processors and transceivers described in this disclosure may be implemented on integrated circuits (ICs), analog ICs, radio frequency integrated circuits, mixed-signal ICs, application-specific integrated circuits (ASICs), printed circuit boards (PCBs), electronic devices, and the like. The processor and transceiver may also be fabricated using a variety of IC process technologies such as complementary metal-oxide-semiconductor (CMOS), N-type metal-oxide-semiconductor (NMOS), P-type metal-oxide-semiconductor (PMOS), bipolar junction transistor (BJT), bipolar CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), and the like.
The embodiment of the application also provides a screen throwing device, comprising at least one processor that, when executing program code or instructions, implements the relevant method steps to implement the screen projection method in the above embodiments.
Optionally, the apparatus may further comprise at least one memory for storing the program code or instructions.
The embodiment of the application also provides a computer storage medium, wherein computer instructions are stored in the computer storage medium, and when the computer instructions run on the screen projection device, the screen projection device is caused to execute the relevant method steps to realize the screen projection method in the embodiment.
The embodiment of the application also provides a computer program product, which when run on a computer, causes the computer to execute the relevant steps so as to realize the screen projection method in the embodiment.
The embodiment of the application also provides a screen projection device which can be a chip, an integrated circuit, a component or a module. In particular, the apparatus may comprise a processor coupled to a memory for storing instructions, or the apparatus may comprise at least one processor for retrieving instructions from an external memory. When the device is running, the processor can execute the instructions to cause the chip to execute the screen projection method in the above method embodiments.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The above functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or a part of the technical solution in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the above-described method of the various embodiments of the present application. And the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (37)

1. A screen projection method, comprising:
The cloud device receives a first message sent by a first device, wherein the first message is used for indicating a second device to perform a screen-casting operation or to obtain a device state of the second device, the first message comprises a JavaScript object notation (JSON) string and uniform resource locator (URL) information, the JSON string comprises application layer control information, the application layer control information comprises a control information field and a state information field, and the URL information comprises a keyword of a first protocol;
and the cloud device sends the first message to the second device.
2. The method of claim 1, wherein the control information field comprises a screen-on instruction field comprising a URI of the media content to be screen-on.
3. The method of claim 1 or 2, wherein the status information field comprises a switch status field, a screen-casting switch status field, a video capability field, an audio capability field, and a play media content URI field.
4. A method according to any one of claims 1 to 3, wherein the first message further comprises transport layer information indicating whether the transport layer of the first message is the transmission control protocol (TCP), the transport layer security protocol (TLS), the user datagram protocol (UDP), or the datagram transport layer security protocol (DTLS).
5. The method according to any one of claims 1 to 4, wherein the first message further comprises network layer information, the network layer information being used to indicate that the network layer of the first message is internet protocol version 4 (IPv4) or internet protocol version 6 (IPv6).
6. The method according to any one of claims 1 to 5, wherein the first message further comprises data link layer information indicating that the data link layer of the first message is institute of electrical and electronics engineers IEEE 802.11 or 802.3.
7. The method according to any one of claims 1 to 6, wherein the first message further comprises physical layer information, the physical layer information being used to indicate that the physical layer of the first message is wireless fidelity (Wi-Fi), Ethernet, Bluetooth, or ZigBee.
8. The method according to any one of claims 1 to 7, further comprising:
The cloud device receives authentication information sent by an electronic device, wherein the electronic device comprises the first device and/or the second device, and the authentication information comprises at least one of a certificate, a license, or a personal identification number (PIN) of a user;
the cloud device performs security authentication according to the authentication information;
and the cloud device sends the cloud device authentication information to the electronic device.
9. A screen projection method, comprising:
sending a first message to a second device, wherein the first message is used for indicating the second device to perform a screen-throwing operation or to obtain a device state of the second device, the first message comprises a JSON string and URL information, the JSON string comprises application layer control information, the application layer control information comprises a control information field and a state information field, and the URL information comprises a keyword of a first protocol.
10. The method according to claim 9, wherein the method further comprises:
Receiving state information sent by at least one electronic device, wherein the at least one electronic device includes the second device, and the state information includes at least one of switch state information, screen-throwing switch state information, video capability information, audio capability information, or playing media content URI information, wherein the switch state information is used to indicate whether the device is turned on, the screen-throwing switch state information is used to indicate whether the device supports screen throwing, the video capability information is used to indicate whether the device supports playing video, the audio capability information is used to indicate whether the device supports playing audio, and the playing media content URI information is used to indicate the URI of the media content currently played by the device.
11. The method according to claim 10, wherein the method further comprises:
and determining a screen-throwing device according to the state information, wherein the screen-throwing device is an electronic device, in the at least one electronic device, that supports screen throwing and is turned on.
12. The method of claim 11, wherein the method further comprises:
Receiving a first user operation, wherein the first user operation is used for selecting a screen throwing device;
and determining the second device from the screen-throwing devices according to the first user operation.
13. The method according to any one of claims 10 to 12, further comprising:
and determining screen-throwing media content according to the state information, wherein the screen-throwing media content is the screen-throwing content, among a plurality of screen-throwing contents, that matches a media type supported by the second device.
14. The method of claim 13, wherein the method further comprises:
receiving a second user operation, wherein the second user operation is used for selecting screen-casting content;
And determining the content to be screen-projected from the screen-projected media content according to the second user operation.
15. The method according to any one of claims 9 to 14, further comprising:
and receiving a URI, sent by a third device, of the media content to be screen-projected.
16. The method according to any one of claims 9 to 15, further comprising:
receiving a play message sent by the second device, wherein the play message is used for indicating to play the media content to be screen-projected;
establishing a media transmission channel with the second device;
and sending the media content to be screen-projected to the second device through the media transmission channel.
17. The method according to any one of claims 9 to 16, further comprising:
and performing compression operation on the media content to be screen-projected, wherein the compression operation comprises at least one of video compression, audio compression or picture compression.
18. A screen projection device, comprising: a receiving unit and a transmitting unit;
the receiving unit is configured to receive a first packet sent by a first device, where the first packet is used to instruct a second device to perform a screen-throwing operation or obtain a device state of the second device, the first packet includes a JSON string and URL information, the JSON string includes application layer control information, the application layer control information includes a control information field and a state information field, and the URL information includes a keyword of the first protocol;
The sending unit is configured to send the first packet to the second device.
19. The apparatus of claim 18, wherein the control information field comprises a screen-on instruction field comprising a URI of the media content to be screen-on.
20. The apparatus of claim 18 or 19, wherein the status information field comprises a switch status field, a screen-casting switch status field, a video capability field, an audio capability field, and a play media content URI field.
21. The apparatus according to any one of claims 18 to 20, wherein the first message further comprises transport layer information, the transport layer information being used to indicate that a transport layer of the first message is TCP, TLS, UDP, or DTLS.
22. The apparatus according to any one of claims 18 to 21, wherein the first message further comprises network layer information, the network layer information being used to indicate that the network layer of the first message is IPv4 or IPv6.
23. The apparatus according to any one of claims 18 to 22, wherein the first message further comprises data link layer information, the data link layer information being used to indicate that the data link layer of the first message is IEEE 802.11 or 802.3.
24. The apparatus according to any one of claims 18 to 23, wherein the first message further comprises physical layer information, the physical layer information being used to indicate that the physical layer of the first message is Wi-Fi, ethernet, bluetooth or ZigBee.
25. The apparatus according to any one of claims 18 to 24, wherein the receiving unit is further configured to:
Receiving authentication information sent by an electronic device, wherein the electronic device comprises the first device and/or the second device, and the authentication information comprises at least one of a certificate, a license, or a personal identification number (PIN) of a user;
performing security authentication according to the authentication information;
The transmitting unit is further configured to:
and sending the cloud device authentication information to the electronic device.
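For illustration only, a sketch of the security authentication exchange of claim 25: the cloud-side apparatus checks a credential (certificate, license, or PIN) received from the first or second device and, if it passes, answers with its own authentication information. The PIN store, the PIN-only check, and all names are assumptions.

```python
PROVISIONED_PINS = {"482913"}  # provisioned out of band (assumption)


def authenticate(auth_info):
    """Return the cloud device's authentication info, or None on failure."""
    if auth_info.get("pin") in PROVISIONED_PINS:
        # On success the cloud device sends back its own credentials.
        return {"cloudCert": "<cloud certificate bytes>"}
    return None
```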
26. A screen projection apparatus, comprising a transceiver unit;
the transceiver unit is configured to send a first message to a second device, where the first message is used to instruct the second device to perform a screen projection operation or to obtain a device state of the second device, the first message comprises a JSON string and URL information, the JSON string comprises application layer control information, the application layer control information comprises a control information field and a state information field, and the URL information comprises a keyword of a first protocol.
27. The apparatus of claim 26, wherein the transceiver unit is further configured to:
receive state information sent by at least one electronic device, where the at least one electronic device comprises the second device, and the state information comprises at least one of switch state information, screen projection switch state information, video capability information, audio capability information, or playing media content URI information, where the switch state information is used to indicate whether a device is powered on, the screen projection switch state information is used to indicate whether the device supports screen projection, the video capability information is used to indicate whether the device supports playing video, the audio capability information is used to indicate whether the device supports playing audio, and the playing media content URI information is used to indicate a URI of the media content currently played by the device.
28. The apparatus of claim 27, wherein the apparatus further comprises: a processing unit;
the processing unit is configured to determine the screen projection devices according to the state information, wherein a screen projection device is an electronic device, among the at least one electronic device, that is powered on and supports screen projection.
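For illustration only, a sketch of claims 27 and 28: from the state information each electronic device reports, keep the devices that are powered on and have screen projection enabled. The key names are illustrative assumptions.

```python
def screen_projection_devices(reports):
    """Return the IDs of devices that are on and can accept projection."""
    return [r["id"] for r in reports if r.get("powerOn") and r.get("castOn")]


# Example state reports from three electronic devices (hypothetical).
reports = [
    {"id": "tv",      "powerOn": True,  "castOn": True},
    {"id": "speaker", "powerOn": True,  "castOn": False},
    {"id": "tablet",  "powerOn": False, "castOn": True},
]
```

Only the television qualifies here: the speaker has projection switched off and the tablet is powered down.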
29. The apparatus of claim 28, wherein the transceiver unit is further configured to:
receive a first user operation, wherein the first user operation is used to select a screen projection device;
the processing unit is further configured to:
determine the second device from the screen projection devices according to the first user operation.
30. The apparatus of any one of claims 27 to 29, wherein the processing unit is further configured to:
determine the projectable media content according to the state information, wherein the projectable media content is, among a plurality of pieces of candidate content, the content whose media type matches a media type supported by the second device.
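For illustration only, a sketch of claim 30: keep only the candidate contents whose media type the second device reports it can play, based on its video and audio capability information. The field names are assumptions.

```python
def matching_media(contents, device_state):
    """Filter candidate contents by the device's reported capabilities."""
    supported = set()
    if device_state.get("videoCapable"):
        supported.add("video")
    if device_state.get("audioCapable"):
        supported.add("audio")
    return [c for c in contents if c["type"] in supported]
```

An audio-only device would thus be offered only the audio candidates, never a video URI it cannot render.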
31. The apparatus of claim 30, wherein the transceiver unit is further configured to:
receive a second user operation, wherein the second user operation is used to select content to be projected;
the processing unit is further configured to:
determine the content to be projected from the projectable media content according to the second user operation.
32. The apparatus according to any one of claims 26 to 31, wherein the transceiver unit is further configured to:
receive, from a third device, a URI of the media content to be projected.
33. The apparatus according to any one of claims 26 to 32, wherein the transceiver unit is further configured to:
receive a play message sent by the second device, wherein the play message is used to instruct playing of the media content to be projected;
establish a media transmission channel with the second device;
and send the media content to be projected to the second device through the media transmission channel.
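For illustration only, a sketch of claim 33: after receiving a play message, the first device streams the media content to the second device over an established media channel. The chunk size and the callback-based channel abstraction are assumptions.

```python
def handle_play(play_msg, media, send, chunk=4096):
    """On a play message, send the media in chunks over the channel.

    Returns the number of chunks written; 0 if the message is not a
    play instruction.
    """
    if play_msg.get("action") != "play":
        return 0
    count = 0
    for i in range(0, len(media), chunk):
        send(media[i:i + chunk])  # write one chunk to the media channel
        count += 1
    return count
```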
34. The apparatus according to any one of claims 26 to 33, wherein the transceiver unit is further configured to:
perform a compression operation on the media content to be projected, wherein the compression operation comprises at least one of video compression, audio compression, or picture compression.
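For illustration only, a sketch of the compression operation of claim 34 before the content crosses the media channel. General-purpose `zlib` stands in here for the video, audio, or picture codecs the claim actually contemplates.

```python
import zlib


def compress_media(data):
    """Compress raw media bytes before sending over the media channel."""
    return zlib.compress(data)


def decompress_media(data):
    """Inverse operation on the receiving (second) device."""
    return zlib.decompress(data)
```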
35. A screen projection apparatus, comprising at least one processor and a memory, wherein the at least one processor executes programs or instructions stored in the memory to cause the screen projection apparatus to implement the method of any one of claims 1 to 17.
36. A computer readable storage medium storing a computer program, characterized in that the computer program, when run on a computer or a processor, causes the computer or the processor to implement the method of any one of the preceding claims 1 to 17.
37. A computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to carry out the method of any one of the preceding claims 1 to 17.
CN202211487985.6A 2022-11-25 2022-11-25 Screen projection method and device Pending CN118101669A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211487985.6A CN118101669A (en) 2022-11-25 2022-11-25 Screen projection method and device
PCT/CN2023/094525 WO2024108928A1 (en) 2022-11-25 2023-05-16 Screen mirroring method and apparatus


Publications (1)

Publication Number Publication Date
CN118101669A true CN118101669A (en) 2024-05-28

Family

ID=91142799



Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015162550A1 (en) * 2014-04-22 2015-10-29 Screemo Ltd. System, device, and method for interactive communications among mobile devices and ip-connected screens
CN112839238B (en) * 2019-11-22 2023-03-24 腾讯科技(深圳)有限公司 Screen projection playing method and device and storage medium
CN111240620A (en) * 2019-12-31 2020-06-05 创维集团有限公司 Intelligent terminal screen projection processing method and device, computer equipment and medium
CN112019889B (en) * 2020-10-26 2023-06-20 深圳乐播科技有限公司 Cloud-based screen projection system and screen projection method

Also Published As

Publication number Publication date
WO2024108928A1 (en) 2024-05-30


Legal Events

Date Code Title Description
PB01 Publication