WO2022042364A1 - Screen projection method and apparatus, and delivery terminal - Google Patents

Screen projection method and apparatus, and delivery terminal (投屏方法、装置及投送端)

Info

Publication number
WO2022042364A1
Authority
WO
WIPO (PCT)
Prior art keywords: data, delivered, screen projection, receiving end, screen
Application number
PCT/CN2021/112885
Other languages
English (en)
French (fr)
Inventor
朱冲 (Zhu Chong)
吴志鹏 (Wu Zhipeng)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP21860202.7A (EP4199431A4)
Priority to US18/043,296 (US20240015350A1)
Publication of WO2022042364A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/02: Details
    • H04L 12/12: Arrangements for remote connection or disconnection of substations or of equipment thereof
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41: Structure of client; Structure of client peripherals
    • H04N 21/4104: Peripherals receiving signals from specially adapted client devices
    • H04N 21/4122: Peripherals receiving signals from specially adapted client devices; additional display device, e.g. video projector
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/436: Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N 21/4363: Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N 21/43637: Adapting the video stream to a specific local network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]

Definitions

  • the present application belongs to the technical field of screen projection, and in particular relates to a screen projection method, device and delivery terminal.
  • the terminal equipment in the screen projection system includes a sending end and a receiving end.
  • Common screen projection methods include screen mirroring (Miracast) and Digital Living Network Alliance (DLNA) projection.
  • Screen mirroring means that the delivery end mirrors the content of its entire screen to the corresponding receiving end.
  • DLNA is a screen projection solution. Based on a set of protocols for interconnection between computers, mobile terminals and consumer appliances, DLNA allows the delivery end to deliver media data to the receiving end, and the receiving end plays it to realize screen projection.
  • the media data includes audio, video, and pictures.
  • the embodiments of the present application provide a screen projection method, device, and delivery terminal, which can solve the problem of poor screen projection effect in the prior art.
  • a first aspect of the embodiments of the present application provides a screen projection method, which is applied to a delivery terminal, including:
  • the delivery end determines the data to be delivered.
  • the delivery end obtains its own first data permission for the data to be delivered and the receiving end's second data permission for the data to be delivered.
  • if the first data permission is higher than the second data permission, the sending end delivers the data to be delivered to the receiving end by way of screen mirroring; if the first data permission is lower than the second data permission, the sending end delivers the data to be delivered to the receiving end by means of the Digital Living Network Alliance (DLNA).
  • That is, the data permissions of the delivery end and the receiving end for the data to be delivered are compared. If the delivery end has the higher authority, screen mirroring is used to project the data to be delivered, so the delivery end's higher data authority can be fully used to play it. If the receiving end has the higher data authority, DLNA is used, so the receiving end's higher data authority can be fully used to play the data to be delivered.
  • In this way, automatic selection of the screen projection mode is realized, and the user is always provided with the higher data authority for the data to be delivered.
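  • As a minimal sketch of this selection logic (not the patent's actual implementation; the numeric permission encoding and all names are assumptions for illustration), the first-aspect decision can be written as:

```kotlin
// Hypothetical sketch of the first-aspect decision: compare the delivery end's and the
// receiving end's data permissions for the data to be delivered and pick a projection mode.
enum class ProjectionMode { SCREEN_MIRRORING, DLNA }

// Assumed representation: a higher number means a higher data permission
// (e.g. 0 = ordinary account, 1 = VIP account).
fun chooseProjectionMode(firstDataPermission: Int, secondDataPermission: Int): ProjectionMode =
    if (firstDataPermission > secondDataPermission) {
        // The delivery end can play the data with the higher authority, so mirror its screen.
        ProjectionMode.SCREEN_MIRRORING
    } else {
        // Equal or lower authority on the delivery end: push the data via DLNA so the
        // receiving end plays it with its own authority (the combined "lower or equal" rule).
        ProjectionMode.DLNA
    }

fun main() {
    println(chooseProjectionMode(firstDataPermission = 1, secondDataPermission = 0)) // SCREEN_MIRRORING
    println(chooseProjectionMode(firstDataPermission = 0, secondDataPermission = 1)) // DLNA
}
```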
  • In a possible implementation of the first aspect, if the data to be delivered is not media data, the sending end delivers it to the receiving end by way of screen mirroring.
  • In this case screen mirroring is automatically selected to record interfaces such as games, desktops, or documents, and the recorded screen capture data is sent to the receiving end as a video stream, realizing adaptive screen projection of non-media data.
  • the sending end sends the data to be sent to the receiving end by means of the Digital Living Network Alliance.
  • DLNA adopts the method of pushing the URL of the data to be delivered to realize screen projection. Therefore, theoretically, the delivery terminal itself may not need to play the data to be delivered. And users can run the screencasting function in the background and use other functions other than the screencasting function normally.
  • In the DLNA mode, the receiving end can perform the playback operations on the data to be delivered. Therefore, the user can operate the receiving end directly when viewing the data to be delivered there, so the screen projection experience is better.
  • In addition, the sending end does not need to keep its screen on, which saves energy and reduces waste of resources.
  • the third possible implementation manner of the first aspect further includes:
  • if the first data permission is the same as the second data permission, the sending end obtains its own first decoding quality for the data to be delivered and the receiving end's second decoding quality for the data to be delivered.
  • if the first decoding quality is higher than the second decoding quality, the sending end delivers the data to be delivered to the receiving end by way of screen mirroring; if the first decoding quality is lower than the second decoding quality, the sending end delivers the data to be delivered to the receiving end by means of the Digital Living Network Alliance.
  • the data authority of the data to be delivered is first compared between the sending end and the receiving end. If the data rights are the same, then compare the decoding capabilities of the data to be sent. If the decoding capability of the sending end is stronger, the screen mirroring method is used to cast the screen. At this time, the strong decoding capability of the delivery terminal can be fully utilized to decode and play the data to be delivered. When the decoding capability of the receiving end is stronger, the DLNA method is used for screen projection. At this time, the strong decoding capability of the receiving end can be fully utilized to decode and play the data to be sent.
  • the fourth possible implementation manner of the first aspect further includes:
  • if the first decoding quality is the same as the second decoding quality, the sending end delivers the data to be delivered to the receiving end by means of the Digital Living Network Alliance.
  • When the decoding qualities are the same, the sending end and the receiving end would display the data to be delivered essentially identically. However, screen mirroring and DLNA can differ greatly in the user's operating experience during actual screen projection, so to improve the overall screen projection effect and convenience of use, DLNA is chosen, which gives the better screen projection experience.
  • the operation of acquiring the first data permission includes:
  • the delivery end determines, from the installed application programs, a first application program that can play the data to be delivered.
  • the user account in the first application program is acquired, and the first data permission is determined according to the user account.
  • That is, the data permission of the delivery end for the data to be delivered is determined from the user account in the application program on the delivery end. This allows the embodiment of the present application to clearly determine whether the delivery end has a user account entitled to play the data to be delivered.
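  • A hedged sketch of this account-based check, using a simplified application/account model and hypothetical names (not a real platform API), might look like:

```kotlin
// Hypothetical sketch of obtaining the first data permission on the delivery end:
// find an installed application that can play the data to be delivered, then derive
// the permission from the user account logged into that application.
data class InstalledApp(val packageName: String, val playableTypes: Set<String>, val accountIsVip: Boolean)

fun firstDataPermission(installedApps: List<InstalledApp>, dataType: String): Boolean {
    // Determine the "first application program" that can play the data to be delivered.
    val firstApp = installedApps.firstOrNull { dataType in it.playableTypes } ?: return false
    // The data permission is decided by the user account in that application
    // (simplified here to "is the logged-in account a VIP account?").
    return firstApp.accountIsVip
}

fun main() {
    val apps = listOf(
        InstalledApp("com.example.videoplayer", setOf("video"), accountIsVip = true),
        InstalledApp("com.example.musicplayer", setOf("audio"), accountIsVip = false)
    )
    println(firstDataPermission(apps, "video")) // true: the delivery end holds a VIP account
}
```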
  • the operation of acquiring the second data permission includes:
  • the sending end sends the first information of the data to be sent to the receiving end.
  • the sending end receives the second data permission returned by the receiving end for the first information.
  • That is, the delivery end sends relevant information (i.e., the first information) of the data to be delivered to the receiving end.
  • The receiving end determines its data authority for the data to be delivered according to that information and feeds it back to the delivery end, realizing effective acquisition of the second data permission.
  • a second aspect of the embodiments of the present application provides a screen projection device, including:
  • the data determination module is used to determine the data to be sent when the screen projection function is activated.
  • the authority obtaining module is used for obtaining the first data authority for the data to be delivered by the delivery end and the second data authority for the data to be delivered by the receiver when the data to be delivered is media data.
  • the mirroring screen projection module is used to deliver the data to be delivered to the receiving end by means of screen mirroring when the first data authority is higher than the second data authority.
  • the digital screen projection module is used to deliver the data to be delivered to the receiving end by means of the Digital Living Network Alliance when the first data authority is lower than the second data authority.
  • a third aspect of the embodiments of the present application provides a delivery end, where the delivery end includes a memory and a processor, the memory stores a computer program that can run on the processor, and when the processor executes the computer program, the delivery end is made to implement the steps of the screen projection method according to any one of the above-mentioned first aspects.
  • a fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, and when the computer program is executed by a processor, the delivery end implements the steps of the screen projection method according to any one of the above-mentioned first aspects.
  • a fifth aspect of the embodiments of the present application provides a computer program product, which, when the computer program product runs on the delivery terminal, causes the delivery terminal to execute the screen projection method according to any one of the above-mentioned first aspects.
  • a sixth aspect of the embodiments of the present application provides a chip system, the chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory, so as to implement the steps of the screen projection method according to any one of the foregoing first aspects.
  • the chip system may be a single chip or a chip module composed of multiple chips.
  • FIG. 1 is a schematic flowchart of a screen projection method provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a screen projection method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a screen projection method provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a screen projection method provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a screen projection method provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a screen projection method provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a screen projection device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a mobile phone to which the screen projection method provided by an embodiment of the present application is applicable.
  • the terminal equipment in the screen projection system includes a sending end and a receiving end.
  • the data to be delivered (that is, the data that needs to be screened) can be divided into two types: media data and non-media data.
  • the media data includes audio, video, and pictures.
  • Non-media data includes all types of data other than media data, such as interfaces and documents.
  • Common screen projection methods include screen mirroring and DLNA.
  • screen mirroring means that the sending end takes a screenshot of the content displayed on its own screen, and sends the recorded screenshot data to the receiving end synchronously, and the receiving end plays it to complete the screen casting.
  • DLNA is a screen projection solution designed to solve the interconnection of wireless and wired networks including computers, consumer appliances (such as TVs) and mobile terminals, enabling the unlimited sharing and growth of digital media and content services.
  • DLNA contains a variety of protocols for interconnection between computers, mobile terminals and consumer appliances. By complying with and using these protocols, the media data of the sending end can be pushed to the receiving end in the form of a data address (Uniform Resource Locator, URL). The receiving end plays according to the received address, so as to realize the screen projection of the media data.
  • the delivery terminal itself can exit the playback interface and perform other operations.
  • the receiving end only needs to play the received screenshot data, so the requirement for the decoding ability of the receiving end to the media data is low.
  • screen mirroring requires the delivery end to have a strong media data decoding capability, so that the delivery end can decode and play media data and display non-media data.
  • In DLNA, since the receiving end obtains and plays the media data through the URL, the receiving end needs a certain decoding capability, while the requirement on the delivery end's decoding capability is lower.
  • the data authority refers to the playback authority of the terminal device to the media data.
  • the data rights include whether the terminal device has the right to completely play the data to be delivered, and whether the terminal device has the right to decrypt if the data to be delivered is encrypted.
  • The data permission determines whether a terminal device can play the data to be delivered normally. On this basis, when the data authority of the sending end is higher than that of the receiving end, if DLNA is used to project the data to be delivered, the user can only play the data to be delivered with the lower authority.
  • Likewise, when the sending end's data authority for the data to be delivered is lower than the receiving end's, if screen mirroring is used for projection, the user can again only play the data to be delivered with the lower authority.
  • For example, suppose the data to be delivered is an online video, the sending end is a mobile phone, and the receiving end is a computer. Suppose the mobile phone has a VIP (Very Important Person) account on the video platform used to play the online video, the computer has an ordinary account on the same platform, the VIP account can play the online video completely, and the ordinary account can only play the first 30 seconds.
  • If DLNA is used for projection in this case, the user can only play the first 30 seconds of the online video on the receiving end.
  • Conversely, suppose the mobile phone has an ordinary account and the computer has a VIP account. If screen mirroring is used for projection, the user can again only see the first 30 seconds of the online video on the receiving end.
  • Therefore, in the embodiments of the present application, when performing screen projection the delivery end first identifies whether the data to be delivered is media data. If it is media data, the delivery end obtains its own data permission for the data to be delivered and the receiver's data permission for the data to be delivered, and compares them. If the sending end's data authority is higher, screen mirroring is used, so the delivery end's data permission can be fully used to play the data to be delivered. If the receiving end's data authority is higher, DLNA is used, so the receiving end's data permission can be fully used to play the data to be delivered.
  • the automatic selection of the screen projection mode can be realized, and the user is always provided with a higher data authority for the data to be delivered. Therefore, in the actual screen projection process, the user can use a higher data authority to play the data to be delivered, which prevents the situation that the data to be delivered cannot be played normally due to low data authority. Therefore, the embodiments of the present application can achieve a better screen projection effect and improve user experience.
  • the data to be delivered refers to the data that needs to be projected.
  • the data to be delivered is divided into two types: media data and non-media data.
  • The media data includes data such as audio, video, and pictures. Non-media data includes all data other than media data, such as display interfaces and documents.
  • the actual type of data to be delivered needs to be determined according to the actual application scenario.
  • the sending end refers to a terminal device that delivers the data to be delivered.
  • the receiving end refers to the terminal device that receives the data to be delivered and plays or displays it.
  • the embodiments of the present application do not specifically limit the device types of the transmitting end and the receiving end, which can be terminal devices such as mobile phones, TVs, personal computers, tablet computers, or wearable devices.
  • The specific device types are determined by the application scenario. For example, when a mobile phone projects its screen to a smart watch and a TV, the mobile phone is the transmitting end, and the smart watch and the TV are both receiving ends.
  • the delivery end is the execution body of the screen projection method provided by each embodiment of the present application.
  • Data permissions exist for the purpose of providing differentiated media services for users, or to ensure the security of media data.
  • different data permissions are often set for different terminal devices, so as to flexibly control playback operations of media data by different terminal devices.
  • some video platforms provide users with ordinary accounts and VIP accounts, where the VIP account has the right to play the VIP video completely, while the ordinary account can only watch part of the VIP video.
  • the video data permissions of the terminal device will vary according to the user's account.
  • For security, the media data may be encrypted, and different security levels may be set for different terminal devices.
  • Only when its security level reaches a certain threshold does a terminal device have permission to decrypt and play the media data. Alternatively, the media data is not encrypted, but corresponding security levels are still set for different terminal devices, and only when the security level reaches a certain threshold does the terminal device have permission to access and play. Therefore, for a single piece of media data, even if a terminal device has the hardware and software configuration needed to play it, without the corresponding data permission it is theoretically difficult to play the media data normally.
  • the specific permission content included in the data permission can be set by the technical personnel according to the actual situation. For example, it may only include "whether you have a VIP account" or "security level of the terminal device". It can also include "whether there is a VIP account” and "security level of the terminal device”. It can also include more other rights, such as "whether the encrypted media data can be decrypted" and so on.
  • the data permission can be bound with the terminal device, or can be bound with the user account in the terminal device, or can be bound with the account of the application program in the terminal device.
  • The specific binding needs to be determined according to the actual media data. For example, when the media data is offline media data (such as media data stored locally on the terminal device, or, for the receiving end, received media data that is stored locally on the transmitting end), the data permission can be bound to the physical address of the terminal device or to the user account logged in on the terminal device.
  • When the media data is online media data (such as an online video provided by a video platform, where a specific application program, for example the platform's client or a browser, must be used to access and play it), the data permission can be bound to the account logged in to that specific application on the terminal device.
  • Decoding capability: with the advancement of technology, users have higher and higher requirements for the quality of media data, so more and more high-quality media data appears on the market, for example lossless music, 4K movies, and 8K pictures, where 4K refers to a resolution of 3840x2160 and 8K refers to a resolution of 7680x4320. To play such high-quality media data, the terminal device needs the corresponding decoding capability, that is, the capability of restoring the data into playable audio, video or pictures.
  • Different terminal devices have different decoding capabilities for media data.
  • If the decoding capability of a terminal device is weaker than what the data to be decoded requires, it is very likely that decoding fails and the data cannot be played, or that it can be decoded but indicators such as sound quality, clarity and fluency are degraded.
  • For example, if the video decoding capability of the terminal device is weak, say only 1080P decoding is supported, and a 4K movie needs to be decoded and played, playback may stutter, there may be only sound but no image, or the video may not play at all.
  • Similarly, if the audio decoding capability of the terminal device is weak, decoding high-quality audio may cause playback to stutter or fail entirely. It can be seen that the ability of the transmitting end and the receiving end to decode the media data affects the sound quality, clarity and fluency of playback during the final screen projection, which in turn has a certain impact on the quality of the final projection. Therefore, in some embodiments of the present application, the decoding capabilities of the transmitting end and the receiving end for the data to be delivered are compared to assist the automatic selection of the screen projection mode.
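  • The effect of insufficient decoding capability can be illustrated with a small sketch; the resolution figures follow the 1080P/4K examples in the text, and the pixel-count check itself is an illustrative simplification:

```kotlin
// If the content's resolution exceeds the highest resolution a device can decode,
// playback is likely to stutter or fail, as described above.
data class Resolution(val width: Int, val height: Int) {
    val pixels: Long get() = width.toLong() * height
}

val FULL_HD = Resolution(1920, 1080) // 1080P
val UHD_4K = Resolution(3840, 2160)  // 4K

fun canDecodeSmoothly(maxSupported: Resolution, content: Resolution): Boolean =
    content.pixels <= maxSupported.pixels

fun main() {
    println(canDecodeSmoothly(maxSupported = FULL_HD, content = UHD_4K)) // false: likely stutter or no image
    println(canDecodeSmoothly(maxSupported = UHD_4K, content = FULL_HD)) // true
}
```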
  • FIG. 1 shows a flowchart of the implementation of the screen projection method provided in Embodiment 1 of the present application, which is described in detail as follows:
  • the delivery end determines the data to be delivered, and identifies whether the data to be delivered is media data.
  • the delivery terminal has a screen projection function.
  • This function may be a built-in function of the delivery terminal software system, or may be a function of an application program installed in the delivery terminal. The details can be determined according to the actual scene.
  • the embodiments of the present application do not limit the activation mode of the screen projection function too much, and can also be determined according to actual scenarios.
  • For example, the user may activate the screen casting function by operating the delivery terminal, or another device may remotely start the screen projection function by sending a start command to the delivery terminal. When the screen projection function is a built-in function of the delivery terminal's software system, the screen projection function can be placed in the system settings of the software system.
  • When using the terminal, the user can then activate the screen projection function through the system settings. Alternatively, a shortcut to the screen projection function can be provided through a desktop icon, a floating window, or the pull-down notification bar, and the user can quickly start the screen projection function by tapping the corresponding icon or area.
  • When the screen-casting function is a function within an application, the application developer can set the startup mode of the screen-casting function according to actual needs. For example, for a video platform, a screen projection icon can be placed on the video playback interface, and the user activates the screen projection function by tapping the icon.
  • the delivery end will first determine the data to be projected this time (that is, the data to be delivered).
  • the data to be delivered can be determined according to the actual situation, and there are no too many restrictions here. For example, it can be set that in the process of starting the screen projection function, the data to be delivered needs to be selected first. For example, select a video, audio or picture, or select a screen-casting interface or a document.
  • In this case, when the screen projection function is activated, the data to be delivered can be determined according to the user's selection.
  • a screen-casting icon can be set on the media data playing interface, and the media data played on the current interface can be set as the corresponding data to be delivered. At this time, if the screen projection function is enabled, the media data played on the current interface can be used as the data to be delivered.
  • After determining the data to be delivered, the embodiment of the present application identifies whether it is media data, that is, whether the data to be delivered is audio, video or a picture. If it is any of these, the data to be delivered is determined to be media data; otherwise it is determined to be non-media data.
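  • A minimal sketch of this media-data check, assuming a simple file-extension heuristic (the extension lists are illustrative, not from the patent), could be:

```kotlin
// The data to be delivered counts as media data only if it is audio, video, or a picture.
val AUDIO_EXTENSIONS = setOf("mp3", "flac", "aac", "wav")
val VIDEO_EXTENSIONS = setOf("mp4", "mkv", "avi", "mov")
val PICTURE_EXTENSIONS = setOf("jpg", "jpeg", "png", "gif", "bmp")

fun isMediaData(fileName: String): Boolean {
    val ext = fileName.substringAfterLast('.', missingDelimiterValue = "").lowercase()
    return ext in AUDIO_EXTENSIONS || ext in VIDEO_EXTENSIONS || ext in PICTURE_EXTENSIONS
}

fun main() {
    println(isMediaData("movie.mp4"))   // true  -> compare data permissions next
    println(isMediaData("report.docx")) // false -> fall back to screen mirroring
}
```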
  • the delivery end obtains its own first data permission for the data to be delivered and the receiving end's second data permission for the data to be delivered, and compares the levels of the first data permission and the second data permission.
  • the data rights (ie, the first data rights and the second data rights) of the data to be sent by the sending end and the receiving end are obtained.
  • For the first data permission, the delivery end may acquire it by reading its own data permission for the data to be delivered. For example, in some optional embodiments, when the data permission is judged by "whether a VIP account is available", the application program (i.e., the first application program) that can play the data to be delivered, such as a certain video player, can be determined from the application programs installed on the delivery end; the user account in that application is then obtained, and whether the delivery end has the data permission to play the data to be delivered is determined according to the user account.
  • the sending end needs to request the corresponding data from the receiving end.
  • the information related to the data to be delivered may be sent by the delivery end to the receiving end.
  • the receiving end reads its own data authority (ie, the second data authority) of the data to be delivered and sends it to the sending end.
  • The relevant information may be attribute data of the data to be delivered itself, such as data type, data size, and resolution. It may also be playback information related to the data to be delivered. For example, when the data to be delivered imposes a security level requirement on the terminal device, so that it can only be played when the terminal device's security level reaches a certain preset level or higher, the playback information may be that security level requirement.
  • When the data to be delivered is an online video of a video platform, the playback information may be information about the video platform, such as the name of the video platform or the unique identifier of the video platform application, so that the receiving end can uniquely determine the video platform and determine whether it has a corresponding VIP account or the corresponding video-on-demand authority.
  • the request operation for the second data permission may be as follows:
  • the sending end sends first information of the data to be sent to the receiving end.
  • the receiving end after receiving the first information, the receiving end obtains the second data authority of the data to be delivered according to the first information, and sends the second data authority to the delivery end.
  • Correspondingly, the operation of obtaining the receiving end's second data permission for the data to be delivered can be replaced by: sending the first information of the data to be delivered to the receiving end, and receiving the second data permission returned by the receiving end for the first information.
  • the delivering end sends relevant information (ie, first information) of the data to be delivered to the receiving end.
  • the relevant information may be the information of the video platform that plays the data to be delivered.
  • After receiving the video platform information, the receiving end reads its own account status on the video platform (the account can be obtained, for example, by starting the video platform application program) and returns the result of whether it is a VIP account to the delivery end. Alternatively, the relevant information could be the URL of the online video.
  • When the receiving end receives the URL, it determines the corresponding video platform or video platform application according to the URL, then obtains its own account status on that platform, and returns the result of whether it is a VIP account to the delivery end.
  • As another example, assume the data to be delivered is local media data on the delivery terminal (such as local audio, video, or pictures) and, for security, a terminal device with a higher security level is required to play it, that is, the "security level of the terminal device" is included in the data permission.
  • The relevant information may then include the security level requirement of the local media data, for example "level 2 and above". After receiving the security level requirement, the receiving end sends its own security level to the sending end, or the receiving end itself judges whether its security level meets the requirement and returns the judgment result to the sending end.
  • Depending on where the screen projection function resides in the terminal device, there may also be certain differences in the way the data permission is obtained.
  • When the screen projection function is a function of the terminal device's software system, the data permission of the terminal device's software system and hardware components for the data to be delivered can be read,
  • for example, the security level of the software system and hardware components.
  • When the screen projection function exists within an application program, one or both of the data permission of the terminal device's software system and hardware components for the data to be delivered and the data permission of the application program for the data to be delivered can be obtained as needed.
  • When both are obtained, the two data permissions can be combined to determine the terminal device's final data permission for the data to be delivered.
  • If the data to be delivered is encrypted, the terminal device's decryption authority for the data to be delivered needs to be obtained.
  • In this case the terminal device can obtain the decryption authority of its own software system and hardware components for the data to be delivered and the decryption authority of the installed application program for the data to be delivered, and combine them. If neither can decrypt, it is determined that the terminal device has no decryption authority; if either or both can decrypt, it is determined that the terminal device has the authority to decrypt.
  • the embodiment of the present application compares the levels of the two data permissions, and determines the end with the higher data permissions.
  • the embodiments of the present application do not limit too much the comparison method of data authority, which can be set by technical personnel.
  • When the data permission contains only a single item, the level of that item may be compared directly. For example, when the data permission only includes "whether a VIP account is available", the VIP account status of the sender and the receiver can be compared directly; if both have a VIP account or neither has one, the data permissions are determined to be the same.
  • When the data permission contains multiple items, weight coefficients may be set for the different items; after comparing the items one by one, the final data authority is determined according to the weight coefficients. When the weight coefficients of all items are the same, this is equivalent to comparing data permissions by voting.
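  • A small sketch of such a weighted comparison is shown below; the item names and weights are illustrative assumptions, not values from the patent:

```kotlin
// Each permission item (e.g. VIP account, decryption authority, security level) contributes a
// weight; the end with the larger weighted score is treated as having the higher data permission.
data class PermissionItems(val hasVipAccount: Boolean, val canDecrypt: Boolean, val securityLevelOk: Boolean)

fun score(p: PermissionItems, weights: Map<String, Double>): Double =
    (if (p.hasVipAccount) weights.getValue("vip") else 0.0) +
    (if (p.canDecrypt) weights.getValue("decrypt") else 0.0) +
    (if (p.securityLevelOk) weights.getValue("security") else 0.0)

fun main() {
    // Equal weights reduce to the "voting" comparison described in the text.
    val weights = mapOf("vip" to 1.0, "decrypt" to 1.0, "security" to 1.0)
    val sender = PermissionItems(hasVipAccount = true, canDecrypt = true, securityLevelOk = false)
    val receiver = PermissionItems(hasVipAccount = false, canDecrypt = true, securityLevelOk = true)
    val senderScore = score(sender, weights)
    val receiverScore = score(receiver, weights)
    println(if (senderScore > receiverScore) "sender higher -> screen mirroring" else "receiver higher or equal -> DLNA")
}
```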
  • If the first data permission is higher than the second data permission, the data to be delivered is delivered to the receiving end by means of screen mirroring.
  • a screen mirroring method is used to perform screen projection. That is, the delivery end plays the data to be delivered according to its own data authority, for example, using a VIP account to play online videos, or decrypt the encrypted data to be delivered before playing. At the same time, a screen capture of the screen interface when the data to be delivered is played is recorded. The recorded screenshot data is then sent to the receiving end in a video stream or the like. Correspondingly, the receiving end can realize the screencast playback of the data to be delivered by playing the received screen shot data.
  • the user can watch the data to be delivered at the receiving end, and can control the playback operation of the data to be delivered at the sending end. For example, control the video playback progress, audio volume, or image scaling.
  • the embodiments of the present application do not limit the details of the screen mirroring operation too much, which can be set by technical personnel according to requirements.
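  • Conceptually, the mirroring loop can be sketched as below; ScreenCapturer, VideoEncoder and Transport are placeholder interfaces invented for illustration, not real platform APIs:

```kotlin
// The delivery end plays the data with its own authority, keeps capturing its screen,
// and streams the captured frames to the receiving end.
interface ScreenCapturer { fun captureFrame(): ByteArray }
interface VideoEncoder { fun encode(rawFrame: ByteArray): ByteArray }
interface Transport { fun send(packet: ByteArray) }

fun mirrorScreen(capturer: ScreenCapturer, encoder: VideoEncoder, transport: Transport, frames: Int) {
    repeat(frames) {
        val raw = capturer.captureFrame()   // screenshot of the playback interface
        val packet = encoder.encode(raw)    // compress into a video-stream packet
        transport.send(packet)              // push the packet to the receiving end
    }
}

fun main() {
    // Trivial stand-ins so the sketch runs end to end.
    val capturer = object : ScreenCapturer { override fun captureFrame() = ByteArray(4) }
    val encoder = object : VideoEncoder { override fun encode(rawFrame: ByteArray) = rawFrame }
    val transport = object : Transport { override fun send(packet: ByteArray) = println("sent ${packet.size} bytes") }
    mirrorScreen(capturer, encoder, transport, frames = 3)
}
```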
  • If the first data permission is lower than the second data permission, the data to be delivered is delivered to the receiving end by means of DLNA.
  • the DLNA method is used for screen projection. That is, the sending end sends the URL of the data to be sent to the receiving end.
  • the receiving end obtains the data to be delivered according to the URL, and plays the data to be delivered according to its own data permissions, for example, using a VIP account to play online videos, or decrypt the encrypted data to be delivered before playing.
  • the user can watch the data to be delivered at the receiving end, and can control the playback operation of the data to be delivered at the receiving end. For example, control the video playback progress, audio volume, or image scaling.
  • This embodiment of the present application does not limit the operation details of the DLNA too much, which can be set by technical personnel according to requirements.
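  • A conceptual sketch of this DLNA push, with illustrative names and a deliberately simplified receiving end, is:

```kotlin
// The delivery end only pushes the URL of the data to be delivered; the receiving end fetches
// and plays it with its own data permission, and playback controls run on the receiving end.
data class PushRequest(val mediaUrl: String)

class ReceivingEnd(private val hasVipAccount: Boolean) {
    fun onPush(request: PushRequest) {
        // The receiving end resolves the URL and plays with whatever authority it holds.
        val mode = if (hasVipAccount) "full playback (VIP account)" else "preview playback (ordinary account)"
        println("Playing ${request.mediaUrl} on the receiving end: $mode")
    }
}

fun main() {
    val receiver = ReceivingEnd(hasVipAccount = true)
    // The delivery end pushes the URL and is then free for other tasks; it need not keep its screen on.
    receiver.onPush(PushRequest(mediaUrl = "https://video.example.com/watch?id=123"))
}
```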
  • In this way, the data permissions of the delivery end and the receiving end for the data to be delivered are compared. If the delivery end has the higher authority, screen mirroring is used to project the data to be delivered, so the delivery end's higher data authority can be fully used to play it. If the receiving end has the higher data authority, DLNA is used, so the receiving end's higher data authority can be fully used to play the data to be delivered.
  • the automatic selection of the screen projection mode can be realized, and the user is always provided with a higher data authority for the data to be delivered.
  • the data permissions of the sending end and the receiving end may be the same for the data to be sent. That is, the result of S102 may be that the first data right and the second data right are the same. At this time, no matter which end is selected to play the data to be delivered, theoretically, the influence of the authority on the playback is the same.
  • screen mirroring or DLNA may be used to realize the screen projection of the data to be delivered.
  • In addition, the sending end and the receiving end may be far apart, making it spatially inconvenient for the user to operate the sending end. For example, when a desktop computer in the bedroom casts its screen to the TV in the living room, the user has to walk back to the bedroom whenever the data to be delivered needs to be paused or fast-forwarded, which is very inconvenient.
  • Screen mirroring generally requires the screen of the delivery side to be continuously on, which will result in high power consumption of the delivery side and waste of resources.
  • Therefore, in this embodiment, when the first data permission is the same as the second data permission, the data to be delivered is delivered to the receiving end by means of DLNA.
  • DLNA adopts the method of pushing the URL of the data to be delivered to realize screen projection. Therefore, theoretically, the delivery terminal itself may not need to play the data to be delivered. And users can run the screencasting function in the background and use other functions other than the screencasting function normally.
  • In the DLNA mode, the receiving end can perform playback operations on the data to be delivered, such as pausing, fast-forwarding and adjusting the volume of audio and video, and zooming pictures in and out. Therefore, the user can operate the receiving end directly while viewing the data to be delivered there, without having to go to the sending end.
  • the sending end does not need to keep the screen bright, so it is more energy-saving and power-saving and reduces waste of resources.
  • the DLNA method is used to perform screen projection of the data to be sent. At this time, the effect of screen projection is better for users.
  • In this case, the screen projection method decision table corresponding to the embodiment of the present application may be as shown in Table 1:

Table 1

| Comparison of data permissions for the data to be delivered | Screen projection mode |
| --- | --- |
| Sending end higher | Screen mirroring |
| Both higher and the same | DLNA |
| Receiving end higher | DLNA |
| Both lower and the same | DLNA |

  • That is, the comparison results of the data permissions of the sending end and the receiving end are divided into four types: the sending end's data permission is higher; both ends' data permissions are higher and the same; the receiving end's data permission is higher; and both ends' data permissions are lower and the same. A corresponding screen projection mode is set for each type.
  • the screen projection mode can be determined according to the comparison results, so as to realize an automatic decision on the screen projection mode.
  • S104 and S105 may be combined as: if the first data authority is lower than or equal to the second data authority, the data to be delivered is delivered to the receiving end by means of DLNA.
  • As noted above, the data permissions of the sending end and the receiving end for the data to be delivered may be the same, that is, the result of S102 may be that the first data permission and the second data permission are equal.
  • In that case, no matter which end plays the data to be delivered, the influence of the authority on playback is theoretically the same.
  • However, the ability of each end to decode the media data also greatly affects the playback effect, such as whether playback stutters and how clear it is; in a screen projection scenario this affects the final screen projection effect of the media data.
  • If the first data permission is the same as the second data permission, obtain the first decoding capability of the sending end for the data to be delivered and the second decoding capability of the receiving end for the data to be delivered, and compare the levels of the first decoding capability and the second decoding capability.
  • decoding is divided into hardware decoding and software decoding.
  • the software decoding refers to using the CPU to decode the media data, which needs to consume the computing resources of the CPU.
  • Hardware decoding is to use other hardware other than CPU to realize the decoding of media data. For example, a GPU or a hardware decoder is used to decode the media data.
  • the first decoding capability can be obtained by reading the hardware decoding capability and software decoding capability of the data of the data type to be delivered by the delivery end. For example, when the type of data to be delivered is video, the delivery end reads the decoding capability supported by itself for the video, such as 1080P and 4K.
  • the receiving end needs to read the hardware decoding capability and software decoding capability of the data to be sent according to the data type of the data to be sent, obtain the final decoding capability and feed it back to the sending end.
  • the sending end may send the type of the data to be delivered to the receiving end.
  • the receiving end can determine the type of the data to be delivered through the first information.
  • the first information contains the type of the data to be delivered, or the first information is a URL, and the receiving end determines the type of the data to be delivered through the URL. At this time, there is no need to send the type of data to be sent to the receiving end.
  • If the first decoding capability is higher than the second decoding capability, the delivery end has the higher decoding capability for the data to be delivered.
  • For example, assume the type of the data to be delivered is video, the delivery end supports both 1080P and 4K decoding and playback of video, while the receiving end only supports 1080P decoding and playback. In this case the delivery end has the higher, 4K, decoding capability.
  • The delivery terminal is therefore used to decode and play the data to be delivered, since indicators such as fluency and clarity during decoding and playback will be higher than on the receiving terminal with the weaker decoding capability. Accordingly, in this embodiment of the present application, screen mirroring is used to project the data to be delivered.
  • the delivery end uses its own decoding capability to decode and play the data to be delivered, and perform screen recording and transmission at the synchronization of playback.
  • For the specific description of screen mirroring projection, reference may be made to the description in S103, which is not repeated here.
  • If the second decoding capability is higher than the first decoding capability, the data to be delivered is delivered to the receiving end by means of DLNA.
  • In this case the receiving end has the higher decoding capability for the data to be delivered.
  • the type of data to be delivered is video, and it is assumed that the delivery end only supports 1080P decoding and playback of the video, but the receiving end supports both 1080P decoding and playback of the video and 4K decoding and playback. At this time, the receiving end has a higher 4k decoding capability. At this time, the receiving end is used to decode and play the data to be delivered.
  • the indicators such as fluency and clarity during decoding and playback will be higher than those of the delivery end with weak decoding ability. Therefore, in this embodiment of the present application, DLNA is used to project the data to be delivered. That is, the receiving end uses its own decoding capability to decode and play the data to be sent.
  • For the specific DLNA screen projection description, reference may be made to the description in S104, which is not repeated here.
  • S106 may be replaced by:
  • If the first data permission is the same as the second data permission, obtain the first decoding quality of the transmitting end for the data to be delivered and the second decoding quality of the receiving end for the data to be delivered, and compare the levels of the first decoding quality and the second decoding quality.
  • The decoding quality refers to the highest playback quality that a terminal device (either the transmitting end or the receiving end) supports when it decodes and plays data of the type of the data to be delivered with its highest decoding capability; it is a quantitative representation of decoding capability. For example, assume the type of the data to be delivered is video and the terminal device supports both 1080P and 4K decoding and playback of video. If the terminal device uses its highest decoding capability for video decoding and playback, the highest playback quality it supports is theoretically 4K, so its decoding quality is 4K. In this embodiment of the present application, the receiving end only needs to return its decoding quality (i.e., the second decoding quality) for the data to be delivered.
  • S107 and S108 can be replaced by:
  • If the first decoding quality is higher than the second decoding quality, the data to be delivered is delivered to the receiving end by way of screen mirroring; if the first decoding quality is lower than the second decoding quality, the data to be delivered is delivered to the receiving end by means of DLNA.
  • the decoding capabilities of the transmitting end and the receiving end may also be the same.
  • the embodiment of the present application will preferentially use the DLNA method to perform screen projection, that is, after S106, it also includes:
  • If the first decoding capability is the same as the second decoding capability, the data to be delivered is delivered to the receiving end by means of DLNA.
  • When the decoding capabilities are the same, the transmitting end and the receiving end would display the data to be delivered essentially identically.
  • However, screen mirroring and DLNA can differ greatly in the user's operating experience during the actual screen projection process, so in order to improve the overall screen projection effect and convenience of use,
  • the DLNA method is used to perform screen projection. The specific reasons for this choice and its beneficial effects can be found in the description of the embodiment shown in FIG. 3, which is not repeated here.
  • Correspondingly, S109 may be replaced with: if the first decoding quality is the same as the second decoding quality, the data to be delivered is delivered to the receiving end by means of DLNA.
  • In this case, the corresponding screen projection method decision table may be as shown in Table 2:

Table 2

| Comparison of decoding quality for the data to be delivered | Screen projection mode |
| --- | --- |
| Sending end higher | Screen mirroring |
| Both higher and the same | DLNA |
| Receiving end higher | DLNA |
| Both lower and the same | DLNA |

  • That is, the comparison results of the decoding quality of the sending end and the receiving end are divided into four types: the sending end's decoding quality is higher; both ends' decoding qualities are higher and the same; the receiving end's decoding quality is higher; and both ends' decoding qualities are lower and the same. A corresponding screen projection mode is set for each type.
  • the screen projection mode can be determined according to the comparison results, so as to realize an automatic decision on the screen projection mode.
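  • A sketch of the decision summarized in Table 2 (encoding the decoding quality as vertical resolution lines is an illustrative assumption, not the patent's representation) is:

```kotlin
// When the data permissions are equal, quantify each end's decoding capability as the highest
// playback quality it supports for this data type and pick the projection mode accordingly.
enum class Mode { SCREEN_MIRRORING, DLNA }

fun modeForEqualPermissions(firstDecodingQuality: Int, secondDecodingQuality: Int): Mode = when {
    firstDecodingQuality > secondDecodingQuality -> Mode.SCREEN_MIRRORING // delivery end decodes better
    firstDecodingQuality < secondDecodingQuality -> Mode.DLNA             // receiving end decodes better
    else -> Mode.DLNA // equal quality: prefer DLNA for operability and power saving
}

fun main() {
    println(modeForEqualPermissions(firstDecodingQuality = 2160, secondDecodingQuality = 1080)) // SCREEN_MIRRORING
    println(modeForEqualPermissions(firstDecodingQuality = 1080, secondDecodingQuality = 1080)) // DLNA
}
```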
  • the data authority of the data to be delivered is first compared between the sending end and the receiving end. If the data rights are the same, then compare the decoding capabilities of the data to be sent. If the decoding capability of the sending end is stronger, the screen mirroring method is used to cast the screen. At this time, the strong decoding capability of the delivery terminal can be fully utilized to decode and play the data to be delivered. When the decoding capability of the receiving end is stronger, the DLNA method is used for screen projection. At this time, the strong decoding capability of the receiving end can be fully utilized to decode and play the data to be sent.
  • Through the embodiments of the present application, automatic selection of the delivery mode can thus be realized even when the data permissions are the same, and the user is always provided with the stronger decoding capability for the data to be projected. In the actual screen projection process the user therefore sees the playback effect of the data to be delivered under the stronger decoding capability, which prevents situations where decoding with the weaker capability is not smooth or even fails, and makes the entire projection clearer and smoother. A better screen projection effect can be achieved and the user experience improved.
  • Meanwhile, by comparing data permissions first and then comparing decoding capabilities, normal playback of the data to be delivered is guaranteed first, and the more suitable decoding end is then chosen so that the whole screen projection process is more effective. Therefore, the embodiments of the present application realize adaptive selection of the screen projection mode and achieve a better screen projection effect.
  • In other cases, the data to be delivered may be non-media data, such as documents and game interfaces. Such non-media data cannot be projected in a DLNA manner, so in this embodiment of the present application screen mirroring is used for its projection.
  • Accordingly, the embodiments of the present application further include: if the data to be delivered is not media data, the data to be delivered is delivered to the receiving end by means of screen mirroring.
  • the screen mirroring method is automatically selected to perform screen recording on interfaces such as games, desktops, or documents, and the recorded screen capture data is sent to the receiving end in the form of video streams, so as to realize screen projection.
  • For the description of screen mirroring projection, reference may be made to the description in S103, which is not repeated here.
  • FIG. 7 shows a schematic structural diagram of the screen projection device provided by the embodiment of the present application. For convenience of description, only the part related to the embodiment of the present application is shown.
  • the screen projection device includes:
  • the data determination module 71 is configured to determine the data to be delivered when the screen projection function is activated.
  • the permission acquisition module 72 is configured to acquire, when the data to be delivered is media data, a first data permission of the delivery end for the data to be delivered, and a second data permission of the receiving end for the data to be delivered.
  • the mirror screen projection module 73 is configured to deliver the data to be delivered to the receiving end by means of screen mirroring when the first data authority is higher than the second data authority.
  • the digital screen projection module 74 is used to deliver the data to be delivered to the receiving end by means of DLNA when the first data authority is lower than the second data authority.
  • the mirror projection module 73 is also used for:
  • the data to be delivered is not media data
  • the data to be delivered is delivered to the receiving end by means of screen mirroring.
  • the digital screen projection module 74 is also used for:
  • the data to be delivered is delivered to the receiving end by means of DLNA.
  • the screen projection device further includes:
  • the decoding capability obtaining module is used for obtaining the first decoding quality of the data to be sent by the sending end and the second decoding quality of the data to be sent by the receiving end when the first data authority and the second data authority are the same.
  • the mirror screen projection module 73 is further configured to deliver the data to be delivered to the receiving end by means of screen mirroring when the first decoding quality is higher than the second decoding quality.
  • the digital screen projection module 74 is configured to deliver the data to be delivered to the receiving end by means of the Digital Living Network Alliance when the first decoding quality is lower than the second decoding quality.
  • the digital screen projection module 74 is also used for:
  • the data to be delivered is delivered to the receiving end by means of DLNA.
  • the permission acquisition module 72 includes:
  • the program determination module is used for determining the first application program that can play the data to be delivered from the application programs installed at the delivery end.
  • the permission acquisition sub-module is used to acquire the user account in the first application, and determine the first data permission according to the user account.
  • the permission acquisition module 72 includes:
  • the information sending module is used for sending the first information of the data to be sent to the receiving end.
  • the permission receiving module is configured to receive the second data permission returned by the receiver for the first information.
  • the term "if" may be contextually interpreted as "when", "once", "in response to determining" or "in response to detecting".
  • Similarly, the phrases "if it is determined" or "if the [described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined", "in response to the determination", "once the [described condition or event] is detected" or "in response to detection of the [described condition or event]".
  • first, second, third, etc. are only used to distinguish the description, and should not be construed as indicating or implying relative importance. It will also be understood that, although the terms “first,” “second,” etc. are used in the text to describe various elements in some embodiments of the present application, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
  • a first table could be named a second table, and similarly, a second table could be named a first table, without departing from the scope of the various described embodiments.
  • the first table and the second table are both tables, but they are not the same table.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • The appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", and the like in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise.
  • the terms "comprising", "including", "having" and their variants mean "including but not limited to" unless specifically emphasized otherwise.
  • the screen projection method provided by the embodiments of the present application can be applied to delivery terminals such as mobile phones, tablet computers, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA); the embodiments of the present application do not impose any restrictions on the specific type of the delivery terminal.
  • For example, the delivery end may be a station (STATION, ST) in a WLAN, a cellular phone, a personal digital assistant (PDA) device, a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, an in-vehicle device, an Internet of Vehicles terminal, a computer, a laptop computer, a handheld communication device, a handheld computing device, a TV set-top box (STB), customer premise equipment (CPE), and/or other equipment for communicating on wireless systems and next-generation communication systems, for example terminal equipment in a 5G network or terminal equipment in a future evolved Public Land Mobile Network (PLMN), etc.
  • by way of example and not limitation, when the delivery end is a wearable device, the wearable device may also be a general term for devices that can be worn, developed by applying wearable technology to the intelligent design of daily wear, such as glasses, gloves, watches, clothing, and shoes.
  • a wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not only a hardware device, but also realizes powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include devices that are full-featured and large-sized and can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used in conjunction with other devices such as a smartphone, such as various smart bracelets and smart jewelry for monitoring physical signs.
  • the following description takes the case where the delivery end is a mobile phone as an example.
  • FIG. 8 shows a schematic structural diagram of the mobile phone 100 .
  • the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, Mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, buttons 190, motor 191, indicator 192, camera 193, display screen 194, and SIM Card interface 195, etc.
  • the sensor module 180 may include a gyroscope sensor 180A, an acceleration sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an ambient light sensor 180E, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, and a touch sensor 180K (of course, the mobile phone 100 may also include other sensors, such as a temperature sensor, a pressure sensor, a distance sensor, an air pressure sensor, a bone conduction sensor, etc., which are not shown in the figure).
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the mobile phone 100 .
  • the mobile phone 100 may include more or less components than shown, or some components may be combined, or some components may be separated, or different component arrangements.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated into one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 100 . The controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can be called directly from this memory. Repeated accesses are thus avoided and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
  • the processor 110 may execute the screen projection method provided by the embodiments of the present application, so as to enrich the screen projection function, improve the flexibility of the screen projection, and improve the user experience.
  • the processor 110 may include different devices. For example, when a CPU and a GPU are integrated, the CPU and the GPU may cooperate to execute the screen projection method provided by the embodiments of the present application. For example, some algorithms in the screen projection method are executed by the CPU, and another part of the algorithms is executed by the GPU, so as to obtain faster processing efficiency.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), or the like.
  • the handset 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the display screen 194 may be used to display information entered by or provided to the user as well as various graphical user interfaces (GUIs).
  • the display 194 may display photos, videos, web pages, or documents, or the like.
  • display 194 may display a graphical user interface.
  • the graphical user interface includes a status bar, a hideable navigation bar, a time and weather widget (widget), and an application icon, such as a browser icon.
  • the status bar includes the operator name (e.g., China Mobile), the mobile network (e.g., 4G), the time, and the remaining battery level.
  • the navigation bar includes a back button icon, a home button icon, and a forward button icon.
  • the status bar may further include a Bluetooth icon, a Wi-Fi icon, an external device icon, and the like.
  • the graphical user interface may further include a Dock bar, and the Dock bar may include commonly used application icons and the like.
  • the display screen 194 may be an integrated flexible display screen, or a spliced display screen composed of two rigid screens and a flexible screen located between the two rigid screens.
  • after the processor 110 runs the screen projection method provided by the embodiments of the present application, the processor 110 may control an external audio output device to switch the output audio signal.
  • Camera 193 (front camera or rear camera, or one camera can be both front camera and rear camera) is used to capture still images or video.
  • the camera 193 may include a photosensitive element such as a lens group and an image sensor, wherein the lens group includes a plurality of lenses (convex or concave) for collecting light signals reflected by the object to be photographed, and transmitting the collected light signals to the image sensor .
  • the image sensor generates an original image of the object to be photographed according to the light signal.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the mobile phone 100 by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area may store operating system, code of application programs (such as camera application, WeChat application, etc.), and the like.
  • the storage data area can store data created during the use of the mobile phone 100 (such as images, videos, etc. collected by the camera application) and the like.
  • the internal memory 121 may also store one or more computer programs 1210 corresponding to the screen projection method provided in this embodiment of the present application.
  • the one or more computer programs 1210 are stored in the aforementioned internal memory 121 and configured to be executed by the one or more processors 110, and the one or more computer programs 1210 include instructions that may be used to perform the steps in the corresponding embodiments of FIG. 1 to FIG. 6.
  • the computer program 1210 may include an account verification module 1211 and a priority comparison module 1212 .
  • the account verification module 1211 is used to authenticate the system authentication accounts of other delivery terminals in the local area network;
  • the priority comparison module 1212 can be used to compare the priority of the audio output request service and the priority of the current output service of the audio output device.
  • the state synchronization module 1213 can be used for synchronizing the device state of the audio output device currently connected to by the delivery end to other delivery ends, or for synchronizing the device state of the audio output device currently connected to by other devices to the local end.
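As a rough illustration of the priority comparison performed by module 1212, the short sketch below grants an audio output request only when it outranks the service the audio output device is currently playing. The AudioService type and its numeric priority field are assumptions made for this sketch; the embodiments do not specify how priorities are represented.

```kotlin
// Illustrative only: priority is assumed to be an integer where larger means more important.
data class AudioService(val name: String, val priority: Int)

/** Returns true when the requesting service may take over the audio output device. */
fun shouldPreempt(request: AudioService, current: AudioService?): Boolean {
    if (current == null) return true            // device idle: grant the request
    return request.priority > current.priority  // otherwise only a higher priority wins
}
```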
  • when the code of the screen projection method stored in the internal memory 121 is run by the processor 110, the processor 110 can control the delivery end to process the screen projection data.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the code of the screen projection method provided by the embodiment of the present application may also be stored in an external memory.
  • the processor 110 may run the code of the screen projection method stored in the external memory through the external memory interface 120, and the processor 110 may control the delivery end to perform screen projection data processing.
  • the function of the sensor module 180 is described below.
  • the gyro sensor 180A can be used to determine the movement posture of the mobile phone 100 .
  • in some embodiments, the angular velocity of the mobile phone 100 about three axes (i.e., the x, y, and z axes) may be determined by means of the gyro sensor 180A.
  • the gyro sensor 180A can be used to detect the current motion state of the mobile phone 100, such as shaking or stillness.
  • the gyro sensor 180A can be used to detect a folding or unfolding operation acting on the display screen 194 .
  • the gyroscope sensor 180A may report the detected folding operation or unfolding operation to the processor 110 as an event to determine the folding state or unfolding state of the display screen 194 .
  • the acceleration sensor 180B can detect the magnitude of the acceleration of the mobile phone 100 in various directions (generally three axes). That is, the gyro sensor 180A can be used to detect the current motion state of the mobile phone 100, such as shaking or stillness. When the display screen in the embodiment of the present application is a foldable screen, the acceleration sensor 180B can be used to detect a folding or unfolding operation acting on the display screen 194 . The acceleration sensor 180B may report the detected folding operation or unfolding operation to the processor 110 as an event to determine the folding state or unfolding state of the display screen 194 .
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the mobile phone emits infrared light outward through light-emitting diodes.
  • the mobile phone uses a photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the mobile phone. When insufficient reflected light is detected, the mobile phone can determine that there is no object near the mobile phone.
  • the proximity light sensor 180G can be arranged on the first screen of the foldable display screen 194, and the proximity light sensor 180G can detect the magnitude of the folding angle or unfolding angle between the first screen and the second screen according to the optical path difference of the infrared signal.
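A minimal sketch of the reflected-light check described above follows; the intensity threshold is an assumed calibration constant added for illustration, since the embodiments only speak of "sufficient" reflected light.

```kotlin
// Sufficient reflected infrared light is taken to mean an object is near the phone.
fun isObjectNearby(reflectedIrIntensity: Float, threshold: Float = 0.5f): Boolean =
    reflectedIrIntensity >= threshold
```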
  • the gyro sensor 180A (or the acceleration sensor 180B) may send the detected motion state information (such as angular velocity) to the processor 110 .
  • the processor 110 determines, based on the motion state information, whether the current state is the hand-held state or the tripod state (for example, when the angular velocity is not 0, it means that the mobile phone 100 is in the hand-held state).
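The hand-held/tripod decision just described can be sketched as below. The epsilon tolerance is an assumption added for illustration; the embodiment itself only says that a non-zero angular velocity indicates the hand-held state.

```kotlin
enum class HoldState { HAND_HELD, TRIPOD }

// angularVelocity holds the rotation rates about the x, y and z axes reported by sensor 180A.
fun classifyHoldState(angularVelocity: FloatArray, epsilon: Float = 0.01f): HoldState {
    val moving = angularVelocity.any { kotlin.math.abs(it) > epsilon }
    return if (moving) HoldState.HAND_HELD else HoldState.TRIPOD
}
```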
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the mobile phone 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking photos with fingerprints, answering incoming calls with fingerprints, and the like.
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the mobile phone 100 , which is different from the position where the display screen 194 is located.
  • the display screen 194 of the mobile phone 100 displays a main interface, and the main interface includes icons of multiple applications (such as a camera application, a WeChat application, etc.).
  • Display screen 194 displays an interface of a camera application, such as a viewfinder interface.
  • the wireless communication function of the mobile phone 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in handset 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the mobile phone 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the mobile communication module 150 may also be used to exchange information with other delivery terminals, that is, to send screen projection-related data to other delivery terminals; or the mobile communication module 150 may be used to receive a screen projection request and encapsulate the received screen projection request into a message in a specified format.
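As one way of picturing the request handling just mentioned, the sketch below wraps an incoming screen projection request into a simple serialized message. The field names and the key=value layout are assumptions for illustration only; the embodiments leave the "specified format" undefined.

```kotlin
// Hypothetical request wrapper; none of these fields or the layout come from the patent.
data class ScreenCastRequest(val sourceDeviceId: String, val dataUrl: String?, val isMedia: Boolean)

fun encapsulate(request: ScreenCastRequest): String =
    listOf(
        "src=${request.sourceDeviceId}",
        "url=${request.dataUrl ?: ""}",
        "media=${request.isMedia}"
    ).joinToString(";")
```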
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide applications on the mobile phone 100 including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (infrared, IR) and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the wireless communication module 160 may be used to access an access point device, and to send and receive messages to other delivery terminals.
  • the mobile phone 100 may implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
  • the mobile phone 100 can receive the key 190 input and generate the key signal input related to the user setting and function control of the mobile phone 100 .
  • the mobile phone 100 can use the motor 191 to generate vibration alerts (eg, vibration alerts for incoming calls).
  • the indicator 192 in the mobile phone 100 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 in the mobile phone 100 is used to connect the SIM card.
  • the SIM card can be brought into contact with and separated from the mobile phone 100 by inserting it into the SIM card interface 195 or pulling it out of the SIM card interface 195.
  • the mobile phone 100 may include more or less components than those shown in FIG. 8 , which are not limited in this embodiment of the present application.
  • the illustrated handset 100 is only an example, and the handset 100 may have more or fewer components than those shown, two or more components may be combined, or may have different component configurations.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • the embodiments of the present application provide a computer program product; when the computer program product runs on the delivery terminal, the delivery terminal is enabled to implement the steps in each of the above method embodiments.
  • An embodiment of the present application further provides a chip system, the chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory, so as to implement the steps in the foregoing method embodiments .
  • the integrated modules/units if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program.
  • the computer program may be stored in a computer-readable storage medium, and when the computer program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • the computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

本申请提供了投屏方法、装置及投送端,适用于投屏技术领域,该方法包括:若投屏功能被启动,投送端确定待投送数据。若待投送数据为媒体数据,则投送端获取自身对待投送数据的第一数据权限,以及接收端对待投送数据的第二数据权限。若第一数据权限高于第二数据权限,则投送端通过屏幕镜像的方式,将待投送数据投送至接收端。若第一数据权限低于第二数据权限,则投送端通过数字生活网络联盟的方式,将待投送数据投送至接收端。通过本申请实施例,可以实现对投屏方式的自动选取,并始终为用户提供对待投送数据的较高数据权限进行媒体数据播放。因此实际投屏过程中可以达到更好的投屏效果,提升用户体验。

Description

投屏方法、装置及投送端
本申请要求于2020年08月28日提交国家知识产权局、申请号为202010892847.0、申请名称为“投屏方法、装置及投送端”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请属于投屏技术领域,尤其涉及投屏方法、装置及投送端。
背景技术
随着科技的进步,用户拥有的终端设备数量日益增多。终端设备之间的投屏分享,已经成为了用户的一种日常需求。
投屏系统中的终端设备包括投送端和接收端。常见的投屏方式包括屏幕镜像(Miracast)和数字生活网络联盟(Digital Live Network Alliance,DLNA)。其中,屏幕镜像是指投送端将自身整个屏幕的内容镜像投送到对应的接收端。而DLNA则是一种投屏解决方案。基于一套电脑、移动终端和消费电器之间互联互通的协议,DLNA可以让投送端将媒体数据投送至接收端,由接收端进行播放进而实现投屏。其中,媒体数据包括音频、视频和图片等。
实际应用中,用户可以自行选择使用屏幕镜像或DLNA的方式来实现投屏。然而实践发现,无论是屏幕镜像还是DLNA,都经常会出现投屏后媒体数据无法正常播放或者播放质量较差的问题,进而导致最终的投屏效果较差,无法满足用户的实际需求。
发明内容
有鉴于此,本申请实施例提供了投屏方法、装置及投送端,可以解决现有技术中投屏效果较差的问题。
本申请实施例的第一方面提供了一种投屏方法,应用于投送端,包括:
若投屏功能被启动,投送端确定待投送数据。
若待投送数据为媒体数据,则投送端获取自身对待投送数据的第一数据权限,以及接收端对待投送数据的第二数据权限。
若第一数据权限高于第二数据权限,则投送端通过屏幕镜像的方式,将待投送数据投送至接收端。
若第一数据权限低于第二数据权限,则投送端通过数字生活网络联盟的方式,将待投送数据投送至接收端。
在本申请实施例中,针对待投送数据是媒体数据的情况,会比较投送端和接收端对待投送数据的数据权限。若投送端权限更高,则采用屏幕镜像的方式进行待投送数据的投屏。此时可以充分使用投送端较高的数据权限来对待投送数据进行播放操作。而当接收端数据权限较高时,则采用DLNA的方式进行待投送数据的投屏。此时则可以充分使用接收端较高的数据权限来对待投送数据进行播放操作。通过本申请实施例, 可以实现对投屏方式的自动选取,并始终为用户提供对待投送数据的较高数据权限。因此实际投屏过程中,可以使用较高的数据权限进行待投送数据的播放,使得出现因数据权限导致待投送数据无法正常播放的可能性大大降低。最终呈现给用户的更为流畅的投屏效果。
在第一方面的第一种可能的实现方式中,还包括:
若待投送数据不为媒体数据,则投送端通过屏幕镜像的方式,将待投送数据投送至接收端。
当用户需要进行游戏或桌面等界面投屏,或者需要进行文档等投屏时。本申请实施例会自动选用屏幕镜像的方式,对游戏、桌面或文档等界面进行屏幕录制,并将录制的截屏数据以视频流等方式发送给接收端,以实现对非媒体数据的自适应投屏。
在第一方面的第二种可能的实现方式中,还包括:
若第一数据权限与第二数据权限相同,则投送端通过数字生活网络联盟的方式,将待投送数据投送至接收端。
由于DLNA采用的是推送待投送数据URL的方式实现投屏。因此理论上投送端自身可以不用播放待投送数据。且用户可以将投屏功能放在后台运行,并正常使用投屏功能以外的其他功能。另外DLNA的方式下,接收端可以实现对待投送数据的播放操作。因此用户可以在接收端观看待投送数据时,直接操作接收端,使得投屏效果更佳。最后,DLNA方式投屏时,可以投送端可以不保持亮屏,因此更加节能省电,减少资源浪费。
在第一方面的第一种和第二种可能实现方式的基础上,在第一方面的第三种可能的实现方式中,还包括:
若第一数据权限与第二数据权限相同,则投送端获取自身对待投送数据的第一解码质量,以及接收端对待投送数据的第二解码质量。
若第一解码质量高于第二解码质量,则投送端通过屏幕镜像的方式,将待投送数据投送至接收端。
若第一解码质量低于第二解码质量,则投送端通过数字生活网络联盟的方式,将待投送数据投送至接收端。
在本申请实施例中,通过先比较投送端和接收端对待投送数据的数据权限。在数据权限相同的情况下,再比较两者对待投送数据的解码能力。若投送端解码能力更强,则采用屏幕镜像的方式进行投屏。此时可以充分利用投送端较强的解码能力来进行待投送数据的解码播放。而在接收端解码能力更强时,则选用DLNA的方式来进行投屏,此时可以充分利用接收端较强的解码能力来进行待投送数据的解码播放。通过本申请实施例,可以实现在数据权限相同的情况下对投放方式的自动选取,并始终为用户提供对待投屏数据较强的解码能力。因此在实际投屏过程中,用户可以看到在较强解码能力下对待投送数据的播放效果。防止了低解码能力对待投送数据解码不流畅甚至出错的情况。使得整个投屏的效果更为清晰流畅。因此可以实现更好的投屏效果,提升用户体验。
在第一方面的第三种可能实现方式的基础上,在第一方面的第四种可能的实现方式中,还包括:
若第一解码质量与第二解码质量相同,则投送端通过数字生活网络联盟的方式,将待投送数据投送至接收端。
当投送端和接收端解码能力相同时,理论上采用投送端和接收端播放待投送数据的显示基本相同。但屏幕镜像和DLNA对于用户实际投屏过程中的操作体验可能会有较大差异,因此为了提升整体投屏的效果,方便用的操作。本申请实施例会采用DLNA的方式来进行投屏,使得投屏效果更佳。
在第一方面的第一种和第二种可能实现方式的基础上,在第一方面的第五种可能的实现方式中,获取第一数据权限的操作,包括:
投送端从已安装的应用程序中确定出可以播放待投送数据的第一应用程序。
获取第一应用程序中的用户账号,并根据用户账号确定第一数据权限。
在本申请实施例中,通过投送端中应用程序内的用户账号。确定投送端对待投送数据的数据权限。使得本申请实施例可以明确出投送端是否有具有播放待投送数据的用户账号。
在第一方面的第一种和第二种可能实现方式的基础上,在第一方面的第六种可能的实现方式中,获取第二数据权限的操作,包括:
投送端向接收端发送待投送数据的第一信息。
投送端接收接收端针对第一信息返回的第二数据权限。
为了获取接收端对待投送数据的数据权限,本申请实施例中,投送端会将待投送数据的相关信息(即第一信息)发送至接收端。由接收端根据相关信息自行确定对待投送数据的数据权限。再反馈给投送端,从而实现对第二数据权限的有效获取。
本申请实施例的第二方面提供了一种投屏装置,包括:
数据确定模块,用于在投屏功能被启动时,确定待投送数据。
权限获取模块,用于在待投送数据为媒体数据时,获取投送端对待投送数据的第一数据权限,以及接收端对待投送数据的第二数据权限。
镜像投屏模块,用于在第一数据权限高于第二数据权限时,通过屏幕镜像的方式,将待投送数据投送至接收端。
数字投屏模块,用于在第一数据权限低于第二数据权限时,通过数字生活网络联盟的方式,将待投送数据投送至接收端。
本申请实施例的第三方面提供了一种投送端,投送端包括存储器、处理器,所述存储器上存储有可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时,使得投送端实现如上述第一方面中任一项所述投屏方法的步骤。
本申请实施例的第四方面提供了一种计算机可读存储介质,包括:存储有计算机程序,所述计算机程序被处理器执行时,使得投送端实现如上述第一方面中任一项所述投屏方法的步骤。
本申请实施例的第五方面提供了一种计算机程序产品,当计算机程序产品在投送端上运行时,使得投送端执行上述第一方面中任一项所述投屏方法。
本申请实施例的第六方面提供了一种芯片系统,所述芯片系统包括处理器,所述处理器与存储器耦合,所述处理器执行存储器中存储的计算机程序,以实现上述第一方面任一项所述的投屏方法。
其中,芯片系统可以是单个芯片或者,多个芯片组成的芯片模组。
可以理解的是,上述第二方面至第六方面的有益效果可以参见上述第一方面中的相关描述,在此不再赘述。
附图说明
图1是本申请一实施例提供的投屏方法的流程示意图;
图2是本申请一实施例提供的投屏方法的流程示意图;
图3是本申请一实施例提供的投屏方法的流程示意图;
图4是本申请一实施例提供的投屏方法的流程示意图;
图5是本申请一实施例提供的投屏方法的流程示意图;
图6是本申请一实施例提供的投屏方法的流程示意图;
图7是本申请实施例提供的投屏装置的结构示意图;
图8是本申请一实施例提供的投屏方法所适用于的手机的结构示意图。
具体实施方式
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的系统、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。
为了便于理解本申请,此处先对本申请实施例进行简要说明:
投屏系统中的终端设备包括投送端和接收端。待投送数据(即需要进行投屏的数据)可以分为媒体数据和非媒体数据两种类型。其中媒体数据包括音频、视频和图片等。非媒体数据,则包括媒体数据以外的所有类型数据,例如界面和文档等。常见的投屏方式包括屏幕镜像和DLNA。
其中,屏幕镜像是指投送端对自身屏幕显示的内容进行截屏录制,并将录制的截屏数据同步发送至接收端,由接收端进行播放以完成投屏。
DLNA是一种投屏解决方案,旨在解决电脑、消费电器(如电视)和移动终端在内的无线网络和有线网络的互联互通,使得数字媒体和内容服务的无限制的共享和增长成为可能。DLNA内包含多种电脑、移动终端和消费电器之间互联互通的协议,通过遵守并使用这些协议,可以将投送端的媒体数据以数据地址(Uniform Resource Locator,URL)的形式推送到接收端,由接收端根据接收到的地址进行播放,从而实现媒体数据的投屏。在使用DLNA投屏时,投送端自身可以退出播放界面,并进行其他操作。
对屏幕镜像和DLAN进行比较发现:
一方面,在屏幕镜像的方式中,接收端只需播放接收到的截屏数据即可,因此对接收端对媒体数据的解码能力要求较低。但相应的,屏幕镜像需要投送端具有较强的媒体数据解码能力,使得投送端可以实现对媒体数据的解码播放,以及对非媒体数据的显示。而DLNA中,由于是由接收端通过URL获取媒体数据并进行播放,因此接收端需要具有一定的解码能力。此时对投送端的解码能力要求较低。
另一方面,不同终端设备对待投送数据的数据权限可能会存在一定的差异。其中,数据权限是指终端设备对媒体数据的播放权限。数据权限包括终端设备是否具有完整 播放待投送数据的权限,以及若待投送数据被加密,终端设备是否具有解密权限等。数据权限会决定着终端设备是否可以正常播放待投送数据。在此基础上,在投送端对待投送数据的数据权限高于接收端时,若选用DLNA的方式进行待投送数据的投屏。会导致用户只能以低权限的方式播放待投送数据。反之,在投送端对待投送数据的数据权限低于接收端时,若选用屏幕镜像的方式进行待投送数据的投屏,用户也只能以低权限的方式播放待投送数据。例如假设待投送数据为网络视频,投送端为手机,接收端为电脑。其中若手机内具有视频平台(用于播放该网络视频)的贵宾(Very Important Person,VIP)账号,电脑内具有视频平台的普通账号。假设VIP账号可以完整播放该网络视频,而普通账号仅能播放前30秒。此时若采用DLNA的方式进行网络视频的投屏,会导致用户在接收端仅能播放前30秒的网络视频。反之,若手机内具有普通账号,而电脑内具有VIP账号。此时若采用屏幕镜像的方式进行投屏,则用户在接收端仅能看到前30秒的网络视频。
由上述对屏幕镜像和DLNA的比较分析可知,投送端和接收端的数据权限及解码能力都会影响最终对待投送数据的投屏效果,即对最终待投送数据在接收端中是否可以正常播放、流畅度如何以及清晰度如何等造成影响。实际应用中,用户可以自行选择使用屏幕镜像或DLNA的方式来实现投屏。但无论何种方式,均仅能使用到投送端或接收端一端的数据权限和解码能力。如屏幕镜像使用的是投送端的数据权限和解码能力,而DLNA使用的是接收端的数据权限和解码能力。因此若用户选取的投屏方式不当,则会导致最终出现无法正常播放投屏数据,或者播放的音质、清晰度和流畅度较差的情况,即导致投屏效果较差,用户体验下降。
为了提升投屏效果,本申请实施例中,投送端在进行投屏时首先会识别待投送数据是否为媒体数据。当为媒体数据时,则会获取自身对待投送数据的数据权限以及接收端对待投送数据的数据权限,并进行权限比对。若投送端的数据权限更高,则采用屏幕镜像的方式进行投屏。此时可以充分利用投送端数据权限来播放待投送数据。而若接收端数据权限更高,则采用DLNA的方式进行投屏。此时则可以充分利用接收端的数据权限来播放待投送数据。通过本申请实施例,可以实现对投屏方式的自动选取,并始终为用户提供对待投送数据的较高数据权限。因此实际投屏过程中,用户可以使用较高的数据权限进行待投送数据的播放,防止了由于低数据权限导致待投送数据无法正常播放的情况出现。因此本申请实施例可以达到更好的投屏效果,提升用户体验。
同时,对本申请实施例中可能涉及到的一些名词进行说明如下:
待投送数据:待投送数据是指需要进行投屏的数据。在本申请实施例中,将待投送数据分为媒体数据和非媒体数据两种类型。其中媒体数据包括音频、视频和图片等数据。而非媒体数据,则包括除媒体数据以外的所有数据,例如显示界面和文档等数据。待投送数据的实际类型,需根据实际应用场景确定。
投送端和接收端:在本申请实施例中,投送端是指投送待投送数据的终端设备。接收端则是指接收待投送数据并进行播放或显示的终端设备。在支持DLNA的基础上,本申请实施例不对投送端和接收端的设备类型进行过多限定,均可以是手机、电视、个人电脑、平板电脑或可穿戴设备等终端设备,具体可根据实际的应用场景确定。例如,当实际场景中是由手机向智能手表和电视进行投屏时,此时手机就是投送端,智 能手表和电视均为接收端。其中,投送端为各个本申请实施例提供的投屏方法的执行主体。
数据权限(包括第一数据权限和第二数据权限):出于为用户提供差异化的媒体服务,或者为了保障媒体数据的安全性等目的。实际应用中,经常会对不同终端设备设置不同的数据权限,以灵活控制不同终端设备对媒体数据的播放操作。例如一些视频平台中,会为用户提供普通账号和VIP账号,其中VIP账号具有完整播放VIP视频的权限,而普通账号仅能观看VIP视频的部分内容。用户在终端设备内观看视频平台内的VIP视频时,根据用户的账号不同,终端设备所具有的视频数据权限也会存在差异。又例如,对于一些安全级别较高的媒体数据而言,可能会进行加密处理,并会对不同的终端设备安全等级。仅在安全级别达到一定阈值时,终端设备才有权限进行解密和播放。或者不对媒体数据进行加密,但对不同的终端设备设置相应的安全级别。并仅在安全级别达到一定阈值时,终端设备才有权限进行访问和播放。因此对于单个媒体数据而言,即使终端设备拥有播放该媒体数据的软硬件配置。若没有相应的数据权限,理论上也难以正常播放媒体数据。在本申请实施例中,数据权限包含的具体权限内容,可由技术人员根据实际情况设定。例如可以仅包含“是否具有VIP账号”或者“终端设备的安全级别”。也可以同时包含“是否具有VIP账号”以及“终端设备的安全级别”。亦可以包含更多的其他权限,例如“是否可以对已加密的媒体数据进行解密”等。
应当说明地,数据权限可以与终端设备进行绑定,也可以与终端设备中的用户账号绑定,或者是与终端设备中应用程序的账号进行绑定。具体需根据实际的媒体数据情况确定。例如当媒体数据为离线媒体数据(如终端设备本地存储的媒体数据,或者对于接收端而言,接收到的投送端本地存储的媒体数据)时,此时可以与终端设备的物理地址进行绑定,或者与终端设备登录的用户账号绑定。而当媒体数据为在线媒体数据(如一些视频平台提供的网络视频,此时需要使用特定的应用程序进行在线媒体数据的访问和播放。例如使用视频平台的客户端或者利用浏览器等进行在线媒体数据的访问和播放)时,则可以与终端设备内特定应用程序中登录的账号进行绑定。
解码能力:随着科技的进步,用户对媒体数据的质量要求越来越高,导致市面上出现越来越多的高质量媒体数据。例如无损音乐、4K电影和8k图片,其中4k是指分辨率为3840x2160,8k是指分辨率为7680x4320。而为了实现对这些高质量媒体数据的播放,需要终端设备具有相应的解码能力,即将数据还原成可播放的音频、视频或图片的能力。
实际应用中,不同终端设备对媒体数据的解码能力会存在一定的差异。当终端设备的解码能力弱于所需解码的数据时,极有可能会出现解码失败无法播放,或者虽然可以解码但音质、清晰度和流畅度等指标下降的情况。例如当终端设备视频解码能力较弱时,如支持1080P视频解码。若需要对4k电影进行解码播放。可能会出现播放卡顿,或者只有声音没有图像,甚至完全无法播放视频的情况。同理,若终端设备对音频的解码能力较弱,此时对高质量音频进行解密时,亦有可能出现音频播放卡顿甚至无法播放的情况。由此可知,投送端和接收端对媒体数据的解码能力如何,会影响最终投屏时媒体数据播放的音质、清晰度和流畅度等指标,进而对最终投屏的质量造成一定的影响。因此,本申请的一些实施例中,会比较投送端和接收端对待投送数据的 解码能力,以协助投屏方式的自动选择。
为了说明本申请所述的技术方案,通过具体实施例来进行说明。
图1示出了本申请实施例一提供的投屏方法的实现流程图,详述如下:
S101,若投屏功能被启动,投送端确定待投送数据,并识别待投送数据是否为媒体数据。
在本申请实施例中,投送端之中具有投屏功能。该功能可以是投送端软件系统内置的功能,也可以是投送端内安装的应用程序的功能。具体可根据实际场景确定。同时,本申请实施例亦不对投屏功能的启动方式进行过多限定,亦可根据实际场景确定。一方面,可以是用户通过对投送端进行操作,启动投屏功能。也可以是以其他设备向投送端发送启动指令的方式,来远程启动投送端的投屏功能。例如,当投屏功能是投送端软件系统内置的功能时,可以将投屏功能的设置于软件系统的系统设置功能之中。用户在使用时,可以在系统设置之中进行操作启动投屏功能。或者也可以通过桌面图标、悬浮窗或者下拉通知栏等方式,为投屏功能提供快捷启动方式。用户在使用时,可以通过点击对应的图标或区域实现对投屏功能的快捷启动。当投屏功能时应用程序内的功能时,可由应用程序开发方根据实际需求,对投屏功能的启动方式进行设定。例如对于视频平台而言,可以在视频播放界面设置一个投屏图标。用户通过点击该图标,实现对投屏功能的启动。
在投屏功能被启动后,投送端首先会确定此次需要进行投屏的数据(即待投送数据)。根据实际对投屏功能设置的不同,对待投送数据的确定方式亦可以存在一定的差异。具体可根据实际场景情况确定,此处不做过多限定。例如,可以设置为在启动投屏功能的过程中,需要先选定待投送数据。如选定一个视频、音频或者图片,亦或者选择投屏界面或某个文档等。在选定待投送数据之后,才开启投屏功能。此时,若投屏功能被启动,即可根据选定的情况来确定待投送数据。又例如,针对视频平台等应用程序,可以在对媒体数据的播放界面设置投屏图标,并将当前界面播放的媒体数据设置为对应的待投送数据。此时若投屏功能被开启,将当前界面播放的媒体数据作为待投送数据即可。
在确定出待投送数据之后,本申请实施例会识别待投送数据是否为媒体数据。即识别待投送数据是否为音频、视频或图片。若是其中任意一种数据,则可判定为待投送数据为媒体数据。反之,若不是音频、视频或图片,则可判定为不是媒体数据(即是非媒体数据)。
S102,若待投送数据为媒体数据,则投送端获取自身对待投送数据的第一数据权限,以及接收端对待投送数据的第二数据权限,并比较第一数据权限和第二数据权限的权限高低。
当待投送数据是媒体数据时,为了可以实现对媒体数据最大权限的播放,以使得投屏效果较佳。在本申请实施例中会获取投送端和接收端对待投送数据的数据权限(即第一数据权限和第二数据权限)。其中,投送端可以通过读取自身对待投送数据的数据权限的方式,实现对第一数据权限的获取。例如,在一些可选实施例中,当以“是否具有VIP账号”来判断数据权限时。可以先从投送端已安装的应用程序中,确定出可播放待投送数据的应用程序(即第一应用程序),如某个视频播放器。再获取该应用程序中 的用户账号,并根据该用户账号确定投送端是否有播放待投送数据的数据权限。而针对接收端对待投送数据的第二数据权限,则需投送端向接收端请求相应的数据。
为了实现对第二数据权限的请求,在本申请实施例中,可由投送端向接收端发送待投送数据的相关信息(即第一信息)。并由接收端在接收到该相关信息之后,读取自身对待投送数据的数据权限(即第二数据权限)发送给投送端。该相关信息可以是待投送数据自身的数据属性数据,如数据类型、数据大小以及分辨率等。也可以是待投送数据相关的播放信息。例如当待投送数据对终端设备有安全级别要求,仅终端设备安全级别达到某一预设级别以上才能播放时。此时播放信息可以是安全级别要求。又例如,当待投送数据为视频平台的在线视频时,播放信息可以是该视频平台的视频平台信息,如可以是视频平台的名称或者视频平台应用程序的唯一标识等。以使得接收端可以唯一确定出视频平台,并确定是否具有对应的视频平台VIP账号,或者是否有对应的点播权限等。
相应的,参考图2,对第二数据权限的请求操作可以如下:
S201,投送端向接收端发送待投送数据的第一信息。
S202,接收端在接收到第一信息后,根据第一信息获取自身对待投送数据的第二数据权限,并将第二数据权限发送至投送端。
此时对投送端而言,获取接收端对待投送数据的第二数据权限,可以替换为:发送待投送数据的第一信息至接收端,并接收由接收端针对第一信息返回的第二数据权限。
为了获取接收端对待投送数据的数据权限,本申请实施例中,投送端会将待投送数据的相关信息(即第一信息)发送至接收端。例如,当待投送数据为视频平台的在线视频,且数据权限中仅包含“是否具有VIP账号”时。相关信息可以是播放待投送数据的视频平台信息。接收端在接收到视频平台信息后,读取自身在视频平台中的账号情况(可以通过启动视频平台应用程序等方式实现账号获取),并返回是否是VIP账号的结果给投送端。或者亦可以是在线视频的URL。接收端在接收到URL的时候,根据URL确定对应的视频平台或视频平台应用程序,再获取自身在视频平台中的账号情况,并返回是否是VIP账号的结果给投送端。
又例如,当待投送数据为投送端内的本地媒体数据时(如本地音频、视频或图片),假设该本地媒体数据是机密性较高的数据,要求安全级别较高的终端设备才能进行播放。即数据权限中包含“终端设备的安全级别”。此时相关信息可以包含该本地媒体数据的安全级别要求。例如假设要求为:二级及以上。接收端在接收到安全级别要求之后,将自身的安全级别发送给投送端。或者接收端自行判断自身安全级别是否满足安全级别要求,并返回判断结果至投送端。
应当说明地,根据投屏功能在终端设备(包括投送端和接收端)中存在形式的不同,获取数据权限的方式亦可存在一定的差异。例如在一些可选实施例中,当投屏功能是存在于软件系统中时,则可以读取终端设备软件系统及硬件组件对待投送数据的数据权限。例如软件系统和硬件组件的安全级别。而当投屏功能时存在于应用程序中时,则可以根据需要,获取终端设备软件系统和硬件组件对待投送数据的数据权限,以及应用程序对待投送数据的数据权限中的任意一个或两个数据权限。比如可以获取 两个数据权限,并将获取到的两个数据权限合并,确定为终端设备对待投送数据最终的数据权限。例如,当待投送数据被加密,要求获取终端设备对待投送数据的解密权限时。终端设备可以获取自身软件系统和硬件组件对待投送数据的解密权限,以及自身已安装的应用程序对待投送数据的数据权限,并进行合并。若均为无法解密,则判定为终端设备自身无解密权限。若其中存在一个或两个数据权限为可以解密,则判定为终端设备自身具有解密权限。
在获取到投送端和接收端各自对待投送数据的数据权限之后,本申请实施例会比较两个数据权限的高低,并确定出其中数据权限较高的一端。其中,本申请实施例不对数据权限高低的比较方法进行过多限定,可由技术人员自行设定。例如,在一些可选实施例中,当数据权限中仅包含一项内容时,可以直接比较该项内容的高低。如当数据权限中仅包含“是否具有VIP账号”时,可以直接比较投送端和接收端的VIP账号情况。若均具有VIP账号或者均不具有VIP账号,均可以判定为数据权限相同。若一端具有VIP账号,但另一端不具有,则可以判定为具有VIP账号的一端数据权限较高。而在另一些可选实施例中,当数据权限中包含多项内容时,则可以为不同内容设置权重系数。再逐一比较各项内容之后,再根据权重系数来确定最终数据权限的高低。其中,当各项内容的权重系数相同时,则相当于采用投票法来进行数据权限比较。
S103,若第一数据权限高于第二数据权限,则通过屏幕镜像的方式,将待投送数据投送至接收端。
当第一数据权限高于第二数据权限时,说明投送端的数据权限较高。因此此时本申请实施例会选用屏幕镜像的方式来进行投屏。即由投送端根据自身的数据权限播放待投送数据,例如使用VIP账号来播放在线视频,或者先对已加密的待投送数据进行解密再进行播放。同时对播放待投送数据时的屏幕界面进行截屏录制。再将录制的截屏数据以视频流等方式发送给接收端。相应的,接收端可以通过播放接收到的截屏数据的方式,实现对待投送数据的投屏播放。此时,用户可以在接收端观看待投送数据,并可以在投送端控制对待投送数据的播放操作。例如控制视频播放进度、音频音量或者图片缩放比例等。本申请实施例不对屏幕镜像的操作细节进行过多的限定,可由技术人员根据需求设定。
S104,若第一数据权限低于第二数据权限,则通过DLNA的方式,将待投送数据投送至接收端。
当第二数据权限高于第一数据权限时,说明接收端的数据权限较高。因此此时本申请实施例会选用DLNA的方式来进行投屏。即由投送端将待投送数据的URL发送至接收端。由接收端根据该URL获取待投送数据,并根据自身的数据权限播放待投送数据,例如使用VIP账号来播放在线视频,或者先对已加密的待投送数据进行解密再进行播放。此时,用户可以在接收端观看待投送数据,并可以在接收端端控制对待投送数据的播放操作。例如控制视频播放进度、音频音量或者图片缩放比例等。本申请实施例不对DLNA的操作细节进行过多的限定,可由技术人员根据需求设定。
在本申请实施例中,针对待投送数据是媒体数据的情况,会比较投送端和接收端对待投送数据的数据权限。若投送端权限更高,则采用屏幕镜像的方式进行待投送数据的投屏。此时可以充分使用投送端较高的数据权限来对待投送数据进行播放操作。 而当接收端数据权限较高时,则采用DLNA的方式进行待投送数据的投屏。此时则可以充分使用接收端较高的数据权限来对待投送数据进行播放操作。通过本申请实施例,可以实现对投屏方式的自动选取,并始终为用户提供对待投送数据的较高数据权限。因此实际投屏过程中,可以使用较高的数据权限进行待投送数据的播放,使得出现因数据权限导致待投送数据无法正常播放的可能性大大降低。最终呈现给用户的更为流畅的投屏效果。
作为本申请的一个可选实施例,实际应用中投送端和接收端对待投送数据的数据权限也可能会相同。即S102的结果可能是第一数据权限与第二数据权限相同。此时无论选取哪一端进行待投送数据的播放,理论上权限方面对播放的影响均一样。在此基础上,本申请实施例在S102之后,可以采用屏幕镜像或者DLNA的方式实现待投送数据的投屏。
考虑到实际投屏的应用场景中,若采用屏幕镜像的方式进行投屏,会导致用户需在投送端进行播放操作,且需保持投送端中对待投送数据的播放界面。此时会存在以下几个问题:
1、会导致用户难以正常使用投送端投屏以外的其他功能。例如当投送端为手机,待投送数据为视频时。屏幕镜像会要求用户在手机中保持对视频的播放界面。此时若用户退出播放界面使用其他功能,如使用电话或短信功能。会导致无法正常对视频进行投屏。
2、投送端和接收端可能相距较远,此时用户在空间上的不方便操作投送端。例如当利用卧室的台式电脑对客厅的电视进行投屏时,若用户需要对待投送数据进行暂停或快进等操作,则需要跑到卧室内去操作。因此十分不便。
3、屏幕镜像一般要求投送端的屏幕持续亮屏,此时会导致投送端功耗较高,造成资源浪费。
为了解决上述几个问题,提升投屏的效果以及用户的体验,作为本申请的一个可选实施例,参考图3,在S102之后,还包括:
S105,若第一数据权限与第二数据权限相同,则通过DLNA的方式,将待投送数据投送至接收端。
由于DLNA采用的是推送待投送数据URL的方式实现投屏。因此理论上投送端自身可以不用播放待投送数据。且用户可以将投屏功能放在后台运行,并正常使用投屏功能以外的其他功能。另外DLNA的方式下,接收端可以实现对待投送数据的播放操作。例如音频和视频的暂停、快进和音量调节,以及图片的放大缩小等。因此用户可以在接收端观看待投送数据时,直接操作接收端,而无需再跑到投送端处进行操作。最后,DLNA方式投屏时,可以投送端可以不保持亮屏,因此更加节能省电,减少资源浪费。基于这些理由,本申请实施例会在确定出投送端和接收端的数据权限相同时,会采用DLNA的方式来进行待投送数据的投屏。此时对用户而言,投屏的效果更佳。
相应的,在图1所示实施例的基础上,本申请实施例对应的投屏方法决策表格可以如下表1:
表1
Figure PCTCN2021112885-appb-000001
表1中,会将投送端和接收端的数据权限比较结果分为4种:投送端数据权限较高、投送端和接收端数据权限均较高且相同、接收端数据权限较高以及投送端和接收端数据权限均较低且相同,并设置了相应的投屏方式。实际应用中,可以根据比较结果的情况来确定投屏方式,实现对投屏方式的自动决策。
在本申请实施例中,S104和S105可以合并为:若第一数据权限低于或等于第二数据权限,则通过DLNA的方式,将待投送数据投送至接收端。
作为本申请的另一个可选实施例,实际应用中投送端和接收端对待投送数据的数据权限可能会相同。即S102的结果可能是第一数据权限与第二数据权限相同。此时无论选取哪一端进行待投送数据的播放,理论上权限方面对播放的影响均一样。实际应用中发现,除数据权限外,投送端对媒体数据的解码能力,也会极大地影响对媒体数据的播放效果。如是否卡顿以及清晰度如何等。在投屏场景之中,即会影响最终对媒体数据的投屏效果。因此在两端数据权限相同的情况下,为了使得最终可以实现对待投送数据较好的解码播放,本申请实施例会继续比较投送端和接收端对待投送数据的解码能力。参考图4,在S102之后,还包括:
S106,若第一数据权限与第二数据权限相同,则获取投送端对待投送数据的第一解码能力,以及接收端对待投送数据的第二解码能力,并比较第一解码能力和第二解码能力的高低。
实际应用中,解码分为硬件解码和软件解码。其中,软件解码是指利用CPU进行媒体数据解码,需要消耗CPU的运算资源。硬件解码是利用CPU以外的其他硬件实现对媒体数据的解码。例如使用GPU或者硬件解码器等实现媒体数据的解码。
为了可以更好地对待投送数据进行解码,使得最终可以呈现更好地投屏效果给用户。在本申请实施例中,会获取投送端和接收端两端对待投送数据的解码能力(即第一解码能力和第二解码能力),并会比较两者的高低。其中,第一解码能力可由投送端读取对待投送数据类型数据的硬件解码能力和软件解码能力得到。例如,当待投送数据的类型是视频时,投送端读取自身对视频支持的解码能力,如1080P和4k。对于第二解码能力,则需要接收端根据待投送数据的数据类型,来读取自身对待投送数据的硬件解码能力和软件解码能力,得到最终的解码能力并反馈该投送端。为了使得接收端获取到待投送数据的类型,可以有投送端将待投送数据的类型发送给接收端。其中,在与图2所示实施例进行结合应用时,若接收端可以通过第一信息确定出待投送数据的类型。例如第一信息中带有待投送数据的类型,或者第一信息为URL,接收端以通过URL确定出待投送数据的类型。则此时无需再发送待投送数据的类型给接收端。
在一些可选实施例中,考虑到硬件解码能力和软件解码能力有时难以都获取到。因此在投送端和接收端获取自身解码能力时,也可以仅获取硬件解码能力或者软件解 码能力。具体可由技术人员根据实际情况设定,此处不做限定。
S107,若第一解码能力高于第二解码能力,则对待投送数据进行解码,并将解码后的待投送数据通过屏幕镜像的方式投送至接收端。
当第一解码能力高于第二解码能力时,说明投送端具有对待投送数据更高的解码能力。例如假设待投送数据的类型是视频,同时假设投送端同时支持对视频的1080P解码播放和4k解码播放,而接收端仅支持对视频的1080P解码播放。此时投送端具有较高的4k解码能力。此时使用投送端进行待投送数据的解码播放,理论上其解码播放时的流畅度和清晰度等指标,会高于解码能力较弱的接收端。因此本申请实施例会选用屏幕镜像的方式进行待投送数据的投屏。即由投送端利用自身的解码能力对待投送数据进行解码播放,并在播放的同步进行屏幕录制和传输。具体的屏幕镜像投屏说明,可参考S103中的说明,此处不予赘述。
S108,若第一解码能力低于第二解码能力,则通过DLNA的方式,将待投送数据投送至接收端。
当第二解码能力高于第一解码能力时,说明接收端具有对待投送数据更高的解码能力。例如假设待投送数据的类型是视频,同时假设投送端仅支持对视频的1080P解码播放,但而接收端同时支持对视频的1080P解码播放和4k解码播放。此时接收端具有较高的4k解码能力。此时使用接收端进行待投送数据的解码播放,理论上其解码播放时的流畅度和清晰度等指标,会高于解码能力较弱的投送端。因此本申请实施例会选用DLNA的方式进行待投送数据的投屏。即由接收端端利用自身的解码能力对待投送数据进行解码播放。其中,具体的DLNA投屏说明,可参考S104中的说明,此处不予赘述。
作为本申请的一个可选实施例,为了实现对解码能力的有效量化和比较,参考图5,在本申请实施例中,S106可以被替换为:
S1061,若第一数据权限与第二数据权限相同,则获取投送端对待投送数据的第一解码质量,以及接收端对待投送数据的第二解码质量,并比较第一解码质量和第二解码质量的高低。
在本申请实施例中,解码质量(包括第一解码质量和第二解码质量),是指终端设备(包括投送端和接收端)对待投送数据类型的数据,以最高解码能力进行解码播放时支持的最高播放质量。是对解码能力的一种量化表征方式。以一实例进行说明,假设待投送数据的类型是视频,同时假设终端设备同时支持对视频的1080P解码播放和4k解码播放。此时若利用终端设备最高解码能力进行视频解码播放,理论上支持视频最高播放质量为4k。因此此时的终端设备解码质量即为4k。在本申请实施例中,接收端只需返回对待投送数据的解码质量(即第二解码质量)即可。
相应的,S107和S108可以被替换为:
S1071,若第一解码质量高于第二解码质量,则对待投送数据进行解码,并将解码后的待投送数据通过屏幕镜像的方式投送至接收端。
S1081,若第一解码质量低于第二解码质量,则通过DLNA的方式,将待投送数据投送至接收端。
作为本申请的一个可选实施例,考虑到投送端和接收端的解码能力也可能会相同。 此时,为了有更好的投屏效果,本申请实施例会优先采用DLNA的方式来进行投屏,即在S106之后还包括:
S109,若第一解码能力与第二解码能力相同,则采用通过DLNA的方式,将待投送数据投送至接收端。
当第一解码能力与第二解码能力相同时,理论上采用投送端和接收端播放待投送数据的显示基本相同。但屏幕镜像和DLNA对于用户实际投屏过程中的操作体验可能会有较大差异,因此为了提升整体投屏的效果,方便用的操作。本申请实施例会采用DLNA的方式来进行投屏。其中,具体的选取原因和有益效果等说明,可以参考图3所示实施例内容说明,此处不予赘述。
对应于图5所示实施例,此时S109可以被替换为:若第一解码质量与第二解码质量相同,则采用通过DLNA的方式,将待投送数据投送至接收端。
此时可以与S1081进行合并,得到:若第一解码质量低于第二解码质量,或者第一解码质量与第二解码质量相同,则通过DLNA的方式,将待投送数据投送至接收端。
相应的,本申请实施例中,在投送端和接收端对待投送数据的数据权限相同的基础上,对应的投屏方法决策表格可以如下表2:
表2
Figure PCTCN2021112885-appb-000002
表2中,会将投送端和接收端的解码质量比较结果分为4种:投送端解码质量较高、投送端和接收端解码质量均较高且相同、接收端解码质量较高以及投送端和接收端解码质量均较低且相同,并设置了相应的投屏方式。实际应用中,可以根据比较结果的情况来确定投屏方式,实现对投屏方式的自动决策。
在本申请实施例中,通过先比较投送端和接收端对待投送数据的数据权限。在数据权限相同的情况下,再比较两者对待投送数据的解码能力。若投送端解码能力更强,则采用屏幕镜像的方式进行投屏。此时可以充分利用投送端较强的解码能力来进行待投送数据的解码播放。而在接收端解码能力更强时,则选用DLNA的方式来进行投屏,此时可以充分利用接收端较强的解码能力来进行待投送数据的解码播放。通过本申请实施例,可以实现在数据权限相同的情况下对投放方式的自动选取,并始终为用户提供对待投屏数据较强的解码能力。因此在实际投屏过程中,用户可以看到在较强解码能力下对待投送数据的播放效果。防止了低解码能力对待投送数据解码不流畅甚至出错的情况。使得整个投屏的效果更为清晰流畅。因此可以实现更好的投屏效果,提升用户体验。另外,通过先比较数据权限再比较解码能力的方式。可以首先保障对待投送数据的正常播放。再选用更加适宜的解码操作,使得整个投屏的过程效果更佳。因此,本申请实施例可以实现对投屏方式的自适应选取,实现更好的投屏效果。
作为本申请的一个可选实施例,考虑到实际应用中,待投送数据也可能是非媒体 数据。例如文档和游戏界面等。这些非媒体数据无法采用DLNA的方式进行,因此在本申请实施例中,会采用屏幕镜像的方式进行投屏。参考图6,本申请实施例包括:
S110,若待投送数据不为媒体数据,则通过屏幕镜像的方式,将待投送数据投送至接收端。
当用户需要进行游戏或桌面等界面投屏,或者需要进行文档等投屏时。本申请实施例会自动选用屏幕镜像的方式,对游戏、桌面或文档等界面进行屏幕录制,并将录制的截屏数据以视频流等方式发送给接收端,以实现投屏。其中,屏幕镜像的投屏方式说明,可参考S103中的说明,此处不予赘述。
对应于上文实施例的投屏方法,图7示出了本申请实施例提供的投屏装置的结构示意图,为了便于说明,仅示出了与本申请实施例相关的部分。
参照图7,该投屏装置包括:
数据确定模块71,用于在投屏功能被启动时,确定待投送数据。
权限获取模块72,用于在待投送数据为媒体数据时,获取投送端对待投送数据的第一数据权限,以及接收端对待投送数据的第二数据权限。
镜像投屏模块73,用于在第一数据权限高于第二数据权限时,通过屏幕镜像的方式,将待投送数据投送至接收端。
数字投屏模块74,用于在第一数据权限低于第二数据权限时,通过DLNA的方式,将待投送数据投送至接收端。
作为本申请的一个实施例,镜像投屏模块73,还用于:
在待投送数据不为媒体数据时,通过屏幕镜像的方式,将待投送数据投送至接收端。
作为本申请的一个实施例,数字投屏模块74,还用于:
在第一数据权限与第二数据权限相同时,通过DLNA的方式,将待投送数据投送至接收端。
作为本申请的一个实施例,该投屏装置,还包括:
解码能力获取模块,用于在第一数据权限与第二数据权限相同时,获取投送端对待投送数据的第一解码质量,以及接收端对待投送数据的第二解码质量。
镜像投屏模块73,还用于在第一解码质量高于第二解码质量时,通过屏幕镜像的方式,将待投送数据投送至接收端。
数字投屏模块74,用于在第一解码质量低于第二解码质量时,通过数字生活网络联盟的方式,将待投送数据投送至接收端。
作为本申请的一个实施例,数字投屏模块74,还用于:
在第一解码质量与第二解码质量相同时,通过DLNA的方式,将待投送数据投送至接收端。
作为本申请的一个实施例,权限获取模块72,包括:
程序确定模块,用于从投送端已安装的应用程序中确定出可以播放待投送数据的第一应用程序。
权限获取子模块,用于获取第一应用程序中的用户账号,并根据用户账号确定第一数据权限。
作为本申请的一个实施例,权限获取模块72,包括:
信息发送模块,用于向接收端发送待投送数据的第一信息。
权限接收模块,用于接收接收端针对第一信息返回的第二数据权限。
本申请实施例提供的投屏装置中各模块实现各自功能的过程,具体可参考前述图1至图6所示实施例以及其他相关方法实施例的描述,此处不再赘述。
需要说明的是,上述装置/单元之间的信息交互、执行过程等内容,由于与本申请方法实施例基于同一构思,其具体功能及带来的技术效果,具体可参见方法实施例部分,此处不再赘述。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
如在本申请说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。还应理解的是,虽然术语“第一”、“第二”等在文本中在一些本申请实施例中用来描述各种元素,但是这些元素不应该受到这些术语的限制。这些术语只是用来将一个元素与另一元素区分开。例如,第一表格可以被命名为第二表格,并且类似地,第二表格可以被命名为第一表格,而不背离各种所描述的实施例的范围。第一表格和第二表格都是表格,但是它们不是同一表格。
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
本申请实施例提供的投屏方法可以应用于手机、平板电脑、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等投送端上,本申请实施例对投送端的具体类型不作任何限制。
例如,所述投送端可以是WLAN中的站点(STAION,ST),可以是蜂窝电话、个人数字处理(Personal Digital Assistant,PDA)设备、具有无线通信功能的手持设备、计算设备或连接到无线调制解调器的其它处理设备、车载设备、车联网终端、电脑、膝上型计算机、手持式通信设备、手持式计算设备、电视顶盒(set top box,STB)、用户驻地设备(customer premise equipment,CPE)和/或用于在无线系统上进行通信的其它设备以及下一代通信系统,例如,5G网络中的终端设备或者未来演进的公共陆地移动网络(Public Land Mobile Network,PLMN)网络中的终端设备等。
作为示例而非限定,当所述投送端为可穿戴设备时,该可穿戴设备还可以是应用穿戴式技术对日常穿戴进行智能化设计、开发出可以穿戴的设备的总称,如眼镜、手套、手表、服饰及鞋等。可穿戴设备即直接穿在身上,或是整合到用户的衣服或配件的一种便携式设备。可穿戴设备不仅仅是一种硬件设备,更是通过软件支持以及数据交互、云端交互来实现强大的功能。广义穿戴式智能设备包括功能全、尺寸大、可不依赖智能手机实现完整或者部分的功能,如智能手表或智能眼镜等,以及只专注于某一类应用功能,需要和其它设备如智能手机配合使用,如各类进行体征监测的智能手环、智能首饰等。
下文以投送端是手机为例,图8示出了手机100的结构示意图。
手机100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及SIM卡接口195等。其中传感器模块180可以包括陀螺仪传感器180A,加速度传感器180B,气压传感器180C,磁传感器180D,环境光传感器180E,接近光传感器180G、指纹传感器180H,温度传感器180J,触摸传感器180K(当然,手机100还可以包括其它传感器,比如温度传感器,压力传感器、距离传感器、气压传感器、骨传导传感器等,图中未示出)。
可以理解的是,本发明实施例示意的结构并不构成对手机100的具体限定。在本申请另一些实施例中,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(Neural-network Processing Unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。其中,控制器可以是手机100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使 用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
处理器110可以运行本申请实施例提供的投屏方法,以便于丰富投屏功能,提升投屏的灵活度,提升用户的体验。处理器110可以包括不同的器件,比如集成CPU和GPU时,CPU和GPU可以配合执行本申请实施例提供的投屏方法,比如投屏方法中部分算法由CPU执行,另一部分算法由GPU执行,以得到较快的处理效率。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,手机100可以包括1个或N个显示屏194,N为大于1的正整数。显示屏194可用于显示由用户输入的信息或提供给用户的信息以及各种图形用户界面(graphical user interface,GUI)。例如,显示器194可以显示照片、视频、网页、或者文件等。再例如,显示器194可以显示图形用户界面。其中图形用户界面上包括状态栏、可隐藏的导航栏、时间和天气小组件(widget)、以及应用的图标,例如浏览器图标等。状态栏中包括运营商名称(例如中国移动)、移动网络(例如4G)、时间和剩余电量。导航栏中包括后退(back)键图标、主屏幕(home)键图标和前进键图标。此外,可以理解的是,在一些实施例中,状态栏中还可以包括蓝牙图标、Wi-Fi图标、外接设备图标等。还可以理解的是,在另一些实施例中,图形用户界面中还可以包括Dock栏,Dock栏中可以包括常用的应用图标等。当处理器检测到用户的手指(或触控笔等)针对某一应用图标的触摸事件后,响应于该触摸事件,打开与该应用图标对应的应用的用户界面,并在显示器194上显示该应用的用户界面。
在本申请实施例中,显示屏194可以是一个一体的柔性显示屏,也可以采用两个刚性屏以及位于两个刚性屏之间的一个柔性屏组成的拼接显示屏。当处理器110运行本申请实施例提供的投屏方法后,处理器110可以控制外接的音频输出设备切换输出的音频信号。
摄像头193(前置摄像头或者后置摄像头,或者一个摄像头既可作为前置摄像头,也可作为后置摄像头)用于捕获静态图像或视频。通常,摄像头193可以包括感光元件比如镜头组和图像传感器,其中,镜头组包括多个透镜(凸透镜或凹透镜),用于采集待拍摄物体反射的光信号,并将采集的光信号传递给图像传感器。图像传感器根据所述光信号生成待拍摄物体的原始图像。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行手机100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,应用程序(比如相机应用,微信应用等)的代码等。存储数据区可存储手机100使用过程中所创建的数据(比如相机应用采集的图像、视频等)等。
内部存储器121还可以存储本申请实施例提供的投屏方法对应的一个或多个计算 机程序1210。该一个或多个计算机程序1210被存储在上述存储器121中并被配置为被该一个或多个处理器110执行,该一个或多个计算机程序1210包括指令,上述指令可以用于执行如图1至图6相应实施例中的各个步骤,该计算机程序1210可以包括帐号验证模块1211、优先级比较模块1212。其中,帐号验证模块1211,用于对局域网内的其它投送端的系统认证帐号进行认证;优先级比较模块1212,可用于比较音频输出请求业务的优先级和音频输出设备当前输出业务的优先级。状态同步模块1213,可用于将投送端当前接入的音频输出设备的设备状态同步至其它投送端,或者将其它设备当前接入的音频输出设备的设备状态同步至本地。当内部存储器121中存储的投屏方法的代码被处理器110运行时,处理器110可以控制投送端进行投屏数据处理。
此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
当然,本申请实施例提供的投屏方法的代码还可以存储在外部存储器中。这种情况下,处理器110可以通过外部存储器接口120运行存储在外部存储器中的投屏方法的代码,处理器110可以控制投送端进行投屏数据处理。
下面介绍传感器模块180的功能。
陀螺仪传感器180A,可以用于确定手机100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180A确定手机100围绕三个轴(即,x,y和z轴)的角速度。即陀螺仪传感器180A可以用于检测手机100当前的运动状态,比如抖动还是静止。
当本申请实施例中的显示屏为可折叠屏时,陀螺仪传感器180A可用于检测作用于显示屏194上的折叠或者展开操作。陀螺仪传感器180A可以将检测到的折叠操作或者展开操作作为事件上报给处理器110,以确定显示屏194的折叠状态或展开状态。
加速度传感器180B可检测手机100在各个方向上(一般为三轴)加速度的大小。即陀螺仪传感器180A可以用于检测手机100当前的运动状态,比如抖动还是静止。当本申请实施例中的显示屏为可折叠屏时,加速度传感器180B可用于检测作用于显示屏194上的折叠或者展开操作。加速度传感器180B可以将检测到的折叠操作或者展开操作作为事件上报给处理器110,以确定显示屏194的折叠状态或展开状态。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。手机通过发光二极管向外发射红外光。手机使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定手机附近有物体。当检测到不充分的反射光时,手机可以确定手机附近没有物体。当本申请实施例中的显示屏为可折叠屏时,接近光传感器180G可以设置在可折叠的显示屏194的第一屏上,接近光传感器180G可根据红外信号的光程差来检测第一屏与第二屏的折叠角度或者展开角度的大小。
陀螺仪传感器180A(或加速度传感器180B)可以将检测到的运动状态信息(比如角速度)发送给处理器110。处理器110基于运动状态信息确定当前是手持状态还是脚架状态(比如,角速度不为0时,说明手机100处于手持状态)。
指纹传感器180H用于采集指纹。手机100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于手机100的表面,与显示屏194所处的位置不同。
示例性的,手机100的显示屏194显示主界面,主界面中包括多个应用(比如相机应用、微信应用等)的图标。用户通过触摸传感器180K点击主界面中相机应用的图标,触发处理器110启动相机应用,打开摄像头193。显示屏194显示相机应用的界面,例如取景界面。
手机100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。手机100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在手机100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。在本申请实施例中,移动通信模块150还可以用于与其它投送端进行信息交互,即向其它投送端发送投屏相关数据,或者移动通信模块150可用于接收投屏请求,并将接收的投屏请求封装成指定格式的消息。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在手机100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波 处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。本申请实施例中,无线通信模块160可以用于接入接入点设备,向其它投送端发送和接收消息。
另外,手机100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。手机100可以接收按键190输入,产生与手机100的用户设置以及功能控制有关的键信号输入。手机100可以利用马达191产生振动提示(比如来电振动提示)。手机100中的指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。手机100中的SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和手机100的接触和分离。
应理解,在实际应用中,手机100可以包括比图8所示的更多或更少的部件,本申请实施例不作限定。图示手机100仅是一个范例,并且手机100可以具有比图中所示出的更多的或者更少的部件,可以组合两个或更多的部件,或者可以具有不同的部件配置。图中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现可实现上述各个方法实施例中的步骤。
本申请实施例提供了一种计算机程序产品,当计算机程序产品在投送端上运行时,使得投送端执行时可实现上述各个方法实施例中的步骤。
本申请实施例还提供了一种芯片系统,所述芯片系统包括处理器,所述处理器与存储器耦合,所述处理器执行存储器中存储的计算机程序,以实现上述各个方法实施例中的步骤。
所述集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,也可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机程序包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读存储介质可以包括:能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、电载波信号、电信信号以及软件分发介质等。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单 元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
以上所述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使对应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。
最后应说明的是:以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (11)

  1. 一种投屏方法,其特征在于,包括:
    若投屏功能被启动,投送端确定待投送数据;
    若所述待投送数据为媒体数据,则所述投送端获取自身对所述待投送数据的第一数据权限,以及接收端对所述待投送数据的第二数据权限;
    若所述第一数据权限高于所述第二数据权限,则所述投送端通过屏幕镜像的方式,将所述待投送数据投送至所述接收端;
    若所述第一数据权限低于所述第二数据权限,则所述投送端通过数字生活网络联盟的方式,将所述待投送数据投送至所述接收端。
  2. 根据权利要求1所述的投屏方法,其特征在于,还包括:
    若所述待投送数据不为媒体数据,则所述投送端通过屏幕镜像的方式,将所述待投送数据投送至所述接收端。
  3. 根据权利要求1或2所述的投屏方法,其特征在于,还包括:
    若所述第一数据权限与所述第二数据权限相同,则所述投送端通过数字生活网络联盟的方式,将所述待投送数据投送至接收端。
  4. 根据权利要求1或2所述的投屏方法,其特征在于,还包括:
    若所述第一数据权限与所述第二数据权限相同,则所述投送端获取自身对所述待投送数据的第一解码质量,以及接收端对所述待投送数据的第二解码质量;
    若所述第一解码质量高于所述第二解码质量,则所述投送端通过屏幕镜像的方式,将所述待投送数据投送至所述接收端;
    若所述第一解码质量低于所述第二解码质量,则所述投送端通过数字生活网络联盟的方式,将所述待投送数据投送至所述接收端。
  5. 根据权利要求4所述的投屏方法,其特征在于,还包括:
    若所述第一解码质量与所述第二解码质量相同,则所述投送端通过数字生活网络联盟的方式,将所述待投送数据投送至接收端。
  6. 根据权利要求1或2所述的投屏方法,其特征在于,获取所述第一数据权限的操作,包括:
    所述投送端从已安装的应用程序中确定出可以播放所述待投送数据的第一应用程序;
    获取所述第一应用程序中的用户账号,并根据所述用户账号确定所述第一数据权限。
  7. 根据权利要求1或2所述的投屏方法,其特征在于,获取所述第二数据权限的操作,包括:
    所述投送端向所述接收端发送所述待投送数据的第一信息;
    所述投送端接收所述接收端针对所述第一信息返回的所述第二数据权限。
  8. 一种投屏装置,其特征在于,包括:
    数据确定模块,用于在投屏功能被启动时,确定待投送数据;
    权限获取模块,用于在所述待投送数据为媒体数据时,获取投送端对所述待投送数据的第一数据权限,以及接收端对所述待投送数据的第二数据权限;
    镜像投屏模块,用于在所述第一数据权限高于所述第二数据权限时,通过屏幕镜像的方式,将所述待投送数据投送至所述接收端;
    数字投屏模块,用于在所述第一数据权限低于所述第二数据权限时,通过数字生活网络联盟的方式,将所述待投送数据投送至所述接收端。
  9. 一种投送端,其特征在于,所述投送端包括存储器、处理器,所述存储器上存储有可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现根据权利要求1至7任一项所述方法的步骤。
  10. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现根据权利要求1至7任一项所述方法的步骤。
  11. 一种芯片系统,其特征在于,所述芯片系统包括处理器,所述处理器与存储器耦合,所述处理器执行存储器中存储的计算机程序,以实现如权利要求1至7任一项所述的投屏方法。
PCT/CN2021/112885 2020-08-28 2021-08-17 投屏方法、装置及投送端 WO2022042364A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21860202.7A EP4199431A4 (en) 2020-08-28 2021-08-17 SCREEN PROJECTION METHOD AND APPARATUS, AND PROJECTION TERMINAL
US18/043,296 US20240015350A1 (en) 2020-08-28 2021-08-17 Screen Projection Method and Apparatus, and Project End

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010892847.0 2020-08-28
CN202010892847.0A CN114125513B (zh) 2020-08-28 2020-08-28 投屏方法、装置及投送端

Publications (1)

Publication Number Publication Date
WO2022042364A1 true WO2022042364A1 (zh) 2022-03-03

Family

ID=80354592

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/112885 WO2022042364A1 (zh) 2020-08-28 2021-08-17 投屏方法、装置及投送端

Country Status (4)

Country Link
US (1) US20240015350A1 (zh)
EP (1) EP4199431A4 (zh)
CN (1) CN114125513B (zh)
WO (1) WO2022042364A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827690A (zh) * 2022-03-30 2022-07-29 北京奇艺世纪科技有限公司 一种网络资源显示方法、装置及系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115209213B (zh) * 2022-08-23 2023-01-20 荣耀终端有限公司 一种无线投屏方法及移动设备
CN116610274B (zh) * 2023-05-09 2023-11-24 北京元心君盛科技有限公司 跨设备投屏方法、装置、电子设备及可读存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130138728A1 (en) * 2011-11-25 2013-05-30 Lg Electronics Inc. Mobile device, display device and method for controlling the same
CN103338139A (zh) * 2013-06-18 2013-10-02 华为技术有限公司 多屏互动方法、装置及终端设备
CN103534679A (zh) * 2012-12-12 2014-01-22 华为终端有限公司 媒体流共享的方法及终端
CN105530280A (zh) * 2014-10-23 2016-04-27 中兴通讯股份有限公司 内容分享的方法及装置
CN105991682A (zh) * 2015-01-30 2016-10-05 阿里巴巴集团控股有限公司 一种数据分享方法及装置
CN106792125A (zh) * 2015-11-24 2017-05-31 腾讯科技(深圳)有限公司 一种视频播放方法及其终端、系统
CN109542377A (zh) * 2018-11-16 2019-03-29 深圳时空数字科技有限公司 智能设备、显示设备、存储设备及屏显互动控制方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5623111B2 (ja) * 2010-04-01 2014-11-12 船井電機株式会社 携帯情報処理装置
US9961265B2 (en) * 2014-10-06 2018-05-01 Shafiq Ahmad Chaudhry Method for capturing and storing historic audiovisual data via a digital mirror
US10326822B2 (en) * 2015-12-03 2019-06-18 Google Llc Methods, systems and media for presenting a virtual operating system on a display device
KR102628856B1 (ko) * 2017-01-04 2024-01-25 삼성전자주식회사 전자 장치 간 콘텐츠 공유 시스템 및 전자 장치의 콘텐츠 공유 방법

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130138728A1 (en) * 2011-11-25 2013-05-30 Lg Electronics Inc. Mobile device, display device and method for controlling the same
CN103534679A (zh) * 2012-12-12 2014-01-22 华为终端有限公司 媒体流共享的方法及终端
CN103338139A (zh) * 2013-06-18 2013-10-02 华为技术有限公司 多屏互动方法、装置及终端设备
CN105530280A (zh) * 2014-10-23 2016-04-27 中兴通讯股份有限公司 内容分享的方法及装置
CN105991682A (zh) * 2015-01-30 2016-10-05 阿里巴巴集团控股有限公司 一种数据分享方法及装置
CN106792125A (zh) * 2015-11-24 2017-05-31 腾讯科技(深圳)有限公司 一种视频播放方法及其终端、系统
CN109542377A (zh) * 2018-11-16 2019-03-29 深圳时空数字科技有限公司 智能设备、显示设备、存储设备及屏显互动控制方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4199431A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827690A (zh) * 2022-03-30 2022-07-29 北京奇艺世纪科技有限公司 一种网络资源显示方法、装置及系统
CN114827690B (zh) * 2022-03-30 2023-07-25 北京奇艺世纪科技有限公司 一种网络资源显示方法、装置及系统

Also Published As

Publication number Publication date
CN114125513B (zh) 2022-10-11
EP4199431A4 (en) 2024-01-03
CN114125513A (zh) 2022-03-01
US20240015350A1 (en) 2024-01-11
EP4199431A1 (en) 2023-06-21

Similar Documents

Publication Publication Date Title
WO2020238874A1 (zh) 一种vr多屏显示方法及电子设备
WO2022042364A1 (zh) 投屏方法、装置及投送端
US20200082782A1 (en) Mobile computing device technology and systems and methods utilizing the same
WO2022100239A1 (zh) 设备协作方法、装置、系统、电子设备和存储介质
WO2021164445A1 (zh) 一种通知处理方法、电子设备和系统
CN108833963B (zh) 显示界面画面的方法、计算机设备、可读存储介质和系统
CN108966008B (zh) 直播视频回放方法及装置
WO2021147406A1 (zh) 一种音频输出方法及终端设备
CN111221845A (zh) 一种跨设备信息搜索方法及终端设备
WO2022105445A1 (zh) 基于浏览器的应用投屏方法及相关装置
CN112527174B (zh) 一种信息处理方法及电子设备
WO2022052791A1 (zh) 一种多媒体流的播放方法和电子设备
US9215003B2 (en) Communication apparatus, communication method, and computer readable recording medium
CN112527222A (zh) 一种信息处理方法及电子设备
WO2021238967A1 (zh) 一种内容分享的方法、装置及系统
US20240094972A1 (en) Page Display Method and Apparatus, Electronic Device, and Readable Storage Medium
WO2022062998A1 (zh) 一种设备推荐方法及设备
WO2022048453A1 (zh) 解锁方法及电子设备
WO2021227942A1 (zh) 一种分享信息的方法、电子设备和系统
CN112911337B (zh) 用于配置终端设备的视频封面图片的方法和装置
US20220414178A1 (en) Methods, apparatuses and systems for displaying alarm file
CN109714628B (zh) 播放音视频的方法、装置、设备、存储介质及系统
WO2022267974A1 (zh) 一种投屏方法及相关装置
WO2022165939A1 (zh) 一种跨设备认证方法及电子设备
CN115686401A (zh) 一种投屏方法、电子设备及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21860202

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18043296

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2021860202

Country of ref document: EP

Effective date: 20230313

NENP Non-entry into the national phase

Ref country code: DE