WO2021164631A1 - Screen projection method and terminal device - Google Patents

Screen projection method and terminal device

Info

Publication number
WO2021164631A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
receiving end
interface
projected
real
Prior art date
Application number
PCT/CN2021/076126
Other languages
English (en)
French (fr)
Inventor
周星辰
邵天雨
居然
李春东
金伟
马伟
王海军
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to US 17/801,005 (published as US11748054B2)
Priority to EP 21757584.4A (published as EP4095671A4)
Publication of WO2021164631A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles
    • G06T11/203Drawing of straight lines or curves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user

Definitions

  • This application belongs to the field of screen projection technology, and in particular relates to a screen projection method and terminal equipment.
  • The terminal equipment in a projection system can be divided into a sending end and a receiving end.
  • In existing screen mirroring, the sending end mirrors its entire screen to the corresponding receiving end so as to share the screen content of the sending end.
  • However, screen mirroring shares all the content on the sender's screen; its function is single and its flexibility is low, so it cannot meet the individual needs of users.
  • The embodiments of the present application provide a screen projection method and terminal device, which can solve the problem of low flexibility in the existing screen projection technology.
  • The first aspect of the embodiments of the present application provides a screen projection method, which is applied to the sending end and includes:
  • When the screen projection instruction is acquired, obtain the real-time interface of the application to be projected and the device information of one or more receiving ends; obtain the first to-be-projected data corresponding to each receiving end from the real-time interface according to the device information; and send the first to-be-projected data to the corresponding receiving end, so that each receiving end outputs the received first to-be-projected data.
  • The first to-be-projected data is a video stream, an audio stream and/or a user interface control.
  • In this way, the required screen projection data can be obtained from the real-time interface of the application to be projected at the sending end according to the device information of the receiving end.
  • The embodiments of the present application can therefore flexibly select and project a single piece or multiple pieces of to-be-projected data and adapt the projection to each receiving end, which makes the screen projection mode more flexible and able to meet the personalized needs of users.
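To make the sender-side flow above concrete, the following Kotlin sketch models the pieces involved: the real-time interface contents, the receiving-end device information, and a dispatch loop that filters and sends per-receiver data. All class and function names (ProjectionData, DeviceInfo, onProjectionInstruction, etc.) are illustrative assumptions; the patent does not define an API.

```kotlin
// Hypothetical data model and dispatch loop for the sender-side flow described above.
sealed interface ProjectionData                       // "to-be-projected data"
data class VideoStream(val id: String) : ProjectionData
data class AudioStream(val id: String) : ProjectionData
data class UiControl(val id: String, val type: String) : ProjectionData

data class DeviceInfo(
    val deviceId: String,
    val screenSizeInches: Double,
    val audioDistortionPct: Double,
    val audioFrequencyResponseHz: ClosedRange<Double>
)

data class RealTimeInterface(val contents: List<ProjectionData>)

fun onProjectionInstruction(
    realTimeInterface: RealTimeInterface,
    receivers: List<DeviceInfo>,
    filter: (List<ProjectionData>, DeviceInfo) -> List<ProjectionData>,
    send: (String, List<ProjectionData>) -> Unit
) {
    // For each receiving end, select the data suited to that device and send it.
    for (receiver in receivers) {
        val firstToBeProjected = filter(realTimeInterface.contents, receiver)
        send(receiver.deviceId, firstToBeProjected)
    }
}

fun main() {
    val music = RealTimeInterface(listOf(VideoStream("mv"), AudioStream("track"), UiControl("play", "Button")))
    val watch = DeviceInfo("watch-1", 1.4, 1.0, 100.0..8000.0)
    onProjectionInstruction(
        realTimeInterface = music,
        receivers = listOf(watch),
        // Toy rule: very small screens only receive UI controls, not the video stream.
        filter = { contents, info -> if (info.screenSizeInches < 3) contents.filterIsInstance<UiControl>() else contents },
        send = { id, data -> println("to $id: $data") }
    )
}
```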
  • Applied to the sending end, the method includes:
  • When the screen projection instruction is acquired, obtain the real-time interface of the application to be projected and the device information of one or more receiving ends.
  • According to the device information, the visual effect, sound effect and interaction complexity of each receiving end are scored to obtain a user experience score for each receiving end.
  • According to the user experience score, the first to-be-projected data corresponding to each receiving end is obtained from the real-time interface, where the first to-be-projected data includes at least one of a video stream, an audio stream and a user interface control.
  • If the first to-be-projected data includes a user interface control, a control layout file for the user interface control is acquired.
  • The first to-be-projected data and the control layout file are sent to the corresponding receiving end.
  • The first to-be-projected data is used by the receiving end for data output, and the control layout file is used by the receiving end to generate a display interface containing the user interface controls.
  • In this way, the embodiments of the present application evaluate the user experience of each receiving end along the three dimensions of visual effect, sound effect and interaction complexity, select the to-be-projected data according to the resulting user experience score, and automatically lay out the user interface controls, so that a better screen projection experience can be achieved and the personalized needs of users can be met.
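As one way to picture the three-dimension scoring described above, the sketch below combines visual effect, sound effect and interaction complexity into a single user experience score. The 0–10 scales and the 0.4/0.3/0.3 weights are assumptions chosen for illustration, not values from the patent.

```kotlin
// Illustrative scoring of a receiving end along the three dimensions named above.
data class ExperienceDimensions(
    val visualEffect: Double,          // e.g. derived from screen size / resolution, 0-10 (assumed scale)
    val soundEffect: Double,           // e.g. derived from distortion / frequency response, 0-10
    val interactionComplexity: Double  // richness of the input methods supported, 0-10
)

fun userExperienceScore(d: ExperienceDimensions): Double =
    0.4 * d.visualEffect + 0.3 * d.soundEffect + 0.3 * d.interactionComplexity  // weights are assumptions

fun main() {
    val watch = ExperienceDimensions(visualEffect = 3.0, soundEffect = 2.0, interactionComplexity = 4.0)
    val tv = ExperienceDimensions(visualEffect = 9.0, soundEffect = 8.0, interactionComplexity = 3.0)
    println("watch score = ${userExperienceScore(watch)}")  // lower score -> fewer / simpler controls
    println("tv score    = ${userExperienceScore(tv)}")     // higher score -> can carry the video stream
}
```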
  • obtaining the first to-be-projected data corresponding to each receiving end from the real-time interface specifically includes:
  • the second to-be-projected data contained in the real-time interface is extracted, and the first to-be-projected data corresponding to each receiving end is filtered from the second to-be-projected data according to the device information.
  • the second to-be-projected data includes a video stream, an audio stream and/or a user interface control.
  • screening the data to be projected based on the device information of the receiving end can make the final projected data content more suitable for the actual device situation of the receiving end.
  • the final projected content can be more suitable for display at the receiving end, which can improve the user's human-computer interaction efficiency and improve user experience.
  • acquiring, from the real-time interface according to the user experience score, the first to-be-projected data corresponding to each receiving end specifically includes:
  • the second to-be-projected data contained in the real-time interface is extracted, where the second to-be-projected data includes at least one of a video stream, an audio stream, and a user interface control. According to the user experience score, the first to-be-projected data corresponding to each receiving end is filtered from the second to-be-projected data.
  • In this way, the to-be-projected data is filtered according to the device information of the receiving end, and different to-be-projected data are sent to different receiving ends, so that the finally projected content is better suited to the actual device at each receiving end.
  • the final projected content can be more suitable for display at the receiving end, which can improve the user's human-computer interaction efficiency and improve user experience.
  • the device information includes: the display screen size and the distortion degree and frequency response of the audio output.
  • the operation of filtering the first to-be-projected data corresponding to each receiving end from the second to-be-projected data according to the device information specifically includes:
  • Each piece of second to-be-projected data is processed to obtain a corresponding data interaction score.
  • The data interaction score of each piece of second to-be-projected data is then matched against the user experience score to obtain the first to-be-projected data corresponding to each receiving end.
  • the screen size and audio output quality of the receiving end are used to match the data to be projected, which guarantees the ultimate user experience at the receiving end.
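A possible reading of the matching step above is a threshold comparison: each piece of second to-be-projected data carries a data interaction score, and a receiving end keeps only the items its user experience score can support. The thresholding rule and the example scores below are assumptions.

```kotlin
// One possible matching rule: keep only items whose data interaction score the receiver can support.
data class ScoredItem(val name: String, val interactionScore: Double)

fun matchForReceiver(items: List<ScoredItem>, userExperienceScore: Double): List<ScoredItem> =
    items.filter { it.interactionScore <= userExperienceScore }   // assumed threshold rule
        .sortedByDescending { it.interactionScore }

fun main() {
    val items = listOf(
        ScoredItem("video stream", 8.0),
        ScoredItem("play/pause control", 2.0),
        ScoredItem("lyrics control", 4.0)
    )
    println(matchForReceiver(items, userExperienceScore = 3.5)) // watch-like device: controls only
    println(matchForReceiver(items, userExperienceScore = 8.5)) // TV-like device: video stream as well
}
```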
  • Before sending, the user interface controls in the first to-be-projected data are typeset based on the device information to obtain a corresponding control layout file.
  • The operation of sending the first to-be-projected data to the corresponding receiving end so that each receiving end outputs the received first to-be-projected data specifically includes:
  • sending the first to-be-projected data and the control layout file to the corresponding receiving end, so that the receiving end generates a corresponding display interface according to the received first to-be-projected data and the control layout file.
  • In this way, the display effect of the user interface controls in the display interface of the receiving end can be guaranteed,
  • and the efficiency of the user's human-computer interaction at the receiving end is guaranteed.
  • The brand characteristics of the application can also be retained to the greatest extent, which helps the user quickly become familiar with the display interface of the receiving end and makes human-computer interaction at the receiving end more convenient.
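The sketch below illustrates what producing a "control layout file" might look like: the selected controls are laid out according to the receiving end's screen width and serialized into a simple XML-like descriptor. The file format and the single-column rule for small screens are assumptions; the patent only requires a file from which the receiving end can rebuild its display interface.

```kotlin
// Minimal sketch of generating a control layout file for the selected UI controls.
data class ControlSpec(val id: String, val type: String)

fun buildControlLayout(controls: List<ControlSpec>, screenWidthPx: Int): String {
    val columns = if (screenWidthPx < 480) 1 else 2          // assumption: small screens get one column
    val cellWidth = screenWidthPx / columns
    return buildString {
        appendLine("""<layout columns="$columns">""")
        controls.forEachIndexed { i, c ->
            val col = i % columns
            appendLine("""  <control id="${c.id}" type="${c.type}" x="${col * cellWidth}" width="$cellWidth"/>""")
        }
        appendLine("</layout>")
    }
}

fun main() {
    val controls = listOf(ControlSpec("play", "Button"), ControlSpec("title", "TextView"))
    print(buildControlLayout(controls, screenWidthPx = 396))   // e.g. a watch-sized screen
}
```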
  • The third possible implementation of the first aspect includes:
  • obtaining the drawing instruction and layer data corresponding to each user interface control in the first to-be-projected data, where the drawing instruction is used to make the receiving end draw the user interface control.
  • The operation of sending the first to-be-projected data and the control layout file to the corresponding receiving end, so that the receiving end generates the corresponding display interface according to the received first to-be-projected data and the control layout file, includes:
  • sending the drawing instruction, layer data and control layout file to the corresponding receiving end, so that the receiving end draws the corresponding user interface control according to the drawing instruction, layer data and control layout file.
  • By sending the drawing instructions and layer data to the receiving end, the receiving end can draw the user interface controls accurately, which ensures the accuracy and reliability of the finally generated user interface controls.
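One way to realize the drawing-instruction mechanism above is for the sender to serialize per-control draw commands together with layer data (such as z-order), and for the receiving end to replay them onto its own canvas. The command set and the replay callback below are placeholders, not an API defined by the patent.

```kotlin
// Illustrative "drawing instruction + layer data" exchange.
sealed interface DrawInstruction
data class DrawRect(val x: Int, val y: Int, val w: Int, val h: Int, val argb: Long) : DrawInstruction
data class DrawText(val x: Int, val y: Int, val text: String, val sizePx: Int) : DrawInstruction

data class ControlLayer(
    val controlId: String,
    val zOrder: Int,                        // layer data: stacking order of the control
    val instructions: List<DrawInstruction>
)

/** Receiver-side replay: draws layers in z-order using whatever canvas abstraction it has. */
fun replay(layers: List<ControlLayer>, drawOne: (DrawInstruction) -> Unit) {
    layers.sortedBy { it.zOrder }.forEach { layer -> layer.instructions.forEach(drawOne) }
}

fun main() {
    val playButton = ControlLayer(
        controlId = "play", zOrder = 1,
        instructions = listOf(DrawRect(0, 0, 120, 48, 0xFF3366FFL), DrawText(16, 32, "Play", 20))
    )
    replay(listOf(playButton)) { println(it) }   // stand-in for real canvas calls
}
```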
  • the fourth possible implementation of the first aspect includes:
  • the method further includes: obtaining a selection instruction input by the user, and obtaining one or more applications to be projected according to the selection instruction.
  • users can flexibly select screen projection applications according to their actual needs, so that the screen projection functions are more abundant and flexible.
  • the operation of extracting the second to-be-projected data contained in the real-time interface specifically includes:
  • If the real-time interface is a preset interface, the second to-be-projected data contained in the real-time interface is extracted.
  • That is, the extraction of the to-be-projected data is performed only when the real-time interface is a preset basic interface, which prevents excessive interface displays from affecting the user's normal use of the screen projection function.
  • In the operation of filtering, from the second to-be-projected data according to the device information, the first to-be-projected data corresponding to each receiving end, the filtering operation for a single receiving end includes:
  • dividing the video stream and the user interface controls in the second to-be-projected data into one or more to-be-projected data sets, where each to-be-projected data set is not an empty set and there is no intersection between the to-be-projected data sets;
  • and matching each to-be-projected data set based on the device information of the receiving end, and using the second to-be-projected data contained in the successfully matched data set as the first to-be-projected data corresponding to that receiving end.
  • In this way, the to-be-projected data is combined in different ways
  • and matched against the device information of the receiving end.
  • Adaptive matching of the projected data to various receiving-end devices can thus be realized, making the embodiments of the present application more compatible with different receiving ends.
  • In addition, the data content contained in each to-be-projected data set can be preset, which makes the projection effect of the embodiments of the present application richer and more flexible.
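The set-based filtering above can be pictured as choosing, among preset non-empty and pairwise disjoint candidate sets, the first set that the receiving end's device information can support. In the sketch below the "match" criterion is a minimum screen size per item, which is purely an assumption for illustration.

```kotlin
// Sketch of set-based filtering: pick the first candidate set the receiving end can support.
data class Candidate(val name: String, val minScreenInches: Double)

fun filterForReceiver(
    candidateSets: List<Set<Candidate>>,   // each set non-empty; sets are disjoint by construction
    screenInches: Double
): Set<Candidate>? {
    require(candidateSets.all { it.isNotEmpty() }) { "each to-be-projected data set must be non-empty" }
    // Assumed rule: a set "matches" if the receiver's screen can accommodate every element in it.
    return candidateSets.firstOrNull { set -> set.all { it.minScreenInches <= screenInches } }
}

fun main() {
    val videoSet = setOf(Candidate("video stream", minScreenInches = 5.0))
    val controlSet = setOf(Candidate("play/pause", 1.2), Candidate("track title", 1.2))
    println(filterForReceiver(listOf(videoSet, controlSet), screenInches = 1.4))  // watch -> control set
    println(filterForReceiver(listOf(videoSet, controlSet), screenInches = 55.0)) // TV -> video set
}
```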
  • the typesetting operations on the user interface controls specifically include:
  • According to the first typesetting type of the user interface controls in the real-time interface, the second typesetting type corresponding to the first typesetting type is determined, and the control layout file is generated according to the second typesetting type.
  • That is, a correspondence between layout styles of user interface controls suitable for human-computer interaction is preset for various receiving ends. According to the layout style of the user interface controls in the real-time interface, a layout style suitable for the receiving end is determined, and the control layout file is then generated with the layout style better suited to the receiving end. This guarantees the final presentation of the user interface controls in the display interface of the receiving end, better adapts to the human-computer interaction needs of different receiving ends, and improves the efficiency of human-computer interaction.
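A minimal sketch of the typesetting-type correspondence described above: a preset table maps the layout style found in the real-time interface (the first typesetting type) to the style used on a given kind of receiving end (the second typesetting type). The enum values and the table contents are illustrative assumptions.

```kotlin
// Preset correspondence from the interface's layout style to a receiver-appropriate style.
enum class TypesettingType { GRID, VERTICAL_LIST, HORIZONTAL_BAR, SINGLE_COLUMN }
enum class ReceiverKind { WATCH, TV, CAR_HEAD_UNIT }

val layoutCorrespondence: Map<ReceiverKind, Map<TypesettingType, TypesettingType>> = mapOf(
    ReceiverKind.WATCH to mapOf(
        TypesettingType.GRID to TypesettingType.SINGLE_COLUMN,
        TypesettingType.HORIZONTAL_BAR to TypesettingType.VERTICAL_LIST
    ),
    ReceiverKind.TV to mapOf(
        TypesettingType.VERTICAL_LIST to TypesettingType.HORIZONTAL_BAR
    )
)

fun secondTypesettingType(kind: ReceiverKind, first: TypesettingType): TypesettingType =
    layoutCorrespondence[kind]?.get(first) ?: first    // fall back to the original style

fun main() {
    println(secondTypesettingType(ReceiverKind.WATCH, TypesettingType.GRID))       // SINGLE_COLUMN
    println(secondTypesettingType(ReceiverKind.TV, TypesettingType.VERTICAL_LIST)) // HORIZONTAL_BAR
}
```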
  • the eighth possible implementation of the first aspect includes:
  • Receive the second coordinate information and the event type sent by the receiving end, and execute the corresponding event task according to the second coordinate information and the event type, where the event type is obtained by the receiving end identifying the type of an operation event after detecting the operation event, and the second coordinate information is obtained by the receiving end acquiring the first coordinate information of the operation event on the receiving-end screen and performing coordinate conversion processing on the first coordinate information.
  • In this way, the receiving end converts the coordinate information and recognizes the event type,
  • and the sending end executes the corresponding event task according to the converted coordinate information and the event type, thereby realizing reverse control of the sending end by the receiving end.
  • The user can thus operate the sending end during screen casting without touching it, which greatly improves the efficiency of human-computer interaction with the sending end and at the same time makes the projection function richer and more flexible.
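On the sending end, the reverse-control step above amounts to receiving the already converted coordinates plus the event type and dispatching the corresponding input event. The sketch below injects the event through a callback because the actual injection mechanism is platform-specific and not specified by the patent.

```kotlin
// Sender-side handling of reverse-control messages from the receiving end.
enum class EventType { TAP, LONG_PRESS, SWIPE }
data class ReverseControlMessage(val x: Float, val y: Float, val type: EventType)

class ReverseControlHandler(private val inject: (Float, Float, EventType) -> Unit) {
    fun onMessage(msg: ReverseControlMessage) {
        // Coordinates are already in the sender's screen space (converted by the receiving end),
        // so they can be dispatched directly to the corresponding event task.
        inject(msg.x, msg.y, msg.type)
    }
}

fun main() {
    val handler = ReverseControlHandler { x, y, type -> println("execute $type at ($x, $y)") }
    handler.onMessage(ReverseControlMessage(x = 120f, y = 640f, type = EventType.TAP))
}
```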
  • the matching operation on the first to-be-projected data specifically includes:
  • The video stream and the user interface controls contained in the real-time interface are divided into one or more data sets, where each data set contains a video stream or at least one user interface control, and there is no intersection between the data sets.
  • A user experience score is calculated for each data set to obtain a corresponding set experience score.
  • A user experience score is also calculated from the device information of the receiving end to obtain a corresponding device experience score.
  • The set experience scores and the device experience score are matched to determine the m data sets corresponding to the receiving end, where m is a natural number.
  • That is, the to-be-projected data is first divided into data sets; then, according to the impact of each data set on user experience and the impact of each receiving end on user experience, the data sets corresponding to each receiving end are selected. As a result, the data projected to each receiving end is the data that provides the better user experience, which ensures the final user experience.
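The matching of set experience scores against the device experience score might look like the sketch below, where a data set is considered supportable if its score does not exceed the device score and the best m supportable sets are kept. Both the comparison rule and how m is chosen are assumptions.

```kotlin
// Sketch of matching set experience scores against a receiving end's device experience score.
data class DataSet(val name: String, val setExperienceScore: Double)

fun selectSetsForReceiver(sets: List<DataSet>, deviceExperienceScore: Double, m: Int): List<DataSet> =
    sets.filter { it.setExperienceScore <= deviceExperienceScore }  // assumed "supportable" rule
        .sortedByDescending { it.setExperienceScore }
        .take(m)

fun main() {
    val sets = listOf(DataSet("video", 9.0), DataSet("controls", 3.0), DataSet("lyrics", 5.0))
    println(selectSetsForReceiver(sets, deviceExperienceScore = 4.0, m = 1)) // [controls]
    println(selectSetsForReceiver(sets, deviceExperienceScore = 9.5, m = 2)) // [video, lyrics]
}
```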
  • the first to-be-projected data further includes: to-be-notified data and/or somatosensory data.
  • the embodiments of the present application can support screen projection of more types of user experience design elements, so that the screen projection effect is better, the function richness is improved, and the function is more flexible.
  • the eleventh possible implementation manner of the first aspect includes:
  • When the display state of a projected user interface control changes,
  • the sending end obtains the changed attribute data of the user interface control and sends the changed attribute data to the corresponding receiving end, so that the receiving end updates the display state of the user interface control in the display interface according to the received attribute data.
  • the embodiments of the present application can implement real-time update of the display state of the user interface controls, and ensure the real-time performance of screen projection.
  • the twelfth possible implementation manner of the first aspect includes:
  • When the display status of a projected dynamic picture changes,
  • the sending end obtains the new picture data corresponding to the dynamic picture and sends it to the corresponding receiving end, so that the receiving end updates the display status of the corresponding dynamic picture after receiving the picture data.
  • the embodiments of the present application can realize real-time update of the dynamic picture display state, and ensure the real-time performance of screen projection.
  • the second aspect of the embodiments of the present application provides a screen projection method, which is applied to the sending end, and includes:
  • When the screen projection instruction is acquired, obtain the real-time interface of the application to be projected and the device information of one or more receiving ends.
  • According to the device information, the first to-be-projected data corresponding to each receiving end is obtained from the real-time interface. If the first to-be-projected data includes user interface controls and/or a video stream, a display interface corresponding to each receiving end is generated based on the first to-be-projected data, and the display interface is video-encoded to obtain a corresponding real-time video stream. Finally, the real-time video stream is sent to the corresponding receiving end, so that the receiving end decodes and plays the received real-time video stream.
  • In this way, the required screen projection data can be obtained from the real-time interface of the application to be projected at the sending end according to the device information of the receiving end.
  • The embodiments of the present application can thus flexibly select and project a single piece or multiple pieces of to-be-projected data and adapt the projection to each receiving end, which makes the screen projection mode more flexible and able to meet the personalized needs of users.
  • Because the sending end synthesizes all the to-be-projected data into a real-time video and transmits it, the software and hardware requirements on the receiving end are reduced and compatibility with receiving ends is improved.
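For the second aspect, the sender composes a display interface per receiving end, encodes it into a real-time video stream, and pushes it out. The sketch below wires these stages together with injected renderer, encoder and transport functions, since the patent does not fix a codec or protocol (H.264 over RTP would be one typical choice).

```kotlin
// Sender-side per-receiver pipeline: render display interface -> encode -> send.
data class Frame(val pixels: ByteArray, val widthPx: Int, val heightPx: Int, val ptsMs: Long)

class PerReceiverPipeline(
    private val renderInterface: (ptsMs: Long) -> Frame,   // composes the first to-be-projected data
    private val encode: (Frame) -> ByteArray,               // video encoder (e.g. a hardware codec)
    private val sendPacket: (ByteArray) -> Unit             // network transport to this receiving end
) {
    fun pumpOneFrame(ptsMs: Long) {
        val frame = renderInterface(ptsMs)
        sendPacket(encode(frame))
    }
}

fun main() {
    val pipeline = PerReceiverPipeline(
        renderInterface = { pts -> Frame(ByteArray(320 * 240 * 4), 320, 240, pts) },
        encode = { frame -> frame.pixels.copyOf(16) },        // stand-in for a real encoder
        sendPacket = { packet -> println("sent ${packet.size} bytes") }
    )
    repeat(3) { i -> pipeline.pumpOneFrame(ptsMs = i * 33L) } // ~30 fps pacing would be applied here
}
```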
  • obtaining the first to-be-projected data corresponding to each receiving end from the real-time interface specifically includes:
  • the second to-be-projected data contained in the real-time interface is extracted, and the first to-be-projected data corresponding to each receiving end is filtered from the second to-be-projected data according to the device information.
  • the second to-be-projected data includes a video stream, an audio stream and/or a user interface control.
  • screening the data to be projected based on the device information of the receiving end can make the final projected data content more suitable for the actual device situation of the receiving end.
  • the final projected content can be more suitable for display at the receiving end, which can improve the user's human-computer interaction efficiency and improve user experience.
  • the device information includes: the display screen size and the distortion degree and frequency response of the audio output.
  • the operation of filtering the first to-be-projected data corresponding to each receiving end from the second to-be-projected data according to the device information specifically includes:
  • each second to-be-projected data is processed to obtain the corresponding data interaction score.
  • the data interaction score of each second to-be-projected data is matched based on the user experience score to obtain the first to-be-projected data corresponding to each receiving end.
  • the screen size and audio output quality of the receiving end are used to match the data to be projected, which guarantees the ultimate user experience at the receiving end.
  • generating a display interface corresponding to each receiving end based on the first to-be-projected data includes:
  • The user interface controls in the first to-be-projected data are first typeset, and a corresponding control layout file is obtained.
  • According to the control layout file and the first to-be-projected data, a corresponding display interface is generated.
  • the display effect of the user interface controls in the display interface of the receiving end can be guaranteed.
  • the efficiency of the user's human-computer interaction at the receiving end is guaranteed.
  • the application brand characteristics can also be retained to the greatest extent. It is convenient for the user to quickly become familiar with the display interface of the receiving end, thereby making the human-computer interaction operation of the user at the receiving end more convenient.
  • the display interface corresponding to each receiving end is generated based on the first to-be-projected data, including:
  • the drawing instruction and layer data corresponding to each user interface control in the first to-be-projected data are acquired, and the drawing instruction is used to make the receiving end draw the user interface control.
  • The operation of generating a corresponding display interface includes:
  • drawing the corresponding user interface controls according to the drawing instructions, layer data and control layout file, and generating the display interface based on the drawn user interface controls.
  • the sending end can implement accurate drawing of the user interface controls, which ensures the accuracy and reliability of the finally generated user interface controls.
  • the fourth possible implementation of the second aspect includes:
  • the method further includes: obtaining a selection instruction input by the user, and obtaining one or more applications to be projected according to the selection instruction.
  • users can flexibly select screen projection applications according to their actual needs, so that the screen projection functions are more abundant and flexible.
  • the operation of extracting the second to-be-projected data contained in the real-time interface specifically includes:
  • If the real-time interface is a preset interface, the second to-be-projected data contained in the real-time interface is extracted.
  • In the operation of filtering, from the second to-be-projected data according to the device information, the first to-be-projected data corresponding to each receiving end, the filtering operation for a single receiving end includes:
  • dividing the video stream and the user interface controls in the second to-be-projected data into one or more to-be-projected data sets, where each to-be-projected data set is not an empty set and there is no intersection between the to-be-projected data sets;
  • and matching each to-be-projected data set based on the device information of the receiving end, and using the second to-be-projected data contained in the successfully matched data set as the first to-be-projected data corresponding to that receiving end.
  • In this way, the to-be-projected data is combined in different ways
  • and matched against the device information of the receiving end.
  • Adaptive matching of the display data of various receiving end devices can be realized, so that the embodiments of the present application are more compatible with the receiving end.
  • the data content included in each data set to be projected can be preset, thus making the projecting effect of the embodiment of the present application richer and more flexible.
  • the typesetting operations on the user interface controls specifically include:
  • According to the first typesetting type of the user interface controls in the real-time interface, the second typesetting type corresponding to the first typesetting type is determined, and the control layout file is generated according to the second typesetting type.
  • That is, a correspondence between layout styles of user interface controls suitable for human-computer interaction is preset for various receiving ends. According to the layout style of the user interface controls in the real-time interface, a layout style suitable for the receiving end is determined, and the control layout file is then generated with the layout style better suited to the receiving end. This guarantees the final presentation of the user interface controls in the display interface of the receiving end, better adapts to the human-computer interaction needs of different receiving ends, and improves the efficiency of human-computer interaction.
  • the eighth possible implementation of the second aspect includes:
  • Receive the second coordinate information and the event type sent by the receiving end, and execute the corresponding event task according to the second coordinate information and the event type, where the event type is obtained by the receiving end identifying the type of an operation event after detecting the operation event, and the second coordinate information is obtained by the receiving end acquiring the first coordinate information of the operation event on the receiving-end screen and performing coordinate conversion processing on the first coordinate information.
  • In this way, the receiving end converts the coordinate information and recognizes the event type,
  • and the sending end executes the corresponding event task according to the converted coordinate information and the event type, thereby realizing reverse control of the sending end by the receiving end.
  • The user can thus operate the sending end during screen casting without touching it, which greatly improves the efficiency of human-computer interaction with the sending end and at the same time makes the projection function richer and more flexible.
  • the matching operation on the first to-be-projected data specifically includes:
  • The video stream and the user interface controls contained in the real-time interface are divided into one or more data sets, where each data set contains a video stream or at least one user interface control, and there is no intersection between the data sets.
  • A user experience score is calculated for each data set to obtain a corresponding set experience score.
  • A user experience score is also calculated from the device information of the receiving end to obtain a corresponding device experience score.
  • The set experience scores and the device experience score are matched to determine the m data sets corresponding to the receiving end, where m is a natural number.
  • That is, the to-be-projected data is first divided into data sets; then, according to the impact of each data set on user experience and the impact of each receiving end on user experience, the data sets corresponding to each receiving end are selected. As a result, the data projected to each receiving end is the data that provides the better user experience, which ensures the final user experience.
  • the first to-be-projected data further includes: to-be-notified data and/or somatosensory data.
  • the embodiments of the present application can support screen projection of more types of user experience design elements, so that the screen projection effect is better, the function richness is improved, and the function is more flexible.
  • the eleventh possible implementation manner of the second aspect includes:
  • When the display state of a projected user interface control changes,
  • the sending end obtains the changed attribute data of the user interface control and updates the corresponding real-time video stream according to the changed attribute data.
  • the embodiments of the present application can implement real-time update of the display state of the user interface controls, and ensure the real-time performance of screen projection.
  • the twelfth possible implementation manner of the second aspect includes:
  • When the display status of a projected dynamic picture changes,
  • the sending end obtains the new picture data corresponding to the dynamic picture and updates the corresponding real-time video stream according to the obtained picture data.
  • the embodiments of the present application can realize real-time update of the dynamic picture display state, and ensure the real-time performance of screen projection.
  • the third aspect of the embodiments of the present application provides a screen projection method, which is applied to the receiving end, and includes:
  • Receive the first to-be-projected data sent by the sending end, where the first to-be-projected data is obtained after the sending end acquires the real-time interface of the application to be projected, extracts the second to-be-projected data contained in the real-time interface,
  • and filters the second to-be-projected data according to the device information of the receiving end, where the second to-be-projected data includes a video stream, an audio stream and/or a user interface control.
  • typesetting is performed on the user interface controls in the first to-be-projected data to obtain a corresponding control layout file.
  • According to the control layout file and the first to-be-projected data, a corresponding display interface is generated.
  • That is, after receiving the to-be-projected data sent by the sending end, the receiving end typesets the user interface controls according to its own device information and then generates the corresponding display interface according to the typesetting result.
  • This allows the embodiment of the present application to ensure the display effect of the user interface control in the display interface of the receiving end.
  • the efficiency of the user's human-computer interaction at the receiving end is guaranteed.
  • the application brand characteristics can also be retained to the greatest extent. It is convenient for the user to quickly become familiar with the display interface of the receiving end, thereby making the human-computer interaction operation of the user at the receiving end more convenient.
  • the typesetting operations on the user interface controls specifically include:
  • According to the first typesetting type of the user interface controls in the real-time interface, the second typesetting type corresponding to the first typesetting type is determined, and the control layout file is generated according to the second typesetting type.
  • That is, a correspondence between layout styles of user interface controls suitable for human-computer interaction is preset for various receiving ends. According to the layout style of the user interface controls in the real-time interface, a layout style suitable for the receiving end is determined, and the control layout file is then generated with the layout style better suited to the receiving end. This guarantees the final presentation of the user interface controls in the display interface of the receiving end, better adapts to the human-computer interaction needs of different receiving ends, and improves the efficiency of human-computer interaction.
  • a fourth aspect of the embodiments of the present application provides a screen projection system, including: a sending end and one or more receiving ends.
  • The sending end receives the screen projection instruction, obtains the real-time interface of the application to be projected and the device information of one or more receiving ends, and obtains the first to-be-projected data corresponding to each receiving end from the real-time interface according to the device information.
  • the data to be projected is a video stream, an audio stream and/or a user interface control.
  • the sending end sends the first to-be-projected data to the corresponding receiving end.
  • the receiving end outputs the received first data to be projected.
  • In this way, the required screen projection data can be obtained from the real-time interface of the application to be projected at the sending end according to the device information of the receiving end.
  • the embodiments of the present application can realize the flexible selection and projection of single or multiple data to be projected, and realize the adaptive projection of each receiving end. This makes the screen projection mode of the embodiment of the present application more flexible and can meet the personalized needs of users.
  • the sending end obtains the first to-be-projected data corresponding to each receiving end from the real-time interface, which specifically includes:
  • the second to-be-projected data contained in the real-time interface is extracted, and the first to-be-projected data corresponding to each receiving end is filtered from the second to-be-projected data according to the device information.
  • the second to-be-projected data includes a video stream, an audio stream and/or a user interface control.
  • the sending end screens the data to be projected according to the device information of the receiving end, which can make the final projected data content more suitable for the actual device situation of the receiving end.
  • the final projected content can be more suitable for display at the receiving end, which can improve the user's human-computer interaction efficiency and improve user experience.
  • the device information includes: the display screen size and the distortion degree and frequency response of the audio output.
  • the operation of the sending end to filter the first to-be-projected data corresponding to each receiving end from the second to-be-projected data according to the device information specifically includes:
  • each second to-be-projected data is processed to obtain the corresponding data interaction score.
  • the data interaction score of each second to-be-projected data is matched based on the user experience score to obtain the first to-be-projected data corresponding to each receiving end.
  • the screen size and audio output quality of the receiving end are used to match the data to be projected, which guarantees the ultimate user experience at the receiving end.
  • Before sending the first to-be-projected data to the corresponding receiving end, the sending end first performs typesetting processing on the user interface controls in the first to-be-projected data based on the device information, and obtains the corresponding control layout file.
  • The operation in which the sending end sends the first to-be-projected data to the corresponding receiving end specifically includes:
  • the sending end sends the first to-be-projected screen data and the control layout file to the corresponding receiving end.
  • the operation of the receiving end to generate a display interface includes:
  • the receiving end generates a corresponding display interface according to the received first to-be-projected data and the control layout file.
  • the display effect of the user interface controls in the display interface of the receiving end can be guaranteed.
  • the efficiency of the user's human-computer interaction at the receiving end is guaranteed.
  • the application brand characteristics can also be retained to the greatest extent. It is convenient for the user to quickly become familiar with the display interface of the receiving end, thereby making the human-computer interaction operation of the user at the receiving end more convenient.
  • the receiving end generates a corresponding display interface operation according to the received first to-be-projected data, which specifically includes:
  • the receiving end performs typesetting processing on the user interface controls in the first to-be-projected data based on its own device information to obtain a corresponding control layout file.
  • the receiving end generates a corresponding display interface according to the first to-be-projected data and the obtained control layout file.
  • That is, after receiving the to-be-projected data sent by the sending end, the receiving end typesets the user interface controls according to its own device information and then generates the corresponding display interface according to the typesetting result.
  • This allows the embodiment of the present application to ensure the display effect of the user interface control in the display interface of the receiving end. The efficiency of the user's human-computer interaction at the receiving end is guaranteed.
  • the application brand characteristics can also be retained to the greatest extent.
  • the fourth possible implementation manner of the fourth aspect includes:
  • Before the sending end sends the control layout file to the corresponding receiving end, the sending end obtains the drawing instructions and layer data corresponding to each user interface control in the first to-be-projected data.
  • The drawing instructions are used to make the receiving end draw the user interface controls.
  • The operation in which the sending end sends the first to-be-projected data and the control layout file to the corresponding receiving end, so that the receiving end generates the corresponding display interface according to the received first to-be-projected data and the control layout file, includes:
  • the sending end sends the drawing instruction, layer data and control layout file to the corresponding receiving end.
  • the operation of the receiving end to generate a display interface includes:
  • the receiving end draws corresponding user interface controls according to drawing instructions, layer data, and control layout files, and generates a display interface based on the drawn user interface controls.
  • By sending the drawing instructions and layer data to the receiving end, the receiving end can draw the user interface controls accurately, which ensures the accuracy and reliability of the finally generated user interface controls.
  • the fifth possible implementation of the fourth aspect includes:
  • the sending end further includes: obtaining a selection instruction input by the user, and obtaining one or more applications to be projected according to the selection instruction.
  • users can flexibly select screen projection applications according to their actual needs, so that the screen projection functions are more abundant and flexible.
  • the operation of the sender to extract the second to-be-projected data contained in the real-time interface specifically includes:
  • The sending end obtains the real-time interface of the application to be projected and recognizes whether the real-time interface is a preset interface.
  • If the real-time interface is a preset interface, the sending end extracts the second to-be-projected data contained in the real-time interface.
  • the first to-be-projected data corresponding to each receiving end is filtered from the second to-be-projected data.
  • the filtering operation of a single receiving end includes:
  • The sending end divides the video stream and user interface controls in the second to-be-projected data into one or more to-be-projected data sets, where each to-be-projected data set is not an empty set, and there is no intersection between the to-be-projected data sets.
  • the sending end matches each data set to be projected based on the device information of the receiving end, and uses the second data to be projected included in the successfully matched data set to be projected as the first data to be projected corresponding to the receiving end .
  • In this way, the to-be-projected data is combined in different ways
  • and matched against the device information of the receiving end.
  • Adaptive matching of the display data of various receiving end devices can be realized, so that the embodiments of the present application are more compatible with the receiving end.
  • the data content included in each data set to be projected can be preset, thus making the projecting effect of the embodiment of the present application richer and more flexible.
  • the typesetting operation of the user interface control by the sending end or the receiving end specifically includes:
  • According to the first typesetting type of the user interface controls in the real-time interface, the second typesetting type corresponding to the first typesetting type is determined, and the control layout file is generated according to the second typesetting type.
  • That is, a correspondence between layout styles of user interface controls suitable for human-computer interaction is preset for various receiving ends. According to the layout style of the user interface controls in the real-time interface, a layout style suitable for the receiving end is determined, and the control layout file is then generated with the layout style better suited to the receiving end. This guarantees the final presentation of the user interface controls in the display interface of the receiving end, better adapts to the human-computer interaction needs of different receiving ends, and improves the efficiency of human-computer interaction.
  • the ninth possible implementation of the fourth aspect includes:
  • After detecting an operation event, the receiving end recognizes the event type of the operation event and obtains the first coordinate information of the operation event on the receiving-end screen.
  • the receiving end performs coordinate conversion processing on the first coordinate information to obtain corresponding second coordinate information.
  • the receiving end sends the second coordinate information and the event type to the sending end.
  • the sending end receives the second coordinate information and the event type, and executes the corresponding event task according to the second coordinate information and the event type.
  • In this way, the receiving end converts the coordinate information and recognizes the event type,
  • and the sending end executes the corresponding event task according to the converted coordinate information and the event type, thereby realizing reverse control of the sending end by the receiving end.
  • The user can thus operate the sending end during screen casting without touching it, which greatly improves the efficiency of human-computer interaction with the sending end and at the same time makes the projection function richer and more flexible.
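On the receiving end, the reverse-control steps above boil down to identifying the event type and converting the touch point from the receiver's screen space (first coordinate information) into the sender's screen space (second coordinate information) before sending both to the sending end. The linear scaling below is an assumption; a real mapping would also account for the projected region's offset, scaling mode and rotation.

```kotlin
// Receiver-side coordinate conversion from receiving-end screen space to sending-end screen space.
data class ScreenSize(val widthPx: Int, val heightPx: Int)

fun convertCoordinates(
    firstX: Float, firstY: Float,
    receiverScreen: ScreenSize, senderScreen: ScreenSize
): Pair<Float, Float> {
    // Assumed simple proportional mapping between the two screens.
    val secondX = firstX * senderScreen.widthPx / receiverScreen.widthPx
    val secondY = firstY * senderScreen.heightPx / receiverScreen.heightPx
    return secondX to secondY
}

fun main() {
    val (x2, y2) = convertCoordinates(
        firstX = 100f, firstY = 50f,
        receiverScreen = ScreenSize(454, 454),    // e.g. a smart watch acting as receiving end
        senderScreen = ScreenSize(1080, 2340)     // e.g. the phone acting as sending end
    )
    println("send to sender: TAP at ($x2, $y2)")   // transport layer omitted in this sketch
}
```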
  • the matching operation of the sender on the first to-be-projected data specifically includes:
  • the sending end divides the video stream and the user interface controls contained in the real-time interface into one or more data sets, where each data set contains a video stream or at least one user interface control, and there is no intersection in each data set.
  • the sending end calculates the user experience score for each data set, and obtains the corresponding set experience score.
  • the user experience score calculation is performed on the device information of the receiving end to obtain the corresponding device experience score.
  • The set experience scores and the device experience score are matched to determine the m data sets corresponding to the receiving end, where m is a natural number.
  • The sending end also calculates a user experience score for the audio stream to obtain a corresponding audio experience score, and matches the audio experience score against the device experience score; if the matching succeeds, the audio stream is determined to be the first to-be-projected data corresponding to the application to be projected.
  • That is, the to-be-projected data is first divided into data sets; then, according to the impact of each data set on user experience and the impact of each receiving end on user experience, the data sets corresponding to each receiving end are selected. As a result, the data projected to each receiving end is the data that provides the better user experience, which ensures the final user experience.
  • the first to-be-projected data further includes: to-be-notified data and/or somatosensory data.
  • the embodiments of the present application can support screen projection of more types of user experience design data, so that the screen projection effect is better, the function richness is improved, and the function is more flexible.
  • the twelfth possible implementation manner of the fourth aspect includes:
  • When the display state of a projected user interface control changes,
  • the sending end obtains the changed attribute data of the user interface control and sends the changed attribute data to the corresponding receiving end.
  • The receiving end then updates the display state of the user interface control in the display interface according to the received attribute data.
  • the embodiments of the present application can implement real-time update of the display state of the user interface controls, and ensure the real-time performance of screen projection.
  • the thirteenth possible implementation manner of the fourth aspect includes:
  • When the display status of a projected dynamic picture changes, the sending end obtains the new picture data corresponding to the dynamic picture and sends it to the corresponding receiving end.
  • After receiving the picture data, the receiving end updates the display status of the corresponding dynamic picture.
  • the embodiments of the present application can realize real-time update of the dynamic picture display state, and ensure the real-time performance of screen projection.
  • the fourth aspect is a system solution corresponding to the above-mentioned first aspect and the third aspect. Therefore, the beneficial effects of the fourth aspect can be referred to the relevant descriptions in the above-mentioned first aspect and the third aspect, which will not be repeated here.
  • the fifth aspect of the embodiments of the present application provides a screen projection device, including:
  • The data acquisition module is used to acquire the real-time interface of the application to be projected and the device information of one or more receiving ends when the projection instruction is acquired, and to obtain, from the real-time interface according to the device information, the first to-be-projected data corresponding to each receiving end,
  • where the first to-be-projected data is a video stream, an audio stream and/or a user interface control.
  • the data sending module is used to send the first data to be projected to the corresponding receiving end, so that each receiving end generates a corresponding display interface according to the received first data to be projected.
  • the fifth aspect is a device solution corresponding to the above-mentioned first aspect. Therefore, the beneficial effects of the fifth aspect can be referred to the related description in the above-mentioned first aspect, which will not be repeated here.
  • a sixth aspect of the embodiments of the present application provides a screen projection device, including:
  • The data acquisition module is used to acquire the real-time interface of the application to be projected and the device information of one or more receiving ends when the projection instruction is acquired, and to obtain, from the real-time interface according to the device information, the first to-be-projected data corresponding to each receiving end,
  • where the first to-be-projected data is a video stream, an audio stream and/or a user interface control.
  • the interface generation module is configured to generate a display interface corresponding to each of the receiving ends based on the first to-be-projected data, and perform video encoding on the display interface to obtain a corresponding real-time video stream.
  • the video sending module is configured to send the real-time video stream to the corresponding receiving end, so that the receiving end decodes and plays the received real-time video stream.
  • the sixth aspect is a device solution corresponding to the above-mentioned second aspect. Therefore, the beneficial effects of the sixth aspect can be referred to the related description in the above-mentioned second aspect, which will not be repeated here.
  • a seventh aspect of the embodiments of the present application provides a terminal device.
  • the terminal device includes a memory and a processor.
  • the memory stores a computer program that can run on the processor.
  • When the processor executes the computer program, the terminal device is caused to implement the steps of the screen projection method described in any one of the above-mentioned first aspect, the steps of the screen projection method described in any one of the above-mentioned second aspect, or the steps of the screen projection method described in any one of the above-mentioned third aspect.
  • An eighth aspect of the embodiments of the present application provides a computer-readable storage medium that stores a computer program, where, when the computer program is executed by a processor, a terminal device is caused to implement the screen projection method described in any one of the above-mentioned aspects.
  • The ninth aspect of the embodiments of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the screen projection method described in any one of the above-mentioned first aspect or the screen projection method described in the above-mentioned second aspect.
  • FIG. 1A is a schematic structural diagram of a mobile phone provided by an embodiment of the present application.
  • FIG. 1B is a software structure block diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 2A is a schematic flowchart of a screen projection method provided by an embodiment of the present application.
  • FIG. 2B is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • Figure 2C is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • Figure 2D is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • Figure 2E is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2F is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2G is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2H is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2I is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • Fig. 5 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • Terminal equipment manufacturers, automobile manufacturers, and third-party application developers design the interface system corresponding to the terminal equipment system in the car in advance.
  • When in use, the user connects the terminal device to the in-vehicle head unit and can interact with the terminal device through the interface system on the head unit, thereby realizing screen projection from the terminal device.
  • screen mirroring will send all the content on the screen of the sending end to the receiving end.
  • However, some content on the screen may not be content that the user wants to project, such as sensitive information displayed on the screen.
  • the mirror image projection is not friendly to the receiving end with a smaller display screen. For example, if the screen of a mobile phone is mirrored to a smart watch, since the display screen of the smart watch is generally small, the user cannot normally view the content of the screen on the smart watch. Therefore, the implementation mode 1 has a single function and extremely low flexibility, and cannot meet the individual needs of users.
  • In implementation mode 2, file-based cross-terminal-device delivery can only perform video encoding and decoding of the file for screen projection, and the user cannot project other content of the application interface. Therefore, implementation mode 2 has fixed display content, a single function, and low flexibility, and cannot meet the individual needs of users.
  • The terminal device manufacturer formulates the template rules of the interface system, and the third-party application developer fills in the user interface (UI) controls or video streams according to the template rules.
  • In the embodiments of the present application, the user can select the application program that needs to be projected according to actual needs.
  • On the basis of the selected application program, interface recognition and UI control analysis of the application program are first carried out.
  • Then the UI controls are matched according to the actual device conditions of each receiving end, and the UI controls appropriate for each receiving end are determined.
  • After the matched UI controls are typeset and synthesized, the receiving end displays the synthesized content.
  • In this way, the user can select the content of the screen projection according to his or her actual needs, and the projected UI controls can be adaptively selected, laid out, and finally projected according to the actual situation of each receiving end.
  • Both the sending end and the receiving end in the embodiments of the present application may be terminal devices such as mobile phones, tablet computers, and wearable devices.
  • The specific terminal device types of the sending end and the receiving end are not limited here, and can be determined according to actual scenarios. For example, when a mobile phone is used to project a screen to a smart watch and a TV in an actual scene, the mobile phone is the sending end, and both the smart watch and the TV are receiving ends.
  • FIG. 1A shows a schematic structural diagram of the mobile phone 100.
  • the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, Audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone interface 170D, sensor module 180, buttons 190, motor 191, indicator 192, camera 193, display screen 194, SIM card interface 195 and so on.
  • The sensor module 180 can include a gyroscope sensor 180A, an acceleration sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an ambient light sensor 180E, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, and a touch sensor 180K (of course, the mobile phone 100 may also include other sensors, such as a pressure sensor, a distance sensor, and a bone conduction sensor, which are not shown in the figure).
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the mobile phone 100.
  • the mobile phone 100 may include more or fewer components than those shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves the efficiency of the system.
  • the processor 110 may run the screen projection method provided in the embodiment of the present application, so as to enrich the screen projection function, improve the flexibility of the screen projection, and improve the user experience.
  • The processor 110 may include different devices. For example, when a CPU and a GPU are integrated, the CPU and the GPU can cooperate to execute the screen projection method provided in the embodiment of the present application. For example, in the screen projection method, part of the algorithm is executed by the CPU and another part is executed by the GPU, so as to obtain faster processing efficiency.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • The display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the mobile phone 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the display screen 194 may be used to display information input by the user or information provided to the user, as well as various graphical user interfaces (GUI).
  • the display 194 may display photos, videos, web pages, or files.
  • the display 194 may display a graphical user interface.
  • the graphical user interface includes a status bar, a hidden navigation bar, time and weather widgets, and application icons, such as browser icons.
  • the status bar includes the name of the operator (for example, China Mobile), mobile network (for example, 4G), time, and remaining power.
  • the navigation bar includes a back button icon, a home button icon, and a forward button icon.
  • the status bar may also include a Bluetooth icon, a Wi-Fi icon, an external device icon, and the like.
  • the graphical user interface may also include a Dock bar, and the Dock bar may include commonly used application icons and the like.
  • the display screen 194 may be an integrated flexible display screen, or a spliced display screen composed of two rigid screens and a flexible screen located between the two rigid screens.
  • the processor 110 may control the external audio output device to switch the output audio signal.
  • the camera 193 (a front camera or a rear camera, or a camera can be used as a front camera or a rear camera) is used to capture still images or videos.
  • the camera 193 may include photosensitive elements such as a lens group and an image sensor, where the lens group includes a plurality of lenses (convex lens or concave lens) for collecting light signals reflected by the object to be photographed and transmitting the collected light signals to the image sensor .
  • the image sensor generates an original image of the object to be photographed according to the light signal.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the mobile phone 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • The storage program area can store the operating system, the codes of application programs (such as a camera application and a WeChat application), and so on.
  • the data storage area can store data created during the use of the mobile phone 100 (for example, images and videos collected by a camera application) and the like.
  • the internal memory 121 may also store one or more computer programs 1310 corresponding to the screen projection method provided in the embodiment of the present application.
  • The one or more computer programs 1310 are stored in the aforementioned memory and configured to be executed by the one or more processors 110. The one or more computer programs 1310 include instructions, and may include an account verification module 2111 and a priority comparison module 2112.
  • Among them, the account verification module 2111 is used to authenticate the system authentication accounts of other terminal devices in the local area network; the priority comparison module 2112 may be used to compare the priority of the audio output request service with the priority of the current output service of the audio output device.
  • the state synchronization module 2113 can be used to synchronize the device state of the audio output device currently connected by the terminal device to other terminal devices, or synchronize the device state of the audio output device currently connected by other devices to the local.
  • the processor 110 may control the sending end to process the projection data.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the code of the screen projection method provided in the embodiment of the present application can also be stored in an external memory.
  • the processor 110 can run the screen projection method code stored in the external memory through the external memory interface 120, and the processor 110 can control the sending end to process the projection data.
  • the function of the sensor module 180 is described below.
  • the gyroscope sensor 180A can be used to determine the movement posture of the mobile phone 100.
  • In some embodiments, the angular velocity of the mobile phone 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyroscope sensor 180A.
  • the gyro sensor 180A can be used to detect the current motion state of the mobile phone 100, such as shaking or static.
  • the gyroscope sensor 180A can be used to detect folding or unfolding operations on the display screen 194.
  • the gyroscope sensor 180A may report the detected folding operation or unfolding operation as an event to the processor 110 to determine the folding state or unfolding state of the display screen 194.
  • The acceleration sensor 180B can detect the magnitude of the acceleration of the mobile phone 100 in various directions (generally three axes), and can also be used to detect the current motion state of the mobile phone 100, such as shaking or static. When the display screen in the embodiment of the present application is a foldable screen, the acceleration sensor 180B can be used to detect folding or unfolding operations on the display screen 194. The acceleration sensor 180B may report the detected folding operation or unfolding operation as an event to the processor 110 to determine the folding state or unfolding state of the display screen 194.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the mobile phone emits infrared light through light-emitting diodes.
  • Mobile phones use photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the phone. When insufficient reflected light is detected, the mobile phone can determine that there is no object near the mobile phone.
  • the proximity light sensor 180G can be arranged on the first screen of the foldable display screen 194, and the proximity light sensor 180G can detect the first screen according to the optical path difference of the infrared signal.
  • the gyroscope sensor 180A (or the acceleration sensor 180B) may send the detected motion state information (such as angular velocity) to the processor 110.
  • the processor 110 determines whether it is currently in the hand-held state or the tripod state based on the motion state information (for example, when the angular velocity is not 0, it means that the mobile phone 100 is in the hand-held state).
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the mobile phone 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • The touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the mobile phone 100, which is different from the position of the display screen 194.
  • the display screen 194 of the mobile phone 100 displays a main interface, and the main interface includes icons of multiple applications (such as a camera application, a WeChat application, etc.).
  • the display screen 194 displays an interface of the camera application, such as a viewfinder interface.
  • the wireless communication function of the mobile phone 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the mobile phone 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied on the mobile phone 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • The mobile communication module 150 can also be used to exchange information with other terminal devices, that is, to send screen projection related data to other terminal devices; or the mobile communication module 150 can be used to receive a screen projection request and encapsulate the received screen projection request into a message in a specified format.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide wireless communication solutions applied on the mobile phone 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the wireless communication module 160 is configured to establish a connection with the receiving end, and display the projected content through the receiving end. Or the wireless communication module 160 may be used to access the access point device, send a message corresponding to a screen projection request to other terminal devices, or receive a message corresponding to an audio output request sent from another terminal device.
  • the mobile phone 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the mobile phone 100 can receive the key 190 input, and generate key signal input related to the user settings and function control of the mobile phone 100.
  • the mobile phone 100 can use the motor 191 to generate a vibration notification (for example, an incoming call vibration notification).
  • the indicator 192 in the mobile phone 100 can be an indicator light, which can be used to indicate the charging status, power change, and can also be used to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 in the mobile phone 100 is used to connect to the SIM card.
  • the SIM card can be connected to and separated from the mobile phone 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195.
  • the mobile phone 100 may include more or less components than those shown in FIG. 1A, which is not limited in the embodiment of the present application.
  • the illustrated mobile phone 100 is only an example, and the mobile phone 100 may have more or fewer parts than shown in the figure, may combine two or more parts, or may have a different part configuration.
  • the various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the software system of the terminal device can adopt a layered architecture, event-driven architecture, micro-core architecture, micro-service architecture, or cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to exemplify the software structure of the terminal device.
  • Fig. 1B is a software structure block diagram of a terminal device according to an embodiment of the present invention.
  • The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as phone, camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, and so on.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the telephone manager is used to provide the communication function of the terminal device. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • the notification manager can also be a notification that appears in the status bar at the top of the system in the form of a chart or a scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window.
  • For example, prompting text information in the status bar, sounding a prompt tone, vibrating the terminal device, flashing an indicator light, and so on.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • The media library can support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • FIG. 2A shows an implementation flowchart of the screen projection method provided in Embodiment 1 of the present application, and the details are as follows:
  • the sending end receives a screen projection instruction, and obtains an application to be screened.
  • When receiving the screen projection instruction, the sending end will start the screen projection function and execute the operation of S101.
  • the screen projection instruction may be input by the user or a third-party device, or may be actively generated by the sending end itself. The specific needs to be determined by the actual application scenario.
  • the application program (hereinafter referred to as the application) is used as the object for screen projection. Therefore, before starting to cast the screen, the sender must first confirm the application that needs to be cast this time.
  • the following methods can be used to confirm the application to be screened:
  • After the sending end starts its screen projection function, the user selects one or more applications to be projected on the sending end according to actual needs.
  • the sending end determines the specific application that needs to be screened according to the selection instruction input by the user, so as to meet the user's personalized screen projection requirements. For example, refer to Figure 2B.
  • Assuming that the sending end is a mobile phone, when the mobile phone activates the screen projection function, the "screen projection application selection" interface will be opened.
  • The user can freely select one or more applications to be projected on the "screen projection application selection" interface and click to confirm. For example, the user can select only the "Map" application for projection, or select both the "Music Player" and "Map" applications for projection at the same time.
  • The technician or the user pre-sets one or more applications for default projection.
  • When the sending end activates its screen projection function, the default projection application is set as the application for this projection.
  • the user does not need to select the application every time, which makes the operation more convenient. For example, you can also refer to Figure 2B, assuming that the sender is a mobile phone.
  • The user can manually enter the "screen projection application selection" interface before starting the screen projection function and select the default applications.
  • When the screen projection function is subsequently started, the application to be projected can be selected according to the default applications set by the user.
  • Method 3 allows users to more flexibly control the application situation of each screen projection.
  • the sender selects one or more applications to be screened according to certain preset rules.
  • the technician can set some application selection rules in advance, for example, it can be set to select all applications that are running and support screen projection as applications to be screened. After the screen projection function is activated, the application to be screened is automatically selected according to the preset rule.
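  • As a hedged illustration of such a preset rule, the following sketch selects all running applications that declare screen projection support; the application descriptors and names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AppInfo:
    name: str
    is_running: bool
    supports_projection: bool

def select_apps_to_project(installed_apps):
    """Preset rule: pick every running application that supports projection."""
    return [app for app in installed_apps if app.is_running and app.supports_projection]

# Hypothetical example
apps = [
    AppInfo("Music Player", is_running=True, supports_projection=True),
    AppInfo("Map", is_running=True, supports_projection=True),
    AppInfo("Browser", is_running=False, supports_projection=True),
]
print([a.name for a in select_apps_to_project(apps)])  # ['Music Player', 'Map']
```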
  • the embodiment of the present application does not limit the development method of the screen projection function, and can be operated by a technician according to the actual situation.
  • the screen projection function can be built into the operating system of the sending end as a system function.
  • For some terminal devices, it is difficult to build in the screen projection function due to technical and cost constraints.
  • the screen projection function may be set in a screen projection application. By installing the screen projection application to the sending end, the sending end can have the screen projection function in the embodiment of the present application.
  • the startup mode of the screen projection function can be divided into two types: passive activation and active activation of the sending end, which are described as follows:
  • Passive start refers to the screen projection command input by the user or a third-party device at the sending end, which triggers the start of the screen projection function.
  • the user manually starts the screen projection function of the sending end, or the third-party device triggers the screen projection function of the sending end by sending a start instruction to the sending end.
  • Active start means that the sending end itself actively generates a screen projection instruction to start the screen projection function. For example, the user can set a time point at the sending end to automatically start the screen projection function, such as 8 o'clock every evening. At this time point, the sending end actively generates a screen projection instruction and turns on the screen projection function.
  • the user can select any of the above methods to activate the screen projection function of the sender according to actual needs.
  • S102 The sending end recognizes the real-time interface of the application to be projected, and judges whether it is a preset interface.
  • an application may have a very large number of interfaces. For example, some music players not only have a music playing interface and a music list interface, but also a song information browsing interface, an artist information browsing interface, a comment browsing interface, an advertisement interface, and a web browsing interface. In actual applications, it is found that too many application interface projections will affect the normal use of users and reduce user experience. For example, some advertising interfaces on the screen may affect the normal viewing of the content of the screen application by the user.
  • the interface in the application is first divided into two types: a basic interface (ie, a preset interface) and a non-basic interface.
  • the basic interface mainly refers to some core interfaces in the application, including some interfaces that contain basic functions or key functions.
  • For example, the playing interface in a music player and the navigation interface in a map. It can also include some interfaces that have a greater impact on the user experience, such as the song information interface.
  • the details can be divided by technicians according to actual needs.
  • the division of the basic interface can be based on a single application as a unit.
  • each application has a clear corresponding basic interface and non-basic interface. It can also be divided based on application types.
  • For example, the basic interfaces of all music players can be uniformly set to include the music playing interface and the music list interface.
  • In the embodiment of the present application, when performing screen projection operations, operations such as UI control analysis and matching will be performed on the basic interface to ensure the screen projection effect of the basic interface. Therefore, after the application to be projected is determined in S101, the embodiment of the present application will perform real-time interface identification of the application to be projected, and determine whether the real-time interface is the basic interface of the application to be projected.
  • the specific basic interface recognition method is not limited here, and can be set by the technicians according to actual needs.
  • For an application to be projected whose package name cannot be obtained, the basic interface corresponding to the application to be projected cannot be directly confirmed at this time.
  • Therefore, a recognition model that can recognize the application type through the display interface is pre-trained. The recognition model is then used to recognize the real-time interface to obtain the application type of the application to be projected. According to the application type, the basic interface corresponding to the application to be projected is found, and it is then judged whether the real-time interface is the basic interface corresponding to the application to be projected.
  • For example, assuming that the real-time interface of the application to be projected is a music playing interface, it can be identified through the recognition model that the application to be projected is a music player type application.
  • Assuming that the basic interfaces corresponding to the music player type application are the music playing interface and the music list interface, it is then judged whether the real-time interface belongs to the music playing interface or the music list interface. If it does, it means that the real-time interface is a basic interface of the application to be projected.
  • the types and training methods of specific recognition models are not excessively limited in the embodiments of the present application, and can be selected or designed by technicians according to actual needs.
  • a residual network model can be selected as the recognition model in the embodiment of the present application.
  • the training method can be set to collect multiple interface sample images in advance, and mark each interface sample image with a corresponding application type, for example, a music playing interface corresponds to a music player type. On this basis, the interface sample image is used for model training, so as to obtain a recognition model that can recognize the application type according to the interface image.
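  • A minimal sketch of training such a residual-network recognition model is shown below, assuming PyTorch/torchvision are available and the interface sample images are stored in per-application-type folders; the directory name and hyperparameters are illustrative assumptions, not values specified by the present application:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical layout: interface_samples/<application_type>/<image>.png
data = datasets.ImageFolder(
    "interface_samples",
    transform=transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()]),
)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

# Residual network used as the interface-type recognition model
model = models.resnet18(num_classes=len(data.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):  # train until a preset convergence condition is met
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```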
  • For an application to be projected whose package name the sending end can obtain, the basic interface can be confirmed according to the package name; alternatively, the same method described above for applications whose package name cannot be obtained can also be used for identification.
  • For example, assume that for music player applications, the corresponding basic interfaces are set as the music playing interface and the music list interface, and all other interfaces are non-basic interfaces.
  • When the package name of the application A to be projected is obtained as "xx music player", it can be confirmed that the application A to be projected is a music player, and its corresponding basic interfaces are the music playing interface and the music list interface.
  • On this basis, it is identified whether the real-time interface is the music playing interface or the music list interface. If it is one of them, the real-time interface is determined to be a basic interface; if it is neither, the real-time interface is determined to be a non-basic interface.
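  • As a hedged sketch of this package-name-based lookup (the package names and interface identifiers below are hypothetical):

```python
# Hypothetical mapping from application type to its basic (preset) interfaces
BASIC_INTERFACES_BY_TYPE = {
    "music_player": {"music_playing_interface", "music_list_interface"},
    "map": {"navigation_interface"},
}

# Hypothetical mapping from package name to application type
APP_TYPE_BY_PACKAGE = {
    "com.example.xxmusicplayer": "music_player",
}

def is_basic_interface(package_name: str, real_time_interface: str) -> bool:
    """Return True if the real-time interface is a preset basic interface."""
    app_type = APP_TYPE_BY_PACKAGE.get(package_name)
    if app_type is None:
        return False  # fall back to the recognition-model path described above
    return real_time_interface in BASIC_INTERFACES_BY_TYPE.get(app_type, set())

print(is_basic_interface("com.example.xxmusicplayer", "music_playing_interface"))  # True
```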
  • It should be noted that a single running application may be running in the background or in the foreground.
  • When the application to be projected is running in the background, its real-time interface will not be displayed on the screen of the sending end, but will be displayed virtually in the background. Therefore, for an application to be projected running in the background, the real-time interface can be obtained by acquiring the interface virtually displayed in the background of the application to be projected.
  • the sending end extracts UI controls from the real-time interface to obtain UI controls included in the real-time interface.
  • UI controls are extracted from the real-time interface, and the extracted UI controls are screened and filtered.
  • the method for extracting UI controls and attribute data in the real-time interface is not limited here, and can be selected or set by a technician according to actual needs.
  • the extraction operation is as follows:
  • Part of the attribute data of the application's UI controls is recorded in the view node data.
  • UI control recognition is performed on the real-time interface through the control recognition model, and the attribute data corresponding to UI controls that are not recognized is synchronously excluded from the view node data.
  • the UI controls contained in the real-time interface and the attribute data of the UI controls can be obtained, which provides basic data for subsequent matching and re-typesetting of the UI controls.
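  • As a minimal sketch of excluding unrecognized controls from the view node data, assuming the view node data is available as a nested dictionary and that the control recognition model has already produced the set of recognized control identifiers (all field names and identifiers below are hypothetical):

```python
from typing import Optional

def filter_view_nodes(node: dict, recognized_ids: set) -> Optional[dict]:
    """Keep only view nodes whose id was recognized by the control recognition
    model; attribute data of unrecognized UI controls is excluded."""
    children = [
        kept
        for child in node.get("children", [])
        if (kept := filter_view_nodes(child, recognized_ids)) is not None
    ]
    if node.get("id") in recognized_ids or children:
        kept_node = {k: v for k, v in node.items() if k != "children"}
        kept_node["children"] = children
        return kept_node
    return None

# Hypothetical view node data for a music playing interface
tree = {
    "id": "root",
    "children": [
        {"id": "play_button", "bounds": [0, 0, 96, 96], "children": []},
        {"id": "ad_banner", "bounds": [0, 600, 1080, 760], "children": []},
    ],
}
print(filter_view_nodes(tree, {"play_button"}))  # the ad_banner node is excluded
```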
  • the model type and training method of the control recognition model are not limited here, and can be selected or set by a technician according to actual needs.
  • the R-CNN model, Fast-R-CNN model or YOLO model can be selected as the control recognition model.
  • the training method is: taking multiple application interface images containing UI controls as sample data, and training the control recognition model to recognize the UI control area until the preset convergence condition is met, and the model training is completed.
  • the residual network model can be used as the control recognition model.
  • the training method is: pre-cutting the image of the UI control on the application interface image, marking the control type corresponding to each UI control image, and using the obtained UI control image as sample data. Perform UI control image recognition training on the control recognition model based on the sample data until the preset convergence conditions are met, and the model training is completed.
  • the residual network model can be used as the control recognition model.
  • the sample data is the UI control image obtained by performing image cropping on the application interface image. Due to the background interference of the parent node, there will often be certain noise data in the sample data, for example, refer to part (a) in FIG. 2C. This will have a certain impact on the recognition effect of the final control recognition model.
  • the training method is: based on the interface provided in the sending end system in advance, the UI control is drawn independently by drawing instructions. At this time, a UI control image without background noise can be obtained, for example, refer to part (b) in FIG. 2C. At the same time, mark the control type corresponding to each UI control image, and use the obtained image corresponding to each UI control as sample data. Perform UI control image recognition training on the control recognition model based on the sample data until the preset convergence conditions are met, and the model training is completed.
  • the model training may further include: taking a minimum effective pixel range for the drawn UI control image.
  • the UI control image takes the minimum effective pixel range, which refers to removing the pixels outside the rectangular frame of the UI control to realize the elimination of white space around the UI control image.
  • In FIG. 2C, part (b) is the UI control image obtained after performing the minimum-effective-pixel-range processing on part (a).
  • the interface provided in the sending end system can be used to draw the images of each UI control in the real-time interface. Then use the drawn UI control image as input data for model processing and recognition. Wherein, before inputting the control recognition model, the above-mentioned processing of taking the minimum effective pixel range can be performed on the UI control image to enhance the accuracy of recognition.
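  • A minimal sketch of taking the minimum effective pixel range is given below, assuming the drawn UI control image is available as a grayscale array whose blank surroundings are (near-)white background pixels; the threshold is an illustrative assumption:

```python
import numpy as np

def crop_to_effective_pixels(img: np.ndarray, background: int = 255) -> np.ndarray:
    """Take the minimum effective pixel range: remove rows and columns that
    contain only background pixels around the drawn UI control."""
    mask = img < background            # non-background pixels
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if not rows.any():                 # blank image, nothing to crop
        return img
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]

# A 6x6 white canvas with a 2x2 dark control in the middle
canvas = np.full((6, 6), 255, dtype=np.uint8)
canvas[2:4, 2:4] = 0
print(crop_to_effective_pixels(canvas).shape)  # (2, 2)
```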
  • the embodiment of the application does not limit the specific control recognition model type and training method to be used too much.
  • Technicians can choose one of the above three methods to train the control recognition model according to actual needs. For example, to eliminate the influence of background noise in the UI control image and improve the recognition effect of the control recognition model, method 3 can be selected to train the control recognition model. However, if drawing of the UI control image cannot be achieved, for example, the cooperation of the system interface of the sending end cannot be obtained, method 1 or method 2 can be selected for processing. At the same time, technicians can also select or design other methods to train and obtain the required control recognition model.
  • the real-time interface of the application to be screened may not be a basic interface.
  • When the real-time interface is a non-basic interface, in order to ensure the user's human-computer interaction operation and experience, any one of the following methods can be selected for processing:
  • any of the above methods can be used for display processing.
  • When it is detected in S102 that the real-time interface is the basic interface, the embodiment of the present application will continue to perform the operation of S103.
  • S1041: The sending end obtains the audio stream and the video stream of the application to be projected, and obtains the device information of each receiving end. Based on the device information, n user experience design elements corresponding to each receiving end are matched from the UI controls, the audio stream, and the video stream, where n is a natural number. If the n user experience design elements include UI controls, S105 is executed.
  • the number of receiving ends may be one or an integer greater than one.
  • the receiving end can be devices and screens that include independent systems or non-independent systems.
  • peripheral terminal devices may include a head-up display, a liquid crystal instrument panel, a vehicle central control screen, etc., all of which can be used as the receiving end in the embodiments of the present application.
  • the sending end when the sending end is a mobile phone, the receiving end may include: a tablet computer, a personal computer, a TV, a car machine, a smart watch, a headset, a stereo, a virtual reality device, and an augmented reality device at the same time. Or according to actual needs, you can add more other devices.
  • User experience (UX) design elements, also known as human-computer interaction elements, refer to data elements in terminal devices that can affect the user experience.
  • In the embodiment of the present application, the UX elements may include the following:
  • Audio stream refers to the audio stream of the media type, such as music played by a music player.
  • the video stream can be mirrored video data or video data in a specific layer.
  • the data to be notified refers to the service data in use on the terminal device.
  • the information of the song being played (song name, artist and album pictures, etc.), the information in the process of the call (contacts and phone numbers, etc.), and push data of some applications, etc.
  • the data to be notified will generally be pushed and displayed in the status bar of the mobile phone, or pushed and displayed in the form of a pop-up window.
  • the data to be notified may exist independently of the application to be screened, for example, it may be a warning notification issued by the sending end system.
  • In practical applications, an application may contain many UX elements, and different receiving ends may have different hardware configurations (such as display screen size and sound quality) and interaction methods.
  • These hardware configurations and interaction methods will affect the end user's viewing of and interaction with the UX elements on the receiving end, thereby affecting the user experience. Therefore, in the embodiment of the present application, all the content of the application to be projected will not be directly delivered to the receiving end; instead, appropriate UX elements are adaptively matched for delivery according to the device information of the receiving end.
  • the specific adaptive matching method is not too limited here, and can be selected or set by the technician according to actual needs.
  • In practical applications, the number of UX elements successfully matched at each receiving end may be small, such as 0, or large, such as all UX elements being successfully matched. Therefore, in the embodiment of the present application, the value of n is determined by the actual matching situation.
  • the first data to be projected and the second data to be projected both refer to UX elements.
  • the second to-be-projected data refers to all the extracted UX elements.
  • the first data to be projected refers to the UX elements filtered out after matching the receiving end.
  • the adaptive matching operation on UX elements includes:
  • The video stream and the UI controls contained in the real-time interface are divided into one or more element sets, where each element set includes the video stream or at least one UI control, and the element sets do not intersect with each other.
  • the user experience score is calculated for each element set, and the corresponding set experience score is obtained.
  • the user experience score calculation is performed on the device information of the receiving end to obtain the corresponding device experience score.
  • The set experience scores and the device experience scores are matched, and m element sets corresponding to the receiving end are determined, where m is a natural number.
  • the user experience score is a quantitative score value for the influence of the UX element or the terminal device on the user experience.
  • the user experience score of the UX element can also be referred to as the data interaction score of the UX element.
  • the specific user experience score calculation method is not limited here, and can be selected or set by the technician according to actual needs.
  • a comprehensive score can be made from three dimensions: visual effect dimension, sound effect dimension, and interaction complexity.
  • the visual effect dimension can be evaluated by using the size of the UX element and the display screen size of the terminal device (or the available display screen size), such as setting corresponding experience scores for different sizes.
  • the audio stream quality and the sound quality of the terminal device can be selected for evaluation, such as setting corresponding experience scores for different quality and sound quality.
  • the sound quality of the terminal device can be characterized by the distortion and frequency response of the audio output of the terminal device.
  • the interaction complexity can also be evaluated from the size of the UX element and the size of the display screen of the terminal device.
  • the user experience scores corresponding to UX elements or terminal devices can be obtained by summing the experience scores of each dimension.
  • For UI controls, the processing and calculation can be performed according to the attribute data of the UI controls.
  • the entire element set can be used as a whole to calculate the user experience score.
  • the experience scores corresponding to the size of each UX element in the element set can be summed to obtain the experience scores corresponding to the element set.
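  • As a hedged illustration of this kind of scoring, the following sketch computes per-element, per-set, and per-device experience scores; the dimensions, thresholds, and weights are illustrative assumptions and not values specified by the present application:

```python
def element_experience_score(element: dict) -> float:
    """Score one UX element on visual, sound, and interaction dimensions
    (thresholds and weights here are illustrative assumptions)."""
    visual = 2.0 if element.get("width", 0) * element.get("height", 0) > 200 * 200 else 1.0
    sound = 1.0 if element.get("needs_audio") else 0.0
    interaction = 1.0 if element.get("interactive") else 0.0
    return visual + sound + interaction

def set_experience_score(element_set: list) -> float:
    """Set experience score = sum of the scores of the elements in the set."""
    return sum(element_experience_score(e) for e in element_set)

def device_experience_score(device: dict) -> float:
    """Device experience score derived from the receiving end's device information."""
    screen = 2.0 if device.get("screen_inches", 0) >= 7 else 1.0
    audio = 1.0 if device.get("has_speaker") else 0.0
    return screen + audio

# Hypothetical element set and receiving end
controls = [{"width": 96, "height": 96, "interactive": True},
            {"width": 1080, "height": 600, "needs_audio": True}]
print(set_experience_score(controls), device_experience_score({"screen_inches": 10, "has_speaker": True}))
```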
  • UX elements can be divided into two categories according to whether they need to be displayed on the receiving end.
  • One is the UX elements that need to be displayed, such as video streams and UI controls.
  • the other is the UX element that does not need to be displayed, such as audio streaming.
  • these UX elements that need to be displayed will affect the user's visual effects and human-computer interaction effects, thereby affecting the user experience. Therefore, in the embodiment of the present application, the UX elements that need to be displayed are divided into one or more element sets. And use each element set as the smallest projection unit to match and project the receiving end.
  • the number of UX elements contained in each element set is not limited here, and can be set by technicians according to actual needs, as long as it is not an empty set.
  • the number of UX elements contained in each element set is 1, it means that the screen is projected in a unit of a single UX element.
  • the screen projection can be realized with a single type of UX element as a unit.
  • the UX elements contained in the real-time interface can be divided into element set A and element set B.
  • all elements in element set A are control-type UX elements, such as playback controls, fast forward controls, and video switching controls in a video player.
  • the element set B is all non-control UX elements, such as video playback controls, video streaming, and advertising controls.
  • the screen projection can be realized in the unit of the entire real-time interface. That is, each screencast is a complete screencast of all UI controls and video streams of the real-time interface.
  • all UX elements included in the m element sets are UX elements that match the receiving end. Among them, the value of m is determined by the actual scene.
  • For UX elements that do not need to be displayed, such as the audio stream, the corresponding user experience scores can be evaluated separately to determine whether the receiving end is suitable for audio stream output.
  • the calculation method of the user experience score please refer to the above description of the calculation method of the user experience score of the UX element that needs to be displayed, which will not be repeated here.
  • S1042 Obtain the data to be notified of the sending end, and the audio stream and video stream of the application to be cast. Obtain the device information of each receiving end. Based on the device information, n user experience design elements corresponding to the receiving end are matched from UI controls, audio streams, video streams, and data to be notified. Among them, n is a natural number. If the n user experience design elements include UI controls, S105 is executed.
  • S1043 Acquire audio stream, video stream and somatosensory data of the application to be projected. Obtain the device information of each receiving end. Based on the device information, n user experience design elements corresponding to the receiving end are matched from UI controls, audio streams, video streams, and somatosensory data. Among them, n is a natural number. If the n user experience design elements include UI controls, S105 is executed.
  • S1044 Obtain the data to be notified of the sending end, as well as the audio stream, video stream, and somatosensory data of the application to be screened. Obtain the device information of each receiving end. Based on the device information, n user experience design elements corresponding to the receiving end are matched from UI controls, audio streams, video streams, data to be notified, and somatosensory data. Among them, n is a natural number. If the n user experience design elements include UI controls, S105 is executed.
  • Among them, the data to be notified belongs to the UX elements that need to be displayed, and the somatosensory data belongs to the UX elements that do not need to be displayed. Therefore, if the user experience score calculation method described above is used to perform the adaptive matching operation of UX elements:
  • For the data to be notified, it needs to be divided into one or more element sets together with the UI controls and the video stream, and the user experience scores are calculated and matched.
  • For the somatosensory data, it may not be divided into the element sets; instead, like the audio stream, its corresponding user experience score is calculated separately. Because many terminal devices do not support somatosensory output, for somatosensory data, whether the receiving end supports the corresponding somatosensory output can be used as an independent reference dimension of the user experience score.
  • some matching rules between the UX elements and the receiving end may also be preset by a technician or a third-party partner. For example, it can be set that the smart watch only delivers control UI elements, and the TV only delivers video streams and audio streams.
  • the UX elements of each receiving end can be adaptively matched according to the set matching rules.
  • one-by-one matching can be performed from dimensions such as the size of the UX element, visual requirements, interaction complexity, and frequency of use, and the final matching result between the UX element and the receiving end can be confirmed according to the matching results of each dimension.
  • a neural network model can be pre-trained to adaptively match the appropriate UX elements of the terminal device. At this time, there is no need to calculate the user experience scores of each UX element and the receiving end, and the neural network model can complete the matching by itself.
  • the embodiment of the present application does not limit the number of receiving ends that can be screened by a single UX element. Therefore, for a single UX element in the embodiment of the present application, it may fail to match with all receiving ends, resulting in a situation where it is not projected to the receiving end in the end. It is also possible that the match with one or more receiving ends is successful, and the screen is finally projected to one or more different receiving ends.
  • In addition, the number of UX elements delivered to a single receiving end may also be different. For example, in the foregoing example of calculating user experience scores and performing matching, if the adaptive matching rule is set such that, among the element sets whose set experience score is greater than or equal to the device experience score, only the element set with the highest experience score is determined to be a successful match, then each receiving end will correspond to the UX elements in at most one element set (assuming that the audio stream fails to match). If the adaptive matching rule is set such that all element sets whose set experience score is greater than or equal to the device experience score are determined to be successful matches, then each receiving end may correspond to multiple element sets.
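  • As a hedged sketch of these two matching rule variants, assuming the set experience scores and the device experience score have already been computed (the set names and score values below are hypothetical):

```python
def match_element_sets(set_scores: dict, device_score: float, best_only: bool = True):
    """Match element sets to one receiving end.

    set_scores: {element_set_name: set_experience_score}, precomputed.
    best_only=True  -> only the highest-scoring qualifying set is matched (m <= 1).
    best_only=False -> every qualifying set is matched (m may be > 1).
    """
    qualifying = {name: s for name, s in set_scores.items() if s >= device_score}
    if not qualifying:
        return []                                  # m = 0, nothing projected to this end
    if best_only:
        return [max(qualifying, key=qualifying.get)]
    return list(qualifying)

scores = {"set_A_controls": 4.0, "set_B_video": 6.5}                   # hypothetical values
print(match_element_sets(scores, device_score=3.0))                    # ['set_B_video']
print(match_element_sets(scores, device_score=3.0, best_only=False))   # both sets matched
```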
  • the connection and networking between the receiving end and the sending end need to be performed.
  • the networking mode of the receiving end and the transmitting end is not excessively limited, and the technical personnel can select or set according to actual needs.
  • a wired connection method such as USB may be used for connection and networking.
  • the networking can also be performed through wireless connection methods such as Bluetooth and WiFi.
  • a wireless access point (Access Point, AP) networking mode can be selected, and the sending end and the receiving end are placed in the same AP device network.
  • the sending end and the receiving end can communicate with each other through the AP device to realize connection networking.
  • A peer-to-peer (P2P) networking method can also be selected, in which one device among the sending end and the receiving ends serves as a central device to create a wireless network.
  • the central device discovers other non-central devices through Bluetooth broadcasting and scanning, and the non-central devices access the wireless network, and finally form a one-to-many network.
  • the sending end can obtain the device information of the receiving end in an active or passive manner to achieve adaptive matching of UX elements.
  • the active acquisition by the sender means that the sender actively initiates a device information acquisition request to the receiver, and the receiver returns the corresponding device information.
  • Passive acquisition by the sender means that after the connection and networking is completed, the receiver actively sends its own device information to the sender.
  • if the n user experience design elements selected in S1041, S1042, S1043, or S1044 do not include UI controls, the n user experience design elements can be sent directly to the receiving end, and the receiving end outputs them. There is no need to perform the control typesetting and layout operation of S105 in this case.
  • for example, audio streams can be sent to the receiving end for playback. If the filtered user experience design elements contain no elements that need to be displayed, for example when only audio streams are included, the receiving end may not perform any display; a smart speaker, for instance, only outputs the audio stream without displaying any content. In this case, the embodiment of the present application does not need to perform operations related to the display interface, such as UI control typesetting and video stream synthesis.
  • otherwise, if UI controls are included, S105 is executed. S105: The sending end typesets the UI controls in the n user experience design elements according to the device information of the receiving end, to obtain a corresponding control layout file. It is then determined whether the n user experience design elements contain only UI controls: if only UI controls are included, S106 is executed; if the n user experience design elements include other elements in addition to UI controls, S107 is executed.
  • during the foregoing matching, some UI controls in the real-time interface may be discarded. In addition, the screen conditions of the receiving end and the sending end, as well as the actual usage scenarios of the devices, may differ. For example, the screen size of the receiving end may be smaller than that of the sending end, or the space available on the receiving end screen may be smaller than the screen size of the sending end. As another example, the actual application scenario of the car machine (in-vehicle head unit) is inside the user's car, where the requirements on ease of interaction are relatively high. For these reasons, the embodiment of the present application re-typesets and lays out the UI controls.
  • the embodiment of the present application does not limit the specific typesetting layout method, which can be selected or set by a technician according to actual needs.
  • UI control typesetting and layout includes:
  • Step 1 Obtain the size information of the real-time interface, as well as the position information and size information of the UI control in the real-time interface. According to the size information of the real-time interface and the location information and size information of the UI control in the real-time interface, the corresponding distribution map of the UI control in the real-time interface is drawn.
  • Step 2 Identify the first typesetting type corresponding to the distribution map.
  • Step 3 Based on the device information, determine the second typesetting type corresponding to the first typesetting type.
  • the layout of the UI controls in the application interface is divided into multiple types in advance (may also be referred to as the layout style of the UI controls).
  • the specific classification rules are not limited here, and can be selected or set by the technicians according to the actual situation. For example, it may include the typesetting type of the upper, middle and lower structure and the typesetting of the left, middle and right structure. Among them, the upper, middle and lower structure will divide the application interface into three areas from top to bottom.
  • the UI controls are distributed in three parts: the upper, the middle and the lower in the application interface as a whole.
  • for example, UI controls on a mobile phone desktop generally follow the top-middle-bottom typesetting type.
  • the left-middle-right structure divides the application interface into three areas from left to right, and the UI controls in the application interface as a whole are distributed according to the left, middle, and right parts. Depending on the interface, the UI control layout may be based on the left-middle-right structure, or may be carried out according to the top-middle-bottom structure.
  • in addition, on the premise of meeting the user's human-computer interaction requirements, the embodiments of the present application select and set one or more suitable typesetting types for each kind of terminal device. For example, the left-middle-right typesetting type is generally more suitable for the car machine, so the left-middle-right typesetting type can be set as the typesetting structure corresponding to the car machine. Each of these typesetting types also records the relative position information and relative size information of the UI controls in the application interface.
  • a part of the UI controls can be set in the upper area, and the relative size can be set to half the size of the upper area.
  • a typesetting type mapping relationship is set for each terminal device. That is, for various types of typesetting that may appear in the real-time interface, the type of typesetting after the reorganization of the UI controls corresponding to the terminal device is set. Among them, the reorganized typesetting types are all suitable typesetting types for the terminal device set above.
  • the embodiment of the present application first draws a distribution map of the UI controls in the real-time interface based on the size information of the real-time interface, and the position information and size information of the UI controls in the real-time interface. Since the distribution map records the size and position of the UI controls in the real-time interface, by identifying the distribution map, the actual typesetting type (ie, the first typesetting type) corresponding to the UI controls in the real-time interface can be determined.
  • then, the typesetting type mapping relationship corresponding to the receiving end is determined according to the device information of the receiving end, and this mapping relationship is queried to determine the target typesetting type (i.e., the second typesetting type) corresponding to the UI controls. In addition, the distribution map may be reduced proportionally; for example, a distribution map with a size of 2240×1080 can be reduced to 56×27, and step 2 is then performed on the reduced distribution map (a minimal sketch of this proportional reduction is given below).
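  • The following is a minimal Kotlin sketch of such a proportional reduction of the distribution map. Representing the map as a Boolean occupancy grid and using a fixed reduction factor of 40 (2240×1080 to 56×27) are assumptions made for illustration.

```kotlin
// Minimal sketch: proportionally shrink a control distribution map before
// typesetting-type recognition. The Boolean-grid representation and the
// fixed factor of 40 are illustrative assumptions.
fun shrinkDistributionMap(map: Array<BooleanArray>, factor: Int): Array<BooleanArray> {
    val srcH = map.size
    val srcW = map[0].size
    val dstH = srcH / factor
    val dstW = srcW / factor
    return Array(dstH) { y ->
        BooleanArray(dstW) { x ->
            // A reduced cell is "occupied" if any source cell in its block is occupied.
            var occupied = false
            outer@ for (sy in y * factor until (y + 1) * factor) {
                for (sx in x * factor until (x + 1) * factor) {
                    if (map[sy][sx]) { occupied = true; break@outer }
                }
            }
            occupied
        }
    }
}
```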
  • Step 4 Based on the second typesetting type, obtain relative position information and relative size information corresponding to each UI control in the receiving end display interface. And based on the view node data of the real-time interface, and the relative position information and relative size information of each UI control, a corresponding control layout file is generated.
  • the relative position information and relative size information of each UI control in the interface after typesetting and reorganization can be determined.
  • the embodiment of the present application will further obtain more attribute data of each UI control through the view node data of the real-time interface. For example, attribute data such as the rotation angle of the UI control and whether it is operable.
  • the relative position information, relative size information, and newly acquired attribute data of the UI control are packaged into the corresponding control layout file. Since various data required for UI control layout are already stored in the control layout file, the subsequent receiving end can reorganize the layout of the UI control according to the control layout file, and draw the UI control after the layout reorganization.
  • the target typesetting type only records the overall relative position of the UI control in the application interface.
  • for example, the left-middle-right typesetting type only records that the UI controls are distributed in the left, middle, and right parts of the application interface. Therefore, a position conversion rule between typesetting types is preset. For example, when mapping from the top-middle-bottom typesetting type to the left-middle-right typesetting type, all UI controls in the upper area are mapped to the left area, all UI controls in the middle area are mapped to the middle area, and all UI controls in the lower area are mapped to the right area. In addition, the filling method of a UI control in each area of the application interface can be preset to determine the accurate relative position information and relative size information of each UI control. For example, in a single area, if there is only one UI control after mapping, the UI control can fill the entire area space; in this case, the relative position and relative size of the area in the application interface become the relative position and relative size of that UI control (a minimal sketch of this conversion and filling rule is given below).
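  • The following is a minimal Kotlin sketch of the preset conversion rule described above: top/middle/bottom areas remapped to left/middle/right areas, with a lone control filling its whole target area. The enum, the RelRect type, and the equal three-way split of the target interface are illustrative assumptions.

```kotlin
// Minimal sketch of a preset position-conversion rule between typesetting types.
enum class Area { TOP, MIDDLE, BOTTOM, LEFT, CENTER, RIGHT }

// Relative rectangle expressed as fractions of the target interface (illustrative type).
data class RelRect(val x: Double, val y: Double, val w: Double, val h: Double)

// Preset mapping between the source (top-middle-bottom) and target (left-middle-right) areas.
val topBottomToLeftRight = mapOf(
    Area.TOP to Area.LEFT,
    Area.MIDDLE to Area.CENTER,
    Area.BOTTOM to Area.RIGHT
)

// Relative bounds of each target area, assuming an equal three-way vertical split.
fun targetAreaBounds(area: Area): RelRect = when (area) {
    Area.LEFT -> RelRect(0.0, 0.0, 1.0 / 3, 1.0)
    Area.CENTER -> RelRect(1.0 / 3, 0.0, 1.0 / 3, 1.0)
    Area.RIGHT -> RelRect(2.0 / 3, 0.0, 1.0 / 3, 1.0)
    else -> error("not a target area: $area")
}

// If a control is the only one mapped into its source area, it fills the whole target area,
// as in the filling rule described above.
fun mapSingleControl(sourceArea: Area): RelRect =
    targetAreaBounds(topBottomToLeftRight.getValue(sourceArea))
```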
  • alternatively, a technician or a third-party partner may pre-set the typesetting and layout of the projected application on each type of receiving end; that is, the control layout file corresponding to the UI controls on the receiving end is set in advance. In this case, the layout of the UI controls only needs to be reorganized directly according to the preset control layout file, and the reorganized UI controls can then be drawn.
  • the UI control layout of each application to be projected can also be performed independently.
  • when the re-typeset interface is generated at the receiving end, it can be generated in the corresponding space area of the screen according to the control layout file of each application to be projected. For example, referring to the example shown in FIG. 2E, assume that four applications, a social application, a music player, a navigation map, and a phone application, need to be projected to the receiving end in FIG. 2E. In this case, it is necessary to determine the space area available for each application on the receiving end screen.
  • the typesetting and layout operations of the UI controls of each application can be performed independently. To draw the interface after typesetting and reorganization, you only need to draw in the corresponding space area of the application.
  • the embodiment of the present application does not excessively limit the layout mode of each application on the receiving end screen when multiple applications are projected to the same receiving end. It can be selected or set by technicians according to actual needs.
  • the screen space area of the receiving end may be equally divided into various applications.
  • S106: The sending end obtains the drawing instructions and layer data corresponding to the n UI controls, and sends the drawing instructions, layer data, and control layout file of the n UI controls to the receiving end. The operation of S108 is then performed by the receiving end.
  • S107: The sending end obtains the drawing instructions and layer data corresponding to the UI controls in the n user experience design elements, and sends the drawing instructions, layer data, and control layout file of these UI controls, together with the element data other than UI controls among the n user experience design elements, to the receiving end. The operation of S109 is then performed by the receiving end.
  • the selected n UX elements may contain only UI controls, or they may also contain UX elements such as video streams, audio streams, and data to be notified. When only UI controls are contained, the corresponding display interface can be generated simply by drawing the re-laid-out UI controls on the receiving end. When other UX elements are also contained, it is necessary to output those UX elements while drawing the re-laid-out UI controls, for example, playing video streams and audio streams and displaying push messages.
  • the drawing instruction, layer data, and control layout file of the UI control are sent to the receiving end together.
  • according to the control layout file, the position and size of each UI control in the application interface of the receiving end are determined, and the outline frame of each UI control is drawn according to the drawing instructions; the content of each UI control is then drawn according to the layer data, thereby completing the drawing of the UI controls after typesetting.
  • the method of obtaining layer data is not limited here, and can be selected or set by the technician. For example, in the Android system, the layer data of the UI control can be obtained through the SurfaceFlinger component.
  • a single UI control may correspond to multiple layers, in order to ensure accurate drawing of the UI control, multiple layers need to be aligned. Therefore, when a single UI control corresponds to multiple layers, the embodiment of the present application will acquire the coordinate relationship between these layers while acquiring the data of these layers. When used, these coordinate relationships will be sent to the receiving end together with the layer data to ensure accurate drawing of the UI controls by the receiving end.
  • for UX elements other than UI controls, drawing instructions, layer data, and control layout files are not required. Therefore, these UX elements can be sent to the receiving end directly.
  • the receiving end outputs the UX elements of these non-UI controls while drawing the UI controls.
  • S108 The receiving end draws a corresponding UI control on its own screen according to the received UI control drawing instruction, layer data, and control layout file, to obtain a corresponding display interface.
  • first, according to the drawing instructions and the control layout file, the outline frame of each UI control is drawn. For example, referring to FIG. 2F, part (a) is the real-time interface of the sender, which contains UI controls such as a title control, a cover control, a singing control, a download control, a comment control, an expansion control, playback-related controls (including a play control, a next-song control, a previous-song control, and a list control), a play-mode control, and a progress bar control. After the outline frames are drawn, part (b) in FIG. 2F is obtained; at this point there is no content in any UI control.
  • after the outline frame of each UI control has been drawn, the content of each UI control is drawn according to the layer data, thereby completing the drawing of the UI controls after typesetting. For example, referring to FIG. 2G, assume that part (b) is the same as part (b) in FIG. 2F above. Part (c) is then the UI controls obtained by drawing the control content on part (b) based on the layer data. At this point, the rendering of the projection interface of the receiving end has essentially been completed, and the receiving end obtains the corresponding display interface.
  • after the display interface is drawn, the display interface may be zoomed according to the usable space area on the screen of the receiving end, so that the display interface is filled into that usable space area.
  • the usable space area on the screen of the receiving end refers to the projection space area provided by the receiving end for the application to be projected. This area is less than or equal to the actual total size of the receiving end screen, and the specific size needs to be determined by the actual application scenario.
  • the embodiment of the application does not strictly limit the specific display interface filling method, which can be selected or set by a technician according to actual needs. For example, it can be set to cover the entire usable space area, or it can be set to keep the proportions of the display interface unchanged and zoom the interface until the display interface occupies the largest possible area within the usable space area (a minimal sketch of this proportional scaling is given below).
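  • The following is a minimal Kotlin sketch of the "keep proportions, maximize area" filling strategy mentioned above. The Size type and the example dimensions are illustrative assumptions.

```kotlin
// Minimal sketch: scale the drawn display interface uniformly so that it occupies
// the largest possible area inside the usable space of the receiving-end screen.
data class Size(val width: Int, val height: Int)

fun fitKeepingAspectRatio(content: Size, usable: Size): Size {
    val scale = minOf(
        usable.width.toDouble() / content.width,
        usable.height.toDouble() / content.height
    )
    return Size(
        (content.width * scale).toInt(),
        (content.height * scale).toInt()
    )
}

fun main() {
    // e.g. a 1080x2240 interface projected into a 1280x720 usable area (illustrative numbers).
    println(fitKeepingAspectRatio(Size(1080, 2240), Size(1280, 720)))
}
```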
  • S109: The receiving end draws the corresponding UI controls on its own screen according to the received drawing instructions, layer data, and control layout file of the UI controls, performs layered, superimposed display of each user experience design element that needs to be displayed, and obtains the corresponding display interface.
  • the embodiment of the present application adopts layered and superimposed display for each UX element.
  • in the embodiment of the present application, the superimposition sequence of the video stream, the UI controls, and the data to be notified is: the UI controls are superimposed on the upper layer of the video stream, and the data to be notified is superimposed on the upper layer of the UI controls. The backgrounds of the layers corresponding to the UI controls and to the data to be notified are all transparent, to ensure that the user can view the video stream normally. UX elements that do not need to be displayed, such as audio streams, can be played directly by the receiving end.
  • for example, assume that a navigation map needs to be projected, and the real-time map in the navigation map is encoded as a video stream. The displayable UX elements that need to be projected in this case include the video stream and pointer controls; refer to FIG. 2I. The embodiment of the present application superimposes the pointer controls on the upper layer of the video stream for display with a transparent background.
  • for example, two layers of the navigation map application can be obtained through the SurfaceFlinger component. One is the layer whose name starts with "SurfaceView-", which corresponds to the map interface of the navigation map; the image of this layer is obtained through the video stream.
  • the other layer corresponds to the UI control command options of the navigation map, and the image of this layer is obtained by drawing commands.
  • the UI control instructions and the video are respectively displayed through a SurfaceView (a view in the Android system), and the two SurfaceViews are placed in the same relative layout (RelativeLayout).
  • the SurfaceView of the UI control instructions is placed on top with its background made transparent, so that the video layer below remains visible (a minimal sketch of this two-SurfaceView layering is given below).
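  • The following is a minimal Kotlin sketch of the two-SurfaceView layering described above: a video surface at the bottom and a transparent surface for UI control drawing on top, both inside one RelativeLayout. It is a generic Android pattern written from the description, not code taken from the patent.

```kotlin
// Minimal Android sketch (illustrative): a video SurfaceView below and a
// transparent SurfaceView for UI control drawing on top, in one RelativeLayout.
import android.content.Context
import android.graphics.PixelFormat
import android.view.SurfaceView
import android.widget.RelativeLayout

private fun fullScreenParams() = RelativeLayout.LayoutParams(
    RelativeLayout.LayoutParams.MATCH_PARENT,
    RelativeLayout.LayoutParams.MATCH_PARENT
)

fun buildProjectionLayout(context: Context): RelativeLayout {
    val root = RelativeLayout(context)

    // Lower layer: the decoded video stream (e.g. the navigation map) is rendered here.
    val videoView = SurfaceView(context)
    root.addView(videoView, fullScreenParams())

    // Upper layer: UI controls are drawn here; the surface is kept transparent
    // so that the video layer underneath remains visible.
    val controlView = SurfaceView(context)
    controlView.setZOrderOnTop(true)
    controlView.holder.setFormat(PixelFormat.TRANSPARENT)
    root.addView(controlView, fullScreenParams())

    return root
}
```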
  • in the embodiment of the present application, the application to be projected on the sending end is identified and split into UI controls, and appropriate UX elements are adaptively matched according to the actual device conditions of the receiving end. The view of the application to be projected can thus be split and recombined into different UX elements according to the actual needs of each receiving end.
  • the embodiment of the present application can also typeset and reorganize the UI control according to the actual situation of the receiving end.
  • the embodiment of the present application supports independent or combined screen projection at the minimum UX element level, without displaying all the information of the real-time interface of the sender. Users can realize more flexible content sharing and interaction on the receiving end device, which improves the user experience and efficiency of using terminal device services across devices.
  • the embodiments of the present application deliver real-time dynamic data, and the UX elements in one interface can be distributed among multiple different receiving terminals.
  • the projection function is rich and the flexibility is higher.
  • the embodiments of the present application can adaptively perform UX element selection, layout, and delivery according to the characteristics of the UX elements and the device characteristics of the receiving end.
  • it can also maximize the retention of application brand characteristics, such as visual element styles.
  • the embodiment of the present application can also implement a one-to-many terminal device screen projection operation, and can be compatible with terminal devices of different operating systems and different system versions. Users can freely set the terminal device objects to be delivered according to their actual needs. This makes the way of screen projection more diversified and flexible, which can meet the needs of more practical scenarios.
  • S105 is a typesetting layout of UI controls.
  • S106 and S107 are the transmission of data required to generate the display interface.
  • S108 and S109 mainly do the generation of the display interface of the receiving end.
  • the sending end has completed the operations of S101 to S107 as the execution subject.
  • the layout of UI controls can also be completed by the receiving end.
  • the generation of the display interface at the receiving end can also be completed by the sending end. Therefore, the execution subject of S105, S108, and S109 can theoretically be changed to a certain extent.
  • the several sets of plans that may be obtained after changing the execution subject are explained as follows:
  • Solution 1 After S105, the sending end does not need to send the acquired drawing instructions, control layout files and layer data to the receiving end. Instead, the sender performs the operation of S108 or S109 by itself. At this time, the sending end can transmit the generated display interface to the receiving end in the form of a video stream. For the receiving end, directly decoding and playing the acquired video stream can realize the display of the projected screen content.
  • the receiving end in Solution 1 omits the operations of UX element drawing and display interface generation, and the receiving end only needs to decode and play the video stream.
  • the screen projection technology in the embodiments of the present application can also be compatible with some terminal devices with weak computing capabilities. For example, for some old-model TVs, the function of projection display can also be realized.
  • Solution 2 After screening out n UX elements in S1041, S1042, S1043, and S1044, the sending end sends the data of each UX element to the receiving end.
  • the receiving end completes the S105 operation of typesetting and laying out the UI controls, and the receiving end itself completes the operations of drawing the UX elements and generating the display interface. The computing resources of the sending end are often relatively limited; especially when the sending end projects to multiple terminal devices, the workload on the sending end is relatively large and the performance impact is more serious. Therefore, in Solution 2, after the sender completes the matching between the UX elements and the receiver, it sends the matched UX elements to the receiver, and the receiving end is responsible for the subsequent operations. Solution 2 thus reduces the workload of the sending end and saves its computing resources. In addition, because video encoding, decoding, and playback have a certain impact on video clarity, Solution 2 and the solution of the embodiment shown in FIG. 2A achieve a better final display effect at the receiving end than Solution 1.
  • Solution 3: Combining Solution 1 and Solution 2 above, after the sending end filters out the n UX elements corresponding to a single receiving end, the sending end determines, according to its own computing resource situation and the device situation of the receiving end, which terminal device performs the typesetting and layout of the UI controls and the generation of the display interface of the receiving end. For example, it can be set as follows: when the sender's own computing resources are sufficient and the receiver's computing resources are insufficient, Solution 1 is selected for processing; when the computing resources of both the sending end and the receiving end are sufficient, the method of the embodiment shown in FIG. 2A is selected for processing; and when the computing resources of the sending end are insufficient but the computing resources of the receiving end are relatively sufficient, Solution 2 is selected for processing (a minimal sketch of this choice is given below).
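  • The following is a minimal Kotlin sketch of this resource-based choice among Solution 1, Solution 2, and the flow of FIG. 2A. The boolean "sufficient resources" flags and the default branch are illustrative assumptions.

```kotlin
// Minimal sketch (illustrative) of choosing where typesetting and interface
// generation are performed, based on the computing resources of both ends.
enum class ProjectionScheme {
    SOLUTION_1_SENDER_RENDERS,   // sender lays out, renders and streams video
    FIG_2A_SPLIT_FLOW,           // sender lays out, receiver draws (FIG. 2A flow)
    SOLUTION_2_RECEIVER_RENDERS  // receiver lays out, draws and composes
}

fun chooseScheme(senderHasResources: Boolean, receiverHasResources: Boolean): ProjectionScheme =
    when {
        senderHasResources && !receiverHasResources -> ProjectionScheme.SOLUTION_1_SENDER_RENDERS
        senderHasResources && receiverHasResources -> ProjectionScheme.FIG_2A_SPLIT_FLOW
        // When the sender's resources are insufficient, fall back to the receiver-side
        // flow; the remaining case is an assumption, not spelled out in the description.
        else -> ProjectionScheme.SOLUTION_2_RECEIVER_RENDERS
    }
```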
  • the selection among the above solutions can be made independently for each receiving end.
  • each scheme can meet the requirements of certain application scenarios, for example, scheme 1 reduces the software and hardware requirements for the receiving end, and can be compatible with more different types of receiving ends.
  • Solution 2 can save the computing resources of the sender, improve the performance of the device at the sender, and at the same time ensure the clarity of the projection screen.
  • Scheme 3 can be compatible with scheme 1 and scheme 2 above, making the screen projection extremely flexible. Therefore, the embodiments of the present application can achieve better adaptation to various application scenarios, enrich the function of screen projection, and improve the flexibility of screen projection.
  • during screen projection, some of the content in the interface may change its state. For example, the cover control may rotate continuously, and dynamic pictures in the application and their displayed content may also change over time. Therefore, a targeted update operation is performed according to the type of the displayed content, as follows:
  • when the display state of a UI control changes, the essence is that the attribute data of the UI control has changed. For example, changes in the color, size, angle, and transparency of UI controls cause differences in their display states. Therefore, as an optional embodiment of the present application, when the drawing operation of the UI control is completed by the receiving end, the sending end can send the changed attribute data to the receiving end.
  • the receiving end modifies the corresponding UI control state after receiving the attribute data.
  • conversely, when the drawing operation is completed by the sending end, the sending end can update the display state of the UI control according to the changed attribute data, and send the updated UI control to the receiving end in the form of a video stream or the like.
  • for dynamic pictures, the sending end asynchronously sends new picture data to the receiving end when drawing the dynamic picture. After receiving the picture data, the receiving end draws the corresponding picture.
  • for example, assume that the sending end and the receiving end are both Android devices. When the sender draws a picture, it generates a DrawBitmap command. The embodiment of the application captures this instruction, generates a hash code corresponding to the picture (as a unique identifier of the picture), and caches the picture together with its hash code at the sending end. The sender then sends the DrawBitmap command and the hash code to the receiver.
  • after receiving the DrawBitmap command, the receiving end looks up the picture resource in its own cache using the hash code. If the corresponding picture resource is found, it is drawn directly, so that the dynamic picture is updated. If the corresponding picture resource is not found, a request is sent to the sender asynchronously; after receiving the request, the sending end finds the corresponding picture resource according to the hash code and sends it to the receiving end. The receiving end then caches and draws the picture resource, and the dynamic picture is updated. During the period when the picture resource has not yet been received, the receiving end may keep the state of the original dynamic picture unchanged, or may draw a preset picture to ensure the visual effect of the dynamic picture (a minimal sketch of this hash-keyed picture cache is given below).
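  • The following is a minimal Kotlin sketch of the hash-keyed picture cache on the receiving end described above. The interfaces, the in-memory map, and the callback style are illustrative assumptions, not the patent's implementation.

```kotlin
// Minimal sketch: the receiver looks up a picture by its hash code, draws it if cached,
// and otherwise asks the sender for the resource asynchronously.
interface PictureSender {
    fun requestPicture(hash: String, onPicture: (ByteArray) -> Unit)
}

class ReceiverPictureCache(private val sender: PictureSender) {
    private val cache = mutableMapOf<String, ByteArray>()

    // Called when a DrawBitmap command with an attached hash code arrives.
    fun onDrawBitmap(hash: String, draw: (ByteArray) -> Unit) {
        val cached = cache[hash]
        if (cached != null) {
            draw(cached)                      // cache hit: draw immediately
        } else {
            // Cache miss: ask the sender asynchronously; the old picture (or a preset
            // placeholder) can be kept on screen until the resource arrives.
            sender.requestPicture(hash) { bytes ->
                cache[hash] = bytes
                draw(bytes)
            }
        }
    }
}
```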
  • the application interface itself may also be switched. For example, when a user is operating a music player, the interface of the music player may be switched, such as switching from a song playing interface to a song list interface.
  • in this case, the embodiment of the present application may also refresh the display interface of the receiving end. Assuming that the original real-time interface is a basic interface of the application to be projected, the embodiments of this application handle the switch as follows: if the new application interface is not a basic interface, the display interface of the receiving end is not refreshed; if the new application interface is also a basic interface, any one of the following three strategies can be selected according to actual needs: 1. Project the new basic interface so that it covers the old basic interface. 2. Project the new basic interface while keeping the old basic interface. 3. Do not refresh the display interface of the receiving end.
  • corresponding refresh schemes are designed for various refresh situations of the real-time interface of the sender.
  • the receiving end can refresh the screen content in real time according to the status of the real-time interface of the sending end. This makes the screencast content in the embodiments of the present application more real-time and more flexible.
  • the receiving end can counter-control the sending end.
  • Step a The receiving end detects the event type of the operation event, and the first coordinate information corresponding to the operation event on the receiving end screen.
  • Step b The receiving end performs coordinate conversion on the first coordinate information to obtain second coordinate information of the operation event on the sending end, and sends the event type and the second coordinate information to the sending end.
  • Step c The sending end executes the corresponding event task according to the second coordinate information and the event type.
  • the coordinate conversion relationship between the receiving end display interface and the sending end real-time interface is stored in the receiving end. When the receiving end detects an operation event, it identifies the type of the operation event; for example, when a touch screen, remote control, or keyboard operation event is detected, it is identified whether the operation is a click operation or a drag operation. After coordinate conversion, the receiving end sends the event type and the coordinate information on the sending end screen to the sending end. After receiving the event type and coordinate information, the sender locates the UI control corresponding to the operation event according to the coordinate information, then simulates the operation event according to the event type and executes the corresponding event task. For example, suppose the projected application is a music player, the coordinate information locates the next-song control, and the event type is a click; the sending end then performs the corresponding operation of playing the next song.
  • for example, when the sending end is an Android system device, the event task corresponding to the coordinate information can be simulated on the sending end through the InjectInputEvent interface provided in the Android system (a minimal sketch of the receiving-end side of this flow is given below).
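  • The following is a minimal Kotlin sketch of the receiving-end side of this counter-control flow: converting the touch point from receiving-end screen coordinates to sending-end coordinates and forwarding the event type with the converted coordinates. The types and the linear mapping are illustrative assumptions; the sender-side event injection is not shown.

```kotlin
// Minimal sketch of the receiving-end side of counter-control (illustrative).
enum class EventType { CLICK, DRAG }

data class Point(val x: Float, val y: Float)

// Mapping from the projected region on the receiving-end screen back to the
// corresponding region of the sending-end real-time interface.
data class CoordinateMapping(
    val receiverOriginX: Float, val receiverOriginY: Float,
    val receiverWidth: Float, val receiverHeight: Float,
    val senderWidth: Float, val senderHeight: Float
) {
    fun toSender(p: Point): Point {
        val nx = (p.x - receiverOriginX) / receiverWidth
        val ny = (p.y - receiverOriginY) / receiverHeight
        return Point(nx * senderWidth, ny * senderHeight)
    }
}

interface SenderChannel {
    fun sendOperationEvent(type: EventType, senderCoordinates: Point)
}

fun onReceiverTouch(type: EventType, p: Point, mapping: CoordinateMapping, channel: SenderChannel) {
    // Steps a/b: detect the event type, convert the coordinates, and send both to the
    // sending end; the sender then locates the UI control and injects the event (step c).
    channel.sendOperationEvent(type, mapping.toSender(p))
}
```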
  • the embodiment of the present application can realize the counter-control of the receiving end to the sending end.
  • the user does not need to touch the sending end during the screen casting process, and can also realize the operation on the sending end. It greatly improves the efficiency of human-computer interaction between the user and the sending end, and at the same time makes the projection function more abundant and more flexible.
  • further, one sending end can simultaneously project to multiple receiving ends, and this can be combined with the counter-control of the sending end by the receiving end described in the foregoing embodiment. When receiving end A and receiving end B display content projected from the same application, and the display interface of receiving end A contains control-type UI controls, the control-type UI controls on receiving end A can be used to control receiving end B, thereby realizing interactive operation between different receiving ends.
  • for example, assume that the sending end is a mobile phone and the receiving ends are a TV and a smart watch. The video stream and information-type UI controls are projected onto the TV for display, and the control-type UI controls are projected onto the smart watch for display. The user can then control video playback on the TV by operating the previous, next, and pause controls on the smart watch. In this process, the user does not need to operate the mobile phone or the TV, so the efficiency of human-computer interaction is greatly improved.
  • the anti-control based on the receiving end realizes the interaction between different receiving ends.
  • the user does not need to operate the sending end or the receiving end that needs to be controlled, and the target receiving end can be controlled only by operating the receiving end that contains the UI controls of the control type. This allows the user to greatly improve the efficiency of human-computer interaction in the actual screen-casting process, and improves the user experience.
  • in the embodiment of the present application, the terminal devices can also operate independently of each other. While the sender is projecting, the user can still use various services on the sender normally; as long as the service used does not conflict with the projected application, using the sender's services during projection does not affect the projection.
  • for example, when the sender is a mobile phone, the user can still operate applications related or unrelated to the projection while the screen projection function is on, for example, making phone calls and browsing the Internet normally.
  • the embodiments of the present application support counter-control of the receiving end to the sending end.
  • each receiving end can also independently operate the projected application. There is no mutual influence between the receiving ends due to anti-control. The operations between each receiving end are independent of each other.
  • for example, assume that the sending end is a mobile phone, the receiving ends are a TV and a computer, and both the TV and the computer display applications projected from the mobile phone. The user can control the projected application on the computer with the mouse, and control the application on the TV with the remote control. While the user uses services on the mobile phone normally, the projected applications on the TV and the computer can also be used normally. Therefore, the user can view different media content such as documents and pictures on multiple screens at the same time.
  • in the existing screen projection technology, each receiving end can only passively display content according to the interface state of the sending end, so the function is single and inflexible. Compared with that, the embodiment of the present application supports the simultaneous use of multiple terminal devices without mutual interference. Therefore, the screen projection function of the embodiment of the present application is richer and more flexible.
  • depending on the context, the term “if” can be construed as “when”, “once”, “in response to determining”, or “in response to detecting”. Similarly, depending on the context, the phrase “if it is determined” or “if [the described condition or event] is detected” can be interpreted as “once it is determined”, “in response to determining”, “once [the described condition or event] is detected”, or “in response to detecting [the described condition or event]”.
  • the terms “first”, “second”, “third”, and so on are only used to distinguish descriptions and cannot be understood as indicating or implying relative importance. It should also be understood that although the terms “first”, “second”, etc. are used in some embodiments of the present application to describe various elements, these elements should not be limited by these terms; these terms are only used to distinguish one element from another.
  • for example, without departing from the scope of the various described embodiments, the first table may be named the second table, and similarly, the second table may be named the first table. The first table and the second table are both tables, but they are not the same table.
  • the screen projection method provided by the embodiments of this application can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA).
  • FIG. 5 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device 5 of this embodiment includes at least one processor 50 (only one is shown in FIG. 5), a memory 51, and a computer program 52 that is stored in the memory 51 and can run on the processor 50. When the processor 50 executes the computer program 52, the steps in the above embodiments of the screen projection method are implemented, for example, steps S101 to S109 shown in FIG. 2A. Alternatively, when the processor 50 executes the computer program 52, the functions of the modules/units in the foregoing device embodiments are realized.
  • the terminal device 5 may be a computing device such as a mobile phone, a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor 50 and a memory 51.
  • FIG. 5 is only an example of the terminal device 5 and does not constitute a limitation on the terminal device 5; the terminal device may include more or fewer components than those shown in the figure, a combination of certain components, or different components. For example, the terminal device may also include input and output devices, a network access device, a bus, and the like. The so-called processor 50 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 51 may be an internal storage unit of the terminal device 5 in some embodiments, such as a hard disk or a memory of the terminal device 5.
  • in other embodiments, the memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 5.
  • the memory 51 may also include both an internal storage unit of the terminal device 5 and an external storage device.
  • the memory 51 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program.
  • the memory 51 can also be used to temporarily store data that has been sent or will be sent.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the embodiments of the present application also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be realized.
  • the embodiments of the present application also provide a computer program product. When the computer program product runs on a terminal device, the terminal device is enabled to implement the steps in the foregoing method embodiments.
  • if the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application can also be implemented by instructing relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium. When the program is executed by the processor, it can implement the steps of the foregoing method embodiments.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, U disk, mobile hard disk, magnetic disk, optical disk, computer memory, read-only memory (Read-Only Memory, ROM) , Random Access Memory (RAM), electrical carrier signal, telecommunications signal, and software distribution media, etc.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application provides a screen projection method and a terminal device, applicable to the technical field of screen projection. The method includes: in response to a screen projection instruction, acquiring a real-time interface of an application to be projected and device information of one or more receiving ends; scoring the visual effect, sound effect, and interaction complexity of the receiving ends according to the device information to obtain user experience scores; acquiring, from the real-time interface according to the user experience scores, first to-be-projected data corresponding to each receiving end; when the first to-be-projected data contains user interface controls, acquiring a control layout file for the user interface controls; and sending the first to-be-projected data and the control layout file to the corresponding receiving end, so that each receiving end generates a corresponding display interface according to the received first to-be-projected data and control layout file. The embodiments of this application can realize a screen projection mode that is richer in function and more flexible in operation.

Description

投屏方法及终端设备
本申请要求于2020年02月20日提交国家知识产权局、申请号为202010107285.4、申请名称为“投屏方法及终端设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请属于投屏技术领域,尤其涉及投屏方法及终端设备。
背景技术
随着科技的进步,用户拥有的终端设备数量日益增多。终端设备之间的投屏分享,已经成为了用户的一种日常需求。
投屏系统中的终端设备可分为发送端和接收端。为了实现终端设备间的投屏,相关技术中发送端会将自身整个屏幕镜像投送到对应的接收端,从而实现对发送端屏幕内容的分享。但屏幕镜像投屏的方式,会将发送端屏幕内的所有内容均进行投屏分享,功能单一且灵活性较低,无法满足用户的个性化需求。
发明内容
有鉴于此,本申请实施例提供了投屏方法及终端设备,可以解决现有投屏技术灵活性低的问题。
本申请实施例的第一方面提供了一种投屏方法,应用于发送端,包括:
接收投屏指令,获取待投屏应用程序的实时界面以及一个或多个接收端的设备信息。并根据设备信息从实时界面中获取各个接收端对应的第一待投屏数据。再将第一待投屏数据发送至对应的接收端,以使得各个接收端输出接收到的第一待投屏数据。其中,第一待投屏数据为视频流、音频流和/或用户界面控件。
本申请实施例可以根据接收端的设备信息,从发送端待投屏应用程序的实时界面中获取所需投屏的数据。同时对于单个接收端而言,本申请实施例可以实现单个或多个待投屏数据的灵活选取和投屏,实现了对各个接收端的自适应投屏。使得本申请实施例的投屏方式灵活的更高,可以满足用户的个性化需求。
作为第一方面的一个实施例,应用于发送端,包括:
响应于投屏指令,获取待投屏应用程序的实时界面以及一个或多个接收端的设备信息。根据设备信息对各个接收端的视觉效果、声音效果和交互复杂度进行评分,得到各个接收端的用户体验分。根据用户体验分数,从实时界面中获取各个接收端对应的第一待投屏数据,其中,第一待投屏数据包含视频流、音频流和用户界面控件中的至少一种。当第一待投屏数据中包含用户界面控件时,获取对用户界面控件的控件布局文件。将第一待投屏数据和控件布局文件发送至对应的接收端,第一待投屏数据用于接收端进行数据输出,控件布局文件用于接收端生成包含用户界面控件的显示界面。
本申请实施例可以根据接收端的视觉效果、声音效果和交互复杂度三个维度,来实现对接收端的用户体验评估,并可以根据评估出的用户体验分数来实现对待投屏数据的选取。最后还可以实现对用户界面控件的自动布局。因此本申请实施例可以实现 对用户更好的投屏体验,满足用户的个性化需求。
在第一方面的第一种可能的实现方式中,从实时界面中获取各个接收端对应的第一待投屏数据,具体包括:
提取实时界面内包含的第二待投屏数据,并根据设备信息从第二待投屏数据中筛选出各个接收端对应的第一待投屏数据。其中,第二待投屏数据包括视频流、音频流和/或用户界面控件。
在本申请实施例中,根据接收端的设备信息来进行待投屏数据的筛选,可以使得最终投屏的数据内容更适合于接收端的实际设备情况。从而使得最终的投屏内容,可以更适合接收端进行展示,可提升用户的人机交互效率,提高用户体验。
作为第一方面的一个实施例,根据所述用户体验分数,从所述实时界面中获取各个所述接收端对应的第一待投屏数据据,具体包括:
提取实时界面内包含的第二待投屏数据,其中,第二待投屏数据包含视频流、音频流和用户界面控件中的至少一种。根据用户体验分数,从第二待投屏数据中筛选出与各个接收端对应的第一待投屏数据。
在本申请实施例中,根据接收端的设备信息来进行待投屏数据的筛选,并对不同接收端投送不同的待投屏数据,可以使得最终投屏的数据内容更适合于接收端的实际设备情况。从而使得最终的投屏内容,可以更适合接收端进行展示,可提升用户的人机交互效率,提高用户体验。
在第一种可能实现方式的基础上,作为本申请第一方面的一种可能实现方式,设备信息包括:显示屏幕尺寸以及音频输出的失真度和频率响应。
根据设备信息从第二待投屏数据中筛选出各个接收端对应的第一待投屏数据的操作,具体包括:
对各个第二待投屏数据进行处理,得到对应的数据交互分数。基于用户体验分数对各个第二待投屏数据的数据交互分数进行匹配,得到各个接收端对应的第一待投屏数据。
或者根据各个接收端的显示屏幕尺寸以及音频输出的失真度和频率响应,计算对应的用户体验分数。并对各个第二待投屏数据进行处理,得到对应的数据交互分数。再基于用户体验分数对各个第二待投屏数据的数据交互分数进行匹配,得到各个接收端对应的第一待投屏数据。
在本申请实施例中,通过接收端屏幕尺寸和音频输出质量来进行待投屏数据的匹配,保障了最终在接收端的用户体验。
在第一方面的第二种可能的实现方式中,包括:
在将第一待投屏数据发送至对应的接收端之前,若第一待投屏数据中包含用户界面控件,则基于设备信息先对第一待投屏数据中的用户界面控件进行排版处理,并得到对应的控件布局文件。
对应的,在将第一待投屏数据发送至对应的接收端,以使得各个所述接收端输出接收到的所述第一待投屏数据的操作中,具体包括:
将第一待投屏数据和控件布局文件发送至对应的接收端,以使得接收端根据接收到的第一待投屏数据和控件布局文件,生成对应的显示界面。
在本申请实施例中,通过对待投屏数据中的用户界面控件进行排版,可以保障用户界面控件在接收端显示界面中的显示效果。使得用户在接收端人机交互的效率得到保障。同时,基于用户界面控件的拆分和排版重组,也可以最大化的保留应用品牌特征。便于用户快速熟悉接收端的显示界面,进而使得用户在接收端的人机交互操作更加便捷。
在第二种可能实现方式的基础上,在第一方面的第三种可能的实现方式中,包括:
在将控件布局文件发送至对应的接收端之前,获取第一待投屏数据中的各个用户界面控件对应的绘制指令和图层数据,绘制指令用于使得接收端绘制用户界面控件。
对应的,将第一待投屏数据和控件布局文件发送至对应的接收端,以使得接收端根据接收到的第一待投屏数据和控件布局文件,生成对应的显示界面的操作,包括:
将绘制指令、图层数据和控件布局文件发送至对应的接收端,以使得接收端根据绘制指令、图层数据和控件布局文件,绘制对应的用户界面控件。
在本申请实施例中,通过将绘制指令和图层数据发送至接收端,使得接收端可以实现对用户界面控件的精准绘制,保障了最终生成的用户界面控件的精准可靠。
在第一种可能实现方式至第三种可能实现方式中的任意一种实现方式的基础上,在第一方面的第四种可能的实现方式中,包括:
在获取待投屏应用程序的实时界面以及一个或多个接收端的设备信息的操作之前,还包括:获取用户输入的选取指令,并根据选取指令获取一个或多个待投屏应用程序。
在本申请实施例中,用户可以根据自己的实际需求来灵活选取投屏的应用程序,使得投屏功能更加丰富灵活。
在第一种可能实现方式的基础上,在第一方面的第五种可能的实现方式中,提取实时界面内包含的第二待投屏数据的操作,具体包括:
获取待投屏应用程序的实时界面,并识别实时界面是否为预设界面。
若实时界面为预设界面,提取实时界面内包含第二待投屏数据。
在本申请实施例中,仅当实时界面为预设的基础界面时才进行上述的待投屏数据提取操作。避免了过多界面显示对用户正常使用投屏功能造成影响。
在第一种可能实现方式的基础上,在第一方面的第六种可能的实现方式中,在根据设备信息从第二待投屏数据中筛选出各个接收端对应的第一待投屏数据的操作中,对单个接收端的筛选操作,包括:
将第二待投屏数据中的视频流和用户界面控件划分为一个或多个待投屏数据集,其中,每个待投屏数据集均不为空集,且各个待投屏数据集之间不存在交集。基于该接收端的设备信息,对各个待投屏数据集进行匹配,并将匹配成功的待投屏数据集中包含的第二待投屏数据,作为该接收端对应的第一待投屏数据。
在本申请实施例中,先对所需显示的待投屏数据进行不同方式的组合。在此基础上再进行接收端设备信息匹配。可以实现对各种不同接收端设备的显示数据自适应匹配,使得本申请实施例对接收端的兼容性更强。同时,用于每个待投屏数据集中包含的数据内容都是可以预先设定的,因此使得本申请实施例的投屏效果更为丰富,功能更为灵活。
在第二种可能实现方式或第三种可能实现方式的基础上,在第一方面的第七种可 能的实现方式中,对用户界面控件的排版操作,具体包括:
获取实时界面的尺寸信息,以及第二待投屏数据中用户界面控件在实时界面中的位置信息和尺寸信息,并根据实时界面的尺寸信息和用户界面控件在实时界面中的位置信息和尺寸信息,绘制用户界面控件在实时界面中对应的分布图。
识别分布图对应的第一排版类型。基于设备信息,确定第一排版类型对应的第二排版类型。
基于第二排版类型,获取第二待投屏数据中各个用户界面控件在显示界面中对应的相对位置信息和相对尺寸信息,并基于相对位置信息和相对尺寸信息,生成控件布局文件。
在本申请实施例中,通过预先对各类接收端选取适宜人机交互的用户界面控件布局风格对应关系。再根据实时界面中的用户界面控件布局风格,确定出接收端适宜的布局风格。最后更加接收端适宜的布局风格生成对应的控件布局文件。从而保障了最终在接收端显示界面中用户界面控件的呈现方式,可以更加适应不同接收端对人机交互的需求。使得人机交互的效率得到提高。
在第一种可能实现方式至第七种可能实现方式中的任意一种实现方式的基础上,在第一方面的第八种可能的实现方式中,包括:
接收接收端发送的第二坐标信息和事件类型,并根据第二坐标信息和事件类型,执行对应的事件任务,其中,事件类型是由接收端在检测到操作事件后,对操作事件进行类型识别得到,第二坐标信息,是由接收端在获取到操作事件在接收端屏幕中的第一坐标信息后,对第一坐标信息进行坐标转换处理后得到。
本申请实施例通过接收端对坐标信息的转换和事件类型的识别,发送端根据转换后坐标信息和事件类型执行对应的事件任务,实现了接收端对发送端的反控。用户在投屏过程中无需接触发送端,也可以实现对发送端的操作。极大地提升了用户与发送端的人机交互效率,同时使得投屏功能更为丰富且灵活性更强。
在第一种可能实现方式的基础上,在第一方面的第九种可能的实现方式中,对第一待投屏数据的匹配操作,具体包括:
将视频流和实时界面内包含的用户界面控件划分为一个或多个数据集,其中每个数据集均包含视频流或者至少一个用户界面控件,且各个数据集中不存在交集。
对各个数据集进行用户体验分数计算,得到对应的集合体验分。对接收端的设备信息进行用户体验分数计算,得到对应的设备体验分。并对集合体验分和设备体验分进行匹配,确定出接收端对应的m个数据集。其中,m为自然数。
对音频流进行用户体验分数计算,得到对应的音频体验分。对音频体验分和设备体验分进行匹配。若匹配成功,则判定音频流为待投屏应用对应的第一待投屏数据。
在本申请实施例中,先对所需显示的各个待投屏数据进行数据集划分。再根据各个数据集对用户体验的影响情况,以及各个接收端对用户体验的影响情况,来筛选出各个接收端对应的数据集。使得最终在每个接收端展示的待投屏数据,均是对用户体验较佳的数据,进而保障了最终的用户体验效果。
在第一种可能实现方式的基础上,在第一方面的第十种可能的实现方式中,第一待投屏数据还包括:待通知数据和/或体感数据。
本申请实施例可以支持更多种类用户体验设计元素的投屏,使得投屏的效果更佳,功能丰富度提升且更为灵活。
在第一种可能实现方式的基础上,在第一方面的第十一种可能的实现方式中,包括:
若待投屏应用程序的实时界面中,已投屏用户界面控件的显示状态发生变化。发送端获取该用户界面控件发生变化的属性数据,并将发生变化的属性数据发送至对应的接收端。以使得接收端根据接收到的属性数据,更新显示界面中用户界面控件的显示状态。
本申请实施例可以实现对用户界面控件显示状态的实时更新,保障了投屏的实时性。
在第一种可能实现方式的基础上,在第一方面的第十二种可能的实现方式中,包括:
若待投屏应用程序的实时界面中,已投屏动态图片的显示状态发生变化。发送端获取动态图片对应的新的图片数据,并发送给对应的接收端。以使得接收端在接收端图片数据后,对对应的动态图片进行显示状态更新。
本申请实施例可以实现对动态图片显示状态的实时更新,保障了投屏的实时性。
本申请实施例的第二方面提供了一种投屏方法,应用于发送端,包括:
接收投屏指令,获取待投屏应用程序的实时界面以及一个或多个接收端的设备信息。根据设备信息,从实时界面中获取各个接收端对应的第一待投屏数据。若第一待投屏数据中包含用户界面控件和/或视频流,再基于第一待投屏数据生成各个接收端对应的显示界面,并对显示界面进行视频编码,得到对应的实时视频流。最后将实时视频流发送至对应的接收端,以使得接收端解码播放接收到的实时视频流。
本申请实施例可以根据接收端的设备信息,从发送端待投屏应用程序的实时界面中获取所需投屏的数据。同时对于单个接收端而言,本申请实施例可以实现单个或多个待投屏数据的灵活选取和投屏,实现了对各个接收端的自适应投屏。使得本申请实施例的投屏方式灵活的更高,可以满足用户的个性化需求。另外,通过发送端将所有待投屏数据进行实时视频合成传输,降低了对接收端的软硬件需求,使得对接收端的兼容性更强。
在第二方面的第一种可能的实现方式中,从实时界面中获取各个接收端对应的第一待投屏数据,具体包括:
提取实时界面内包含的第二待投屏数据,并根据设备信息从第二待投屏数据中筛选出各个接收端对应的第一待投屏数据。其中,第二待投屏数据包括视频流、音频流和/或用户界面控件。
在本申请实施例中,根据接收端的设备信息来进行待投屏数据的筛选,可以使得最终投屏的数据内容更适合于接收端的实际设备情况。从而使得最终的投屏内容,可以更适合接收端进行展示,可提升用户的人机交互效率,提高用户体验。
在第一种可能实现方式的基础上,作为本申请第二方面的一种可能实现方式,设备信息包括:显示屏幕尺寸以及音频输出的失真度和频率响应。
根据设备信息从第二待投屏数据中筛选出各个接收端对应的第一待投屏数据的操 作,具体包括:
根据各个接收端的显示屏幕尺寸以及音频输出的失真度和频率响应,计算对应的用户体验分数。并对各个第二待投屏数据进行处理,得到对应的数据交互分数。再基于用户体验分数对各个第二待投屏数据的数据交互分数进行匹配,得到各个接收端对应的第一待投屏数据。
在本申请实施例中,通过接收端屏幕尺寸和音频输出质量来进行待投屏数据的匹配,保障了最终在接收端的用户体验。
在第二方面的第二种可能的实现方式中,基于第一待投屏数据生成各个接收端对应的显示界面,具体包括:
若所述第一待投屏数据中包含用户界面控件,基于设备信息先对第一待投屏数据中的用户界面控件进行排版处理,并得到对应的控件布局文件。
根据接收到的第一待投屏数据和控件布局文件,生成对应的显示界面。
在本申请实施例中,通过对待投屏数据中的用户界面控件进行排版,可以保障用户界面控件在接收端显示界面中的显示效果。使得用户在接收端人机交互的效率得到保障。同时,基于用户界面控件的拆分和排版重组,也可以最大化的保留应用品牌特征。便于用户快速熟悉接收端的显示界面,进而使得用户在接收端的人机交互操作更加便捷。
在第二种可能实现方式的基础上,在第二方面的第三种可能的实现方式中,基于第一待投屏数据生成各个接收端对应的显示界面,包括:
获取第一待投屏数据中的各个用户界面控件对应的绘制指令和图层数据,绘制指令用于使得接收端绘制用户界面控件。
对应的,根据接收到的第一待投屏数据和控件布局文件,生成对应的显示界面,包括:
根据绘制指令、图层数据和控件布局文件,绘制对应的用户界面控件,并基于绘制出的用户界面控件生成显示界面。
在本申请实施例中,通过获取绘制指令和图层数据,使得发送端可以实现对用户界面控件的精准绘制,保障了最终生成的用户界面控件的精准可靠。
在第一种可能实现方式至第三种可能实现方式中的任意一种实现方式的基础上,在第二方面的第四种可能的实现方式中,包括:
在获取待投屏应用程序的实时界面的操作之前,还包括:获取用户输入的选取指令,并根据选取指令获取一个或多个待投屏应用程序。
在本申请实施例中,用户可以根据自己的实际需求来灵活选取投屏的应用程序,使得投屏功能更加丰富灵活。
在第一种可能实现方式的基础上,在第二方面的第五种可能的实现方式中,提取实时界面内包含的第二待投屏数据的操作,具体包括:
获取待投屏应用程序的实时界面,并识别实时界面是否为预设界面。
若实时界面为预设界面,提取实时界面内包含第二待投屏数据。
在本申请实施例中,仅当实时界面为预设的基础界面时,才进行上述的待投屏数据提取操作。避免了过多界面显示对用户正常使用投屏功能造成影响。
在第一种可能实现方式的基础上,在第二方面的第六种可能的实现方式中,在根据设备信息从第二待投屏数据中筛选出各个接收端对应的第一待投屏数据的操作中,对单个接收端的筛选操作,包括:
将第二待投屏数据中的视频流和用户界面控件划分为一个或多个待投屏数据集,其中,每个待投屏数据集均不为空集,且各个待投屏数据集之间不存在交集。基于该接收端的设备信息,对各个待投屏数据集进行匹配,并将匹配成功的待投屏数据集中包含的第二待投屏数据,作为该接收端对应的第一待投屏数据。
在本申请实施例中,先对所需显示的待投屏数据进行不同方式的组合。在此基础上再进行接收端设备信息匹配。可以实现对各种不同接收端设备的显示数据自适应匹配,使得本申请实施例对接收端的兼容性更强。同时,用于每个待投屏数据集中包含的数据内容都是可以预先设定的,因此使得本申请实施例的投屏效果更为丰富,功能更为灵活。
在第二种可能实现方式或第三种可能实现方式的基础上,在第二方面的第七种可能的实现方式中,对用户界面控件的排版操作,具体包括:
获取实时界面的尺寸信息,以及第二待投屏数据中用户界面控件在实时界面中的位置信息和尺寸信息,并根据实时界面的尺寸信息和用户界面控件在实时界面中的位置信息和尺寸信息,绘制用户界面控件在实时界面中对应的分布图。
识别分布图对应的第一排版类型。基于设备信息,确定第一排版类型对应的第二排版类型。
基于第二排版类型,获取第二待投屏数据中各个用户界面控件在显示界面中对应的相对位置信息和相对尺寸信息,并基于相对位置信息和相对尺寸信息,生成控件布局文件。
在本申请实施例中,通过预先对各类接收端选取适宜人机交互的用户界面控件布局风格对应关系。再根据实时界面中的用户界面控件布局风格,确定出接收端适宜的布局风格。最后更加接收端适宜的布局风格生成对应的控件布局文件。从而保障了最终在接收端显示界面中用户界面控件的呈现方式,可以更加适应不同接收端对人机交互的需求。使得人机交互的效率得到提高。
在第一种可能实现方式至第七种可能实现方式中的任意一种实现方式的基础上,在第二方面的第八种可能的实现方式中,包括:
接收接收端发送的第二坐标信息和事件类型,并根据第二坐标信息和事件类型,执行对应的事件任务,其中,事件类型是由接收端在检测到操作事件后,对操作事件进行类型识别得到,第二坐标信息,是由接收端在获取到操作事件在接收端屏幕中的第一坐标信息后,对第一坐标信息进行坐标转换处理后得到。
本申请实施例通过接收端对坐标信息的转换和事件类型的识别,发送端根据转换后坐标信息和事件类型执行对应的事件任务,实现了接收端对发送端的反控。用户在投屏过程中无需接触发送端,也可以实现对发送端的操作。极大地提升了用户与发送端的人机交互效率,同时使得投屏功能更为丰富且灵活性更强。
在第一种可能实现方式的基础上,在第二方面的第九种可能的实现方式中,对第一待投屏数据的匹配操作,具体包括:
将视频流和实时界面内包含的用户界面控件划分为一个或多个数据集,其中每个数据集均包含视频流或者至少一个用户界面控件,且各个数据集中不存在交集。
对各个数据集进行用户体验分数计算,得到对应的集合体验分。对接收端的设备信息进行用户体验分数计算,得到对应的设备体验分。并对集合体验分和设备体验分进行匹配,确定出接收端对应的m个数据集。其中,m为自然数。
对音频流进行用户体验分数计算,得到对应的音频体验分。对音频体验分和设备体验分进行匹配。若匹配成功,则判定音频流为待投屏应用对应的第一待投屏数据。
在本申请实施例中,先对所需显示的各个待投屏数据进行集合划分。再根据各个数据集对用户体验的影响情况,以及各个接收端对用户体验的影响情况,来筛选出各个接收端对应的数据集。使得最终在每个接收端展示的待投屏数据,均是对用户体验较佳的数据,进而保障了最终的用户体验效果。
在第一种可能实现方式的基础上,在第二方面的第十种可能的实现方式中,第一待投屏数据还包括:待通知数据和/或体感数据。
本申请实施例可以支持更多种类用户体验设计元素的投屏,使得投屏的效果更佳,功能丰富度提升且更为灵活。
在第一种可能实现方式的基础上,在第二方面的第十一种可能的实现方式中,包括:
若待投屏应用程序的实时界面中,已投屏用户界面控件的显示状态发生变化。发送端获取该用户界面控件发生变化的属性数据,并根据发生变化的属性数据更新对应的实时视频流。
本申请实施例可以实现对用户界面控件显示状态的实时更新,保障了投屏的实时性。
在第一种可能实现方式的基础上,在第二方面的第十二种可能的实现方式中,包括:
若待投屏应用程序的实时界面中,已投屏动态图片的显示状态发生变化。发送端获取动态图片对应的新的图片数据,并根据获取到的图片数据更新对应的实时视频流。本申请实施例可以实现对动态图片显示状态的实时更新,保障了投屏的实时性。
本申请实施例的第三方面提供了一种投屏方法,应用于接收端,包括:
接收发送端发送的第一待投屏数据,第一待投屏数据,是发送端在获取到待投屏应用程序的实时界面,并提取出实时界面内包含的第二待投屏数据后,发送端根据接收端的设备信息从第二待投屏数据中筛选得到的,其中,第二待投屏数据包括视频流、音频流和/或用户界面控件。
基于设备信息对第一待投屏数据中的用户界面控件进行排版处理,得到对应的控件布局文件。
根据第一待投屏数据和得到的控件布局文件,生成对应的显示界面。
本申请实施例中,接收端在接收到发送端筛选出的待投屏数据之后,会根据自身的设备信息来对接收端的用户界面控件进行排版。再根据排版结果来生成对应的显示界面。使得本申请实施例可以保障用户界面控件在接收端显示界面中的显示效果。使得用户在接收端人机交互的效率得到保障。同时,基于用户界面控件的拆分和排版重 组,也可以最大化的保留应用品牌特征。便于用户快速熟悉接收端的显示界面,进而使得用户在接收端的人机交互操作更加便捷。
在第三方面的第一种可能的实现方式中,对用户界面控件的排版操作,具体包括:
获取实时界面的尺寸信息,以及第二待投屏数据中用户界面控件在实时界面中的位置信息和尺寸信息,并根据实时界面的尺寸信息和用户界面控件在实时界面中的位置信息和尺寸信息,绘制用户界面控件在实时界面中对应的分布图。
识别分布图对应的第一排版类型。基于设备信息,确定第一排版类型对应的第二排版类型。
基于第二排版类型,获取第二待投屏数据中各个用户界面控件在显示界面中对应的相对位置信息和相对尺寸信息,并基于相对位置信息和相对尺寸信息,生成控件布局文件。
在本申请实施例中,通过预先对各类接收端选取适宜人机交互的用户界面控件布局风格对应关系。再根据实时界面中的用户界面控件布局风格,确定出接收端适宜的布局风格。最后更加接收端适宜的布局风格生成对应的控件布局文件。从而保障了最终在接收端显示界面中用户界面控件的呈现方式,可以更加适应不同接收端对人机交互的需求。使得人机交互的效率得到提高。
本申请实施例的第四方面提供了一种投屏系统,包括:发送端和一个或多个接收端。
发送端接收投屏指令,获取待投屏应用程序的实时界面以及一个或多个接收端的设备信息,并根据设备信息从实时界面中获取各个接收端对应的第一待投屏数据,其中,第一待投屏数据为视频流、音频流和/或用户界面控件。
发送端将第一待投屏数据发送至对应的接收端。
接收端输出接收到的第一待投屏数据。
本申请实施例可以根据接收端的设备信息,从发送端待投屏应用程序的实时界面中获取所需投屏的数据。同时对于单个接收端而言,本申请实施例可以实现单个或多个待投屏数据的灵活选取和投屏,实现了对各个接收端的自适应投屏。使得本申请实施例的投屏方式灵活的更高,可以满足用户的个性化需求。
在第四方面的第一种可能的实现方式中,发送端从实时界面中获取各个接收端对应的第一待投屏数据,具体包括:
提取实时界面内包含的第二待投屏数据,并根据设备信息从第二待投屏数据中筛选出各个接收端对应的第一待投屏数据。其中,第二待投屏数据包括视频流、音频流和/或用户界面控件。
在本申请实施例中,发送端根据接收端的设备信息来进行待投屏数据的筛选,可以使得最终投屏的数据内容更适合于接收端的实际设备情况。从而使得最终的投屏内容,可以更适合接收端进行展示,可提升用户的人机交互效率,提高用户体验。
在第一种可能实现方式的基础上,作为本申请第四方面的一种可能实现方式,设备信息包括:显示屏幕尺寸以及音频输出的失真度和频率响应。
发送端根据设备信息从第二待投屏数据中筛选出各个接收端对应的第一待投屏数据的操作,具体包括:
根据各个接收端的显示屏幕尺寸以及音频输出的失真度和频率响应,计算对应的用户体验分数。并对各个第二待投屏数据进行处理,得到对应的数据交互分数。再基于用户体验分数对各个第二待投屏数据的数据交互分数进行匹配,得到各个接收端对应的第一待投屏数据。
在本申请实施例中,通过接收端屏幕尺寸和音频输出质量来进行待投屏数据的匹配,保障了最终在接收端的用户体验。
在第四方面的第二种可能的实现方式中,包括:
接收端在将第一待投屏数据发送至对应的接收端之前,基于设备信息先对第一待投屏数据中的用户界面控件进行排版处理,并得到对应的控件布局文件。
对应的,发送端在将第一待投屏数据发送至对应的接收端的操作中,具体包括:
发送端将第一待投屏数据和控件布局文件发送至对应的接收端。
对应的,接收端生成显示界面的操作,具体包括:
接收端根据接收到的第一待投屏数据和控件布局文件,生成对应的显示界面。
在本申请实施例中,通过对待投屏数据中的用户界面控件进行排版,可以保障用户界面控件在接收端显示界面中的显示效果。使得用户在接收端人机交互的效率得到保障。同时,基于用户界面控件的拆分和排版重组,也可以最大化的保留应用品牌特征。便于用户快速熟悉接收端的显示界面,进而使得用户在接收端的人机交互操作更加便捷。
在第四方面的第三种可能的实现方式中,接收端根据接收到的第一待投屏数据,生成对应的显示界面的操作,具体包括:
接收端基于自身设备信息对第一待投屏数据中的用户界面控件进行排版处理,得到对应的控件布局文件。
接收端根据第一待投屏数据和得到的控件布局文件,生成对应的显示界面。
本申请实施例中,接收端在接收到发送端筛选出的待投屏数据之后,会根据自身的设备信息来对接收端的用户界面控件进行排版。再根据排版结果来生成对应的显示界面。使得本申请实施例可以保障用户界面控件在接收端显示界面中的显示效果。使得用户在接收端人机交互的效率得到保障。同时,基于用户界面控件的拆分和排版重组,也可以最大化的保留应用品牌特征。
在第二种可能实现方式的基础上,在第四方面的第四种可能的实现方式中,包括:
发送端在将控件布局文件发送至对应的接收端之前,发送端获取第一待投屏数据中的各个用户界面控件对应的绘制指令和图层数据,绘制指令用于使得接收端绘制用户界面控件。
对应的,发送端将第一待投屏数据和控件布局文件发送至对应的接收端,以使得接收端根据接收到的第一待投屏数据和控件布局文件,生成对应的显示界面的操作,包括:
发送端将绘制指令、图层数据和控件布局文件发送至对应的接收端。
对应的,接收端生成显示界面的操作,具体包括:
接收端根据绘制指令、图层数据和控件布局文件,绘制对应的用户界面控件,并基于绘制好的用户界面控件,生成显示界面。
在本申请实施例中,通过将绘制指令和图层数据发送至接收端,使得接收端可以实现对用户界面控件的精准绘制,保障了最终生成的用户界面控件的精准可靠。
在第一种可能实现方式至第四种可能实现方式中的任意一种实现方式的基础上,在第四方面的第五种可能的实现方式中,包括:
发送端在获取待投屏应用程序的实时界面以及一个或多个接收端的设备信息的操作之前,还包括:获取用户输入的选取指令,并根据选取指令获取一个或多个待投屏应用程序。
在本申请实施例中,用户可以根据自己的实际需求来灵活选取投屏的应用程序,使得投屏功能更加丰富灵活。
在第一种可能实现方式的基础上,在第四方面的第六种可能的实现方式中,发送端提取实时界面内包含的第二待投屏数据的操作,具体包括:
发送端获取待投屏应用程序的实时界面,并识别实时界面是否为预设界面。
若实时界面为预设界面,发送端提取实时界面内包含第二待投屏数据。
在本申请实施例中,仅当实时界面为预设的基础界面时,才进行上述的待投屏数据提取操作。避免了过多界面显示对用户正常使用投屏功能造成影响。
在第一种可能实现方式的基础上,在第四方面的第七种可能的实现方式中,在发送端根据设备信息从第二待投屏数据中筛选出各个接收端对应的第一待投屏数据的操作中,对单个接收端的筛选操作,包括:
发送端将第二待投屏数据中的视频流和用户界面控件划分为一个或多个待投屏数据集,其中,每个待投屏数据集均不为空集,且各个待投屏数据集之间不存在交集。
发送端基于该接收端的设备信息,对各个待投屏数据集进行匹配,并将匹配成功的待投屏数据集中包含的第二待投屏数据,作为该接收端对应的第一待投屏数据。
在本申请实施例中,先对所需显示的待投屏数据进行不同方式的组合。在此基础上再进行接收端设备信息匹配。可以实现对各种不同接收端设备的显示数据自适应匹配,使得本申请实施例对接收端的兼容性更强。同时,用于每个待投屏数据集中包含的数据内容都是可以预先设定的,因此使得本申请实施例的投屏效果更为丰富,功能更为灵活。
在第二种可能实现方式或第三种可能实现方式的基础上,在第四方面的第八种可能的实现方式中,发送端或接收端对用户界面控件的排版操作,具体包括:
获取实时界面的尺寸信息,以及第二待投屏数据中用户界面控件在实时界面中的位置信息和尺寸信息,并根据实时界面的尺寸信息和用户界面控件在实时界面中的位置信息和尺寸信息,绘制用户界面控件在实时界面中对应的分布图。
识别分布图对应的第一排版类型。基于设备信息,确定第一排版类型对应的第二排版类型。
基于第二排版类型,获取第二待投屏数据中各个用户界面控件在显示界面中对应的相对位置信息和相对尺寸信息,并基于相对位置信息和相对尺寸信息,生成控件布局文件。
在本申请实施例中,通过预先对各类接收端选取适宜人机交互的用户界面控件布局风格对应关系。再根据实时界面中的用户界面控件布局风格,确定出接收端适宜的 布局风格。最后更加接收端适宜的布局风格生成对应的控件布局文件。从而保障了最终在接收端显示界面中用户界面控件的呈现方式,可以更加适应不同接收端对人机交互的需求。使得人机交互的效率得到提高。
在第一种可能实现方式至第七种可能实现方式中的任意一种实现方式的基础上,在第四方面的第九种可能的实现方式中,包括:
接收端检测到操作事件后识别操作事件的事件类型,并获取操作事件在接收端屏幕中的第一坐标信息。
接收端对第一坐标信息进行坐标转换处理,得到对应的第二坐标信息。
接收端将第二坐标信息和事件类型发送至发送端。
发送端接收第二坐标信息和事件类型,并根据第二坐标信息和事件类型,执行对应的事件任务。
本申请实施例通过接收端对坐标信息的转换和事件类型的识别,发送端根据转换后坐标信息和事件类型执行对应的事件任务,实现了接收端对发送端的反控。用户在投屏过程中无需接触发送端,也可以实现对发送端的操作。极大地提升了用户与发送端的人机交互效率,同时使得投屏功能更为丰富且灵活性更强。
在第一种可能实现方式的基础上,在第四方面的第十种可能的实现方式中,发送端对第一待投屏数据的匹配操作,具体包括:
发送端将视频流和实时界面内包含的用户界面控件划分为一个或多个数据集,其中每个数据集均包含视频流或者至少一个用户界面控件,且各个数据集中不存在交集。
发送端对各个数据集进行用户体验分数计算,得到对应的集合体验分。对接收端的设备信息进行用户体验分数计算,得到对应的设备体验分。并对集合体验分和设备体验分进行匹配,确定出接收端对应的m个数据集。其中,m为自然数。
发送端对音频流进行用户体验分数计算,得到对应的音频体验分。对音频体验分和设备体验分进行匹配。若匹配成功,则判定音频流为待投屏应用对应的第一待投屏数据。
在本申请实施例中,先对所需显示的各个待投屏数据进行集合划分。再根据各个数据集对用户体验的影响情况,以及各个接收端对用户体验的影响情况,来筛选出各个接收端对应的数据集。使得最终在每个接收端展示的待投屏数据,均是对用户体验较佳的数据,进而保障了最终的用户体验效果。
在第一种可能实现方式的基础上,在第四方面的第十一种可能的实现方式中,第一待投屏数据还包括:待通知数据和/或体感数据。
本申请实施例可以支持更多种类用户体验设计数据的投屏,使得投屏的效果更佳,功能丰富度提升且更为灵活。
在第一种可能实现方式的基础上,在第四方面的第十二种可能的实现方式中,包括:
若待投屏应用程序的实时界面中,已投屏用户界面控件的显示状态发生变化。发送端获取该用户界面控件发生变化的属性数据,并将发生变化的属性数据发送至对应的接收端。
接收端根据接收到的属性数据,更新显示界面中用户界面控件的显示状态。
本申请实施例可以实现对用户界面控件显示状态的实时更新,保障了投屏的实时性。
在第一种可能实现方式的基础上,在第四方面的第十三种可能的实现方式中,包括:
若待投屏应用程序的实时界面中,已投屏动态图片的显示状态发生变化。发送端获取动态图片对应的新的图片数据,并发送给对应的接收端。
接收端在接收端图片数据后,对对应的动态图片进行显示状态更新。
本申请实施例可以实现对动态图片显示状态的实时更新,保障了投屏的实时性。
第四方面是与上述第一方面和第三方面对应的系统方案,因此第四方面的有益效果可以参见上述第一方面和第三方面中的相关描述,在此不再赘述。
本申请实施例的第五方面提供了一种投屏装置,包括:
数据获取模块,用于在获取到投屏指令时,获取待投屏应用程序的实时界面以及一个或多个接收端的设备信息,并根据设备信息,从实时界面中获取各个接收端对应的第一待投屏数据,其中,第一待投屏数据为视频流、音频流和/或用户界面控件。
数据投送模块,用于将第一待投屏数据发送至对应的接收端,以使得各个接收端根据接收到的第一待投屏数据,生成对应的显示界面。
第五方面是与上述第一方面对应的装置方案,因此第五方面的有益效果可以参见上述第一方面中的相关描述,在此不再赘述。
本申请实施例的第六方面提供了一种投屏装置,包括:
数据获取模块,用于在获取到投屏指令时,获取待投屏应用程序的实时界面以及一个或多个接收端的设备信息,并根据设备信息,从实时界面中获取各个接收端对应的第一待投屏数据,其中,第一待投屏数据为视频流、音频流和/或用户界面控件。
界面生成模块,用于基于所述第一待投屏数据生成各个所述接收端对应的显示界面,并对所述显示界面进行视频编码,得到对应的实时视频流。
视频发送模块,用于将所述实时视频流发送至对应的所述接收端,以使得所述接收端解码播放接收到的所述实时视频流。
第六方面是与上述第二方面对应的装置方案,因此第六方面的有益效果可以参见上述第二方面中的相关描述,在此不再赘述。
本申请实施例的第七方面提供了一种终端设备,所述终端设备包括存储器、处理器,所述存储器上存储有可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时,使得终端设备实现如上述第一方面中任一项所述投屏方法的步骤,或者如上述第二方面中任一项所述投屏方法的步骤,或者如上述第三方面中任一项所述投屏方法的步骤。
本申请实施例的第八方面提供了一种计算机可读存储介质,包括:存储有计算机程序,其特征在于,所述计算机程序被处理器执行时,使得终端设备实现如上述第一方面中任一项所述投屏方法的步骤,或者如上述第二方面中任一项所述投屏方法的步骤,或者如上述第三方面中任一项所述投屏方法的步骤。
本申请实施例的第九方面提供了一种计算机程序产品,当计算机程序产品在终端设备上运行时,使得终端设备执行上述第一方面中任一项所述投屏方法,或者如上述 第二方面中任一项所述投屏方法的步骤,或者如上述第三方面中任一项所述投屏方法的步骤。
可以理解的是,上述第七方面至第九方面的有益效果可以参见上述第一方面、第二方面或者第三方面中的相关描述,在此不再赘述。
附图说明
图1A是本申请一实施例提供的手机结构示意图;
图1B是本申请一实施例提供的终端设备的软件结构框图;
图2A是本申请一实施例提供的投屏方法的流程示意图;
图2B是本申请一实施例提供的应用场景示意图;
图2C是本申请一实施例提供的应用场景示意图;
图2D是本申请一实施例提供的应用场景示意图;
图2E是本申请一实施例提供的应用场景示意图;
图2F是本申请一实施例提供的应用场景示意图;
图2G是本申请一实施例提供的应用场景示意图;
图2H是本申请一实施例提供的应用场景示意图;
图2I是本申请一实施例提供的应用场景示意图;
图3是本申请一实施例提供的应用场景示意图;
图4是本申请一实施例提供的应用场景示意图;
图5是本申请实施例提供的终端设备的结构示意图。
具体实施方式
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的系统、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。
为了便于理解本申请,此处先对本申请实施例进行简要说明:
随着科技的发展，用户拥有的终端设备数量不断增多，用户对终端设备之间投屏分享的需求也日益增长。为了实现终端设备之间的投屏，一般可以通过以下几种方式实现：
1、屏幕镜像投屏方式。即发送端将自身屏幕内的所有内容,镜像投送到接收端进行显示。
2、基于文件的跨终端设备投送。即发送端将图片、音频、视频和文档等文件,通过视频编码并投送至接收端,再由接收端对视频进行解码播放。
3、对于接收端是车机的情况。则是由终端设备厂商、汽车厂商和第三方应用开发者,预先在车机中设计好与终端设备系统对应的界面系统。用户在使用时将终端设备与车机连接,即可通过车机中的界面系统来实现与终端设备的交互,进而实现对终端设备的投屏。
对于实现方式1，屏幕镜像投屏会将发送端屏幕内所有的内容都投送到接收端。一方面，屏幕内的一些内容可能并不是用户希望投送的，如屏幕内包含的一些敏感信息。另一方面，镜像投屏对于显示屏幕较小的接收端而言，显示效果不友好。例如，若将手机屏幕镜像投屏至智能手表，由于智能手表的显示屏幕一般较小，会导致用户无法正常在智能手表上进行投屏内容的查看。因此实现方式1功能单一且灵活性极低，无法满足用户的个性化需求。
对于实现方式2,基于文件的跨终端设备投送仅能对文件进行视频编码解码投屏,用户无法对应用界面其他内容进行投屏。因此实现方式2显示内容的固定、功能单一且不灵活,无法满足用户的个性化需求。
对于实现方式3，实际应用中是由终端设备厂商制定好界面系统的模板规则，由第三方应用开发者按照模板规则进行用户界面（User Interface，UI）控件或视频流的填充。相对实现方式1和2而言，虽然可以选择性地投送发送端界面内的部分内容，但对于用户而言，仍无法控制实际投送的内容，无法满足自身的个性化需求。
综上所述可知，相关的投屏技术普遍存在功能单一、灵活度低且用户无法根据自身实际需求进行投屏（即无法满足用户的个性化投屏需求）的问题。为解决这些问题，本申请实施例中，用户可以根据实际需求选取所需投屏的应用程序。在选取好应用程序的基础上，首先对应用程序进行界面识别和UI控件分析。同时根据各个接收端实际的设备情况来进行UI控件匹配，确定出各个接收端适宜投送的UI控件。再将这些确定投送的UI控件和应用程序的音频、视频以及待通知数据等数据进行排版、渲染和合成。最后由接收端对合成的内容进行显示。由此可知，在本申请实施例中用户可以根据自己实际所需，自行选取投屏的内容。并可以根据实际接收端的情况，自适应地进行投屏UI控件的选取、布局和最终的投屏。使得本申请实施例的投屏功能更加丰富，灵活性高，可以很好地满足用户的个性化投屏需求。
本申请实施例中的发送端和接收端,均可以是手机、平板电脑和可穿戴设备等终端设备。具体发送端和接收端的终端设备类型此处不予限定,可根据实际的场景确定。例如,当实际场景中是由手机向智能手表和电视进行投屏时,此时手机就是发送端,智能手表和电视均为接收端。
下文以发送端是手机为例,图1A示出了手机100的结构示意图。
手机100可以包括处理器110,外部存储器接口120,内部存储器121,USB接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及SIM卡接口195等。其中传感器模块180可以包括陀螺仪传感器180A,加速度传感器180B,气压传感器180C,磁传感器180D,环境光传感器180E,接近光传感器180G、指纹传感器180H,温度传感器180J,触摸传感器180K(当然,手机100还可以包括其它传感器,比如温度传感器,压力传感器、距离传感器、骨传导传感器等,图中未示出)。
可以理解的是,本发明实施例示意的结构并不构成对手机100的具体限定。在本申请另一些实施例中,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit, GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(Neural-network Processing Unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。其中,控制器可以是手机100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
处理器110可以运行本申请实施例提供的投屏方法,以便于丰富投屏功能,提升投屏的灵活度,提升用户的体验。处理器110可以包括不同的器件,比如集成CPU和GPU时,CPU和GPU可以配合执行本申请实施例提供的投屏方法,比如投屏方法中部分算法由CPU执行,另一部分算法由GPU执行,以得到较快的处理效率。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,手机100可以包括1个或N个显示屏194,N为大于1的正整数。显示屏194可用于显示由用户输入的信息或提供给用户的信息以及各种图形用户界面(graphical user interface,GUI)。例如,显示器194可以显示照片、视频、网页、或者文件等。再例如,显示器194可以显示图形用户界面。其中图形用户界面上包括状态栏、可隐藏的导航栏、时间和天气小组件(widget)、以及应用的图标,例如浏览器图标等。状态栏中包括运营商名称(例如中国移动)、移动网络(例如4G)、时间和剩余电量。导航栏中包括后退(back)键图标、主屏幕(home)键图标和前进键图标。此外,可以理解的是,在一些实施例中,状态栏中还可以包括蓝牙图标、Wi-Fi图标、外接设备图标等。还可以理解的是,在另一些实施例中,图形用户界面中还可以包括Dock栏,Dock栏中可以包括常用的应用图标等。当处理器检测到用户的手指(或触控笔等)针对某一应用图标的触摸事件后,响应于该触摸事件,打开与该应用图标对应的应用的用户界面,并在显示器194上显示该应用的用户界面。
在本申请实施例中,显示屏194可以是一个一体的柔性显示屏,也可以采用两个刚性屏以及位于两个刚性屏之间的一个柔性屏组成的拼接显示屏。当处理器110运行本申请实施例提供的投屏方法后,处理器110可以控制外接的音频输出设备切换输出的音频信号。
摄像头193(前置摄像头或者后置摄像头,或者一个摄像头既可作为前置摄像头,也可作为后置摄像头)用于捕获静态图像或视频。通常,摄像头193可以包括感光元件比如镜头组和图像传感器,其中,镜头组包括多个透镜(凸透镜或凹透镜),用于采集待拍摄物体反射的光信号,并将采集的光信号传递给图像传感器。图像传感器根据 所述光信号生成待拍摄物体的原始图像。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行手机100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,应用程序(比如相机应用,微信应用等)的代码等。存储数据区可存储手机100使用过程中所创建的数据(比如相机应用采集的图像、视频等)等。
内部存储器121还可以存储本申请实施例提供的投屏方法对应的一个或多个计算机程序1310。该一个或多个计算机程序1310被存储在上述内部存储器121中并被配置为被该一个或多个处理器110执行，该一个或多个计算机程序1310包括指令，该计算机程序1310可以包括帐号验证模块2111、优先级比较模块2112和状态同步模块2113。其中，帐号验证模块2111，用于对局域网内的其它终端设备的系统认证帐号进行认证；优先级比较模块2112，可用于比较音频输出请求业务的优先级和音频输出设备当前输出业务的优先级；状态同步模块2113，可用于将终端设备当前接入的音频输出设备的设备状态同步至其它终端设备，或者将其它设备当前接入的音频输出设备的设备状态同步至本地。当内部存储器121中存储的投屏方法的代码被处理器110运行时，处理器110可以控制发送端进行投屏数据处理。
此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
当然,本申请实施例提供的投屏方法的代码还可以存储在外部存储器中。这种情况下,处理器110可以通过外部存储器接口120运行存储在外部存储器中的投屏方法的代码,处理器110可以控制发送端进行投屏数据处理。
下面介绍传感器模块180的功能。
陀螺仪传感器180A,可以用于确定手机100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180A确定手机100围绕三个轴(即,x,y和z轴)的角速度。即陀螺仪传感器180A可以用于检测手机100当前的运动状态,比如抖动还是静止。
当本申请实施例中的显示屏为可折叠屏时,陀螺仪传感器180A可用于检测作用于显示屏194上的折叠或者展开操作。陀螺仪传感器180A可以将检测到的折叠操作或者展开操作作为事件上报给处理器110,以确定显示屏194的折叠状态或展开状态。
加速度传感器180B可检测手机100在各个方向上（一般为三轴）加速度的大小。即加速度传感器180B可以用于检测手机100当前的运动状态，比如抖动还是静止。当本申请实施例中的显示屏为可折叠屏时，加速度传感器180B可用于检测作用于显示屏194上的折叠或者展开操作。加速度传感器180B可以将检测到的折叠操作或者展开操作作为事件上报给处理器110，以确定显示屏194的折叠状态或展开状态。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。手机通过发光二极管向外发射红外光。手机使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定手机附近有物体。当检测到不充分的反射光时,手机可以确定手机附近没有物体。当本 申请实施例中的显示屏为可折叠屏时,接近光传感器180G可以设置在可折叠的显示屏194的第一屏上,接近光传感器180G可根据红外信号的光程差来检测第一屏与第二屏的折叠角度或者展开角度的大小。
陀螺仪传感器180A(或加速度传感器180B)可以将检测到的运动状态信息(比如角速度)发送给处理器110。处理器110基于运动状态信息确定当前是手持状态还是脚架状态(比如,角速度不为0时,说明手机100处于手持状态)。
指纹传感器180H用于采集指纹。手机100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于手机100的表面,与显示屏194所处的位置不同。
示例性的,手机100的显示屏194显示主界面,主界面中包括多个应用(比如相机应用、微信应用等)的图标。用户通过触摸传感器180K点击主界面中相机应用的图标,触发处理器110启动相机应用,打开摄像头193。显示屏194显示相机应用的界面,例如取景界面。
手机100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。手机100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在手机100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。在本申请实施例中,移动通信模块150还可以用于与其它终端设备进行信息交互,即向其它终端设备发送投屏相关数据,或者移动通信模块150可用于接收投屏请求,并将接收的投屏请求封装成指定格式的消息。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中, 调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在手机100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。本申请实施例中,无线通信模块160,用于与接收端建立连接,通过接收端显示投屏内容。或者无线通信模块160可以用于接入接入点设备,向其它终端设备发送投屏请求对应的消息,或者接收来自其它终端设备发送的音频输出请求对应的消息。
另外,手机100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。手机100可以接收按键190输入,产生与手机100的用户设置以及功能控制有关的键信号输入。手机100可以利用马达191产生振动提示(比如来电振动提示)。手机100中的指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。手机100中的SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和手机100的接触和分离。
应理解,在实际应用中,手机100可以包括比图1A所示的更多或更少的部件,本申请实施例不作限定。图示手机100仅是一个范例,并且手机100可以具有比图中所示出的更多的或者更少的部件,可以组合两个或更多的部件,或者可以具有不同的部件配置。图中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
终端设备的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明终端设备的软件结构。图1B是本发明实施例的终端设备的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图1B所示,应用程序包可以包括电话、相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图1B所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状 态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供终端设备的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,终端设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频，视频格式回放和录制，以及静态图像文件等。媒体库可以支持多种音视频编码格式，例如：MPEG4，H.264，MP3，AAC，AMR，JPG，PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
为了说明本申请所述的技术方案,下面通过具体实施例来进行说明。
图2A示出了本申请实施例一提供的投屏方法的实现流程图,详述如下:
S101,发送端接收投屏指令,获取待投屏应用程序。
在接收到投屏指令时,发送端会启动投屏功能,并执行S101的操作。其中,投屏指令可以是用户或第三方设备输入的,也可以是由发送端自身主动生成的。具体需由 实际应用场景确定。
在本申请实施例中,以应用程序(以下简称应用)为对象进行投屏。因此,在开始投屏之前,发送端首先要确认出本次需要进行投屏的应用。在本申请实施例中,可以参考以下几种方式进行待投屏应用的确认:
1、发送端在启动发送端的投屏功能后,由用户根据实际需求,在发送端选取所需投屏的一个或多个应用。发送端根据用户输入的选取指令,来确定出具体所需投屏的应用,以满足用户的个性化投屏需求。例如,可以参考图2B。假设发送端为手机,且手机在启动投屏功能后,会打开“投屏应用选择”界面。用户可以在“投屏应用选择”界面自由选择一个或多个待投屏的应用并点击确认。例如可以仅选择“地图”应用进行投屏,也可以同时选择“音乐播放器”和“地图”两个应用同时进行投屏。
2、由技术人员或用户预先设置好默认投屏的一个或多个应用,发送端在启动发送端的投屏功能后,将默认投屏的应用设置为此次投屏的应用。在满足用户个性化投屏需求的基础上,用户无需每次都进行应用选取,使得操作更加便捷。例如,亦可以参考图2B,假设发送端为手机。此时用户可以在投屏功能启动前手动进入“投屏应用选择”界面,并进行默认应用选取。在此基础上,下一次启动投屏功能时,即可根据用户设置的默认应用进行待投屏应用选取。
3、在上述方式2设置默认投屏应用的基础上,用户亦可以在发送端启动投屏功能后,再次进入“投屏应用选择”界面修改当次投屏的应用。在方式2的基础上,方式3使得用户可以更为灵活地控制每一次投屏的应用情况。
4、发送端按照一定的预设规则,自行选取一个或多个待投屏应用。在方式4中,技术人员可以预先设置一些应用选取规则,如可以设置为将运行中且支持投屏的应用均选取为待投屏应用。在启动投屏功能后,根据该预设规则来进行待投屏应用的自动选取。
实际应用中,可以参考上述4种方式中的任意一种进行待投屏应用的确认。亦可以由技术人员根据实际应用场景需求,设置其他的待投屏应用确认方法。此处不对待投屏应用的确认方法做过多限定。
同时,本申请实施例不对投屏功能的开发方式进行限定,可由技术人员根据实际情况进行操作。
例如在一些可选实施例中,投屏功能可以作为一个系统功能内置于发送端的操作系统之中。而在另一些可选实施例中,考虑到实际情况中有一些终端设备由于技术和成本等因素限制,难以进行投屏功能内置。如对于一些老型号的手机,系统内置投屏功能的成本过大。因此为了满足无法投屏功能内置的终端设备的投屏需求,在这些可选实施例中,可以将投屏功能设置于一个投屏应用程序之中。通过将该投屏应用程序安装至发送端,即可使得发送端具有本申请实施例中的投屏功能。
作为本申请的一个可选实施例,在发送端已具有投屏功能的基础上,对投屏功能的启动方式可分为发送端被动启动和主动启动两种,说明如下:
被动启动:是指发送端被用户或第三方设备输入投屏指令,触发启动投屏功能。例如,用户手动启动发送端的投屏功能,或者第三方设备通过向发送端发送启动指令的方式,触发发送端的投屏功能。
主动启动:是指发送端自身主动生成投屏指令,启动投屏功能。例如,用户可以在发送端设置一个自动开启投屏功能的时间点,如每天晚上8点。在到达该时间点时,由发送端主动生成投屏指令并开启投屏功能。
实际应用中,用户可以根据实际需求来选取上述任意一种方式来启动发送端的投屏功能。
S102,发送端对待投屏应用程序的实时界面进行识别,判断是否为预设界面。
随着应用功能的不断丰富,一个应用具有的界面数量可能会非常多。例如一些音乐播放器中,不光有音乐播放界面和音乐列表界面,还同时具有歌曲信息浏览界面、歌手信息浏览界面、评论浏览界面、广告界面和网页浏览界面。实际应用中发现,过多的应用界面投屏会影响用户的正常使用,降低用户体验。如投屏一些广告界面,可能会影响用户对投屏应用内容的正常查看。
为了避免投屏时过多界面显示对用户正常使用的影响,在本申请实施例中,首先会将应用中的界面划分为基础界面(即预设界面)和非基础界面两类。其中,基础界面主要是指应用中的一些核心界面,包括一些包含基础功能或关键功能的界面。例如音乐播放器中的播放界面,地图中的导航界面。还可以包括一些对用户体验影响较大的界面,如歌曲信息界面。具体可由技术人员根据实际需求进行划分。
同时,对基础界面的划分,既可以是以单个应用为单位进行划分,此时每个应用都有着明确对应的基础界面和非基础界面。也可以是以应用类型进行划分,例如对所有的音乐播放器统一设置为基础界面包括:音乐播放界面和音乐列表界面。此时需要先确定出应用的具体类型,才能得知其对应的基础界面和非基础界面。具体可由技术人员根据实际应用的情况以及用户的需求来进行选取和设定,此处不做过多的限定。
在划分出各类应用对应的基础界面和非基础界面的基础上,在进行投屏操作时,本申请实施例会对基础界面进行UI控件分析和匹配等操作,以保障对基础界面投屏的效果。因此,在S101确定出所需投屏的应用后,本申请实施例会对待投屏应用进行实时界面识别,判断实时界面是否为该待投屏应用的基础界面。其中,具体的基础界面识别方法此处不予限定,可由技术人员根据实际需求自行设定。
作为对识别实时界面是否为基础界面的一种可选实现方式。对于发送端无法获取到包名（Package Name，包名是指应用程序安装文件的名称，是应用程序的唯一标识）的待投屏应用，此时无法直接确认出待投屏应用对应的基础界面。为了识别此类待投屏应用的实时界面是否为基础界面，本实施例中会预先训练好可通过显示界面识别应用类型的识别模型。再利用该识别模型对实时界面进行识别，以获取待投屏应用的应用类型。并根据应用类型查找出待投屏应用对应的基础界面。最后识别实时界面是否为待投屏应用对应的基础界面即可。例如，假设待投屏应用的实时界面为音乐播放界面，经识别模型处理后，可以识别出待投屏应用为音乐播放器类型的应用。再查找出音乐播放器类型的应用对应的基础界面为音乐播放界面和音乐列表界面。最后识别实时界面是否属于音乐播放界面或音乐列表界面。若属于，则说明实时界面是待投屏应用的基础界面。
其中，本申请实施例中不对具体识别模型的类型和训练方法进行过多限定，可由技术人员根据实际需求选取或设计。作为本申请的一个可选实施例，可以选取残差网络模型作为本申请实施例中的识别模型。训练的方法可以设置为：预先采集多张界面样本图像，并对每张界面样本图像标注对应的应用类型，例如音乐播放界面对应于音乐播放器类型。在此基础上基于界面样本图像进行模型训练，从而得到可以根据界面图像识别应用类型的识别模型。
作为对识别实时界面是否为基础界面的另一种可选实现方式。对于发送端可以获取到包名的待投屏应用。一方面,亦可以采用上述对无法获取到包名的待投屏应用相同的方式来进行识别。另一方面,也可以根据包名查找出待投屏应用对应的基础界面(若预先设置的是以应用类型进行划分,对各个应用类型设置对应的基础界面,则根据包名确认出待投屏应用的应用类型,再查找出对应的基础界面)。再识别实时界面是否为对应的基础界面。例如假设对于所有的音乐播放器而言,设置对应的基础界面为音乐播放界面和音乐列表界面,其余界面均为非基础界面。此时若获取到待投屏应用A的包名为“xx音乐播放器”,即可确认出待投屏应用A为音乐播放器,其对应的基础界面为音乐播放界面和音乐列表界面。再识别实时界面是否为音乐播放界面或音乐列表界面。若是其中之一,则判定实时界面为基础界面;若均不是,则判定实时界面为非基础界面。
其中，应当说明地，对于单个运行中的应用而言，其既有可能是后台运行也有可能是前台运行。对于后台运行的应用而言，其实时界面并不会真实显示在发送端屏幕中，而是会在后台虚拟显示。因此对于后台运行的待投屏应用而言，可以通过获取待投屏应用后台虚拟显示的界面，来实现对待投屏应用实时界面的获取。
S103,若实时界面为预设界面,发送端对实时界面进行UI控件提取,得到实时界面内包含的UI控件。
由于单个界面中一般会同时包含较多内容,在投屏时若直接将所有的内容投送到接收端,难以保证接收端中的显示效果。例如,当发送端屏幕大于接收端屏幕时,如手机投屏到智能手表,投屏实时界面所有内容可能会导致接收端无法正常查看显示内容。因此,为了保障最终在接收端的显示效果,提高投屏的灵活性。本申请实施例会对实时界面进行UI控件提取,并进行UI控件的筛选投屏。其中,对实时界面内UI控件及属性数据提取方法此处不予限定,可由技术人员根据实际需求选取或设定。
作为本申请中进行UI控件提取的一种可选实施方式,提取的操作如下:
获取实时界面对应的视图节点(ViewNode)数据。其中视图节点数据中记录着应用UI控件的部分属性数据。例如UI控件对应的绘制(Skia)指令、UI控件的尺寸、UI控件的坐标以及UI控件之间父类子类的树结构关系等。
利用预先训练好的控件识别模型对实时界面进行UI控件识别,并识别UI控件对应的控件类型。
从视图节点数据中剔除未识别出的UI控件的属性数据。
在本申请实施例中，通过控件识别模型对实时界面进行UI控件识别，并同步从视图节点数据中剔除未识别出的UI控件对应的属性数据。至此可以获取到实时界面内包含的UI控件以及UI控件的属性数据（包括视图节点内的属性数据和识别出的控件类型），为后续对UI控件的匹配和再排版等提供了基础数据。其中，控件识别模型的模型种类和训练方法此处不予限定，可由技术人员根据实际需求进行选取或设定。
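为便于理解上述对视图节点数据的处理过程，下面给出一段示意性代码（Java），用于说明"从视图节点树中剔除未被控件识别模型识别出的UI控件"这一步骤。其中ViewNode的字段定义为本示例为说明而假设的简化结构，并非安卓系统的真实接口，实际实现可根据系统提供的视图节点数据进行调整。

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

// 示意性的简化视图节点结构（假设），仅保留与本示例相关的属性
class ViewNode {
    String controlId;                 // 控件唯一标识
    String controlType;               // 控件识别模型输出的控件类型，未识别时为 null
    int left, top, width, height;     // 控件在实时界面中的位置与尺寸
    List<ViewNode> children = new ArrayList<>();
}

public class ViewNodeFilter {
    // 递归剔除未被控件识别模型识别出的节点及其属性数据
    public static void dropUnrecognized(ViewNode node, Set<String> recognizedIds) {
        Iterator<ViewNode> it = node.children.iterator();
        while (it.hasNext()) {
            ViewNode child = it.next();
            if (!recognizedIds.contains(child.controlId)) {
                it.remove();                              // 未识别：从视图节点树中剔除
            } else {
                dropUnrecognized(child, recognizedIds);   // 已识别：继续处理其子节点
            }
        }
    }
}
```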
作为本申请中控件识别模型选取和训练的几种可选方式,包括:
1、可选用R-CNN模型、Fast-R-CNN模型或者YOLO模型来作为控件识别模型。训练的方法为:以多张包含UI控件的应用界面图像作为样本数据,对控件识别模型进行UI控件区域识别的训练,直至满足预设收敛条件,完成模型训练。
2、可以选用残差网络模型作为控件识别模型。训练方法为:预先对应用界面图像进行UI控件的图像裁剪,并标记各个UI控件图像对应的控件类型,将得到的UI控件图像作为样本数据。基于样本数据对控件识别模型进行UI控件图像识别的训练,直至满足预设收敛条件,完成模型训练。
3、可以选用残差网络模型作为控件识别模型。考虑到在上述训练方式2中，样本数据是对应用界面图像进行图像裁剪得到的UI控件图像。受到父节点的背景干扰，样本数据中往往会存在一定的噪声数据，例如参考图2C中的(a)部分。这样对最终控件识别模型的识别效果会造成一定的影响。
为了提高控件识别模型的识别效果，在方式3中，训练方法为：预先基于发送端系统内提供的接口，利用绘制指令对UI控件进行独立的图像绘制。此时可以得到无背景噪声的UI控件图像，例如可以参考图2C中的(b)部分。同时标记各个UI控件图像对应的控件类型，并将得到的各个UI控件对应的图像作为样本数据。基于样本数据对控件识别模型进行UI控件图像识别的训练，直至满足预设收敛条件，完成模型训练。其中，为了进一步提升控件识别模型的识别效果，在进行模型训练之前，还可以包括：对绘制出的UI控件图像取最小有效像素范围。其中UI控件图像取最小有效像素范围，是指剔除UI控件外接矩形框以外的像素点，以实现对UI控件图像周围留白的消除，例如参考图2D，其中的(b)部分就是对(a)部分进行取最小有效像素范围处理后得到的UI控件图像。
在利用方式3训练得到的控件识别模型进行实时界面的UI控件识别时,可以先利用发送端系统内提供的接口绘制出实时界面内各个UI控件的图像。再将绘制出的UI控件图像作为输入数据进行模型处理识别。其中,亦可以在输入控件识别模型之前,对UI控件图像进行上述取最小有效像素范围的处理,以增强识别的准确性。
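对于上述"取最小有效像素范围"的处理，可参考如下示意性代码：在控件图像的ARGB像素中寻找非透明像素的外接矩形，以消除四周留白。其中图像以像素数组表示，属于本示例假设的简化形式。

```java
public class MinPixelCrop {
    // pixels 为按行排列的 ARGB 像素数组，width/height 为图像尺寸
    // 返回 {left, top, right, bottom}，即非透明像素的外接矩形（含边界）
    public static int[] minEffectiveRect(int[] pixels, int width, int height) {
        int left = width, top = height, right = -1, bottom = -1;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int alpha = (pixels[y * width + x] >>> 24) & 0xFF;
                if (alpha != 0) { // 有效像素
                    if (x < left) left = x;
                    if (x > right) right = x;
                    if (y < top) top = y;
                    if (y > bottom) bottom = y;
                }
            }
        }
        if (right < 0) return new int[]{0, 0, 0, 0}; // 整张图像均为透明
        return new int[]{left, top, right, bottom};
    }
}
```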
本申请实施例不对具体使用的控件识别模型类型以及训练方法进行过多限定。技术人员可以根据实际需求，从上述3种方式中任选一种进行控件识别模型的训练。例如，若想消除UI控件图像中背景噪声的影响，提高控件识别模型的识别效果，可以选取方式3进行控件识别模型的训练。而若无法实现对UI控件图像的绘制，例如无法得到发送端系统接口的配合，此时则可以选取方式1或者方式2进行处理。同时，亦可以由技术人员自行选取或设计其他的方式训练得到所需的控件识别模型。
作为本申请的一个可选实施例,考虑到实际情况中当投屏功能启动时,待投屏应用的实时界面也可能不是基础界面。为了在实时界面为非基础界面的情况下,保障投屏的正常进行,以保障用户的人机交互操作和体验。在本申请实施例中,可以选择以下几种方式中的任意一种进行处理:
1、向接收端投送一个空白背景的界面,并在界面中提示用户切换应用界面。
2、在未识别出基础界面之前,采用屏幕镜像投屏的方式操作。
在用户未切换到基础界面之前，可以选用上述任意一种方式进行显示处理。而在用户切换到基础界面、且S102检测到实时界面为基础界面之后，本申请实施例会继续执行S103的操作。
S1041,发送端获取待投屏应用的音频流和视频流。获取每个接收端的设备信息。基于设备信息,从UI控件、音频流和视频流中匹配出每个接收端对应的n个用户体验设计元素。其中,n为自然数。
在本申请实施例中，接收端的数量可以是1也可以是大于1的整数。当数量为1时，可实现发送端到接收端的一对一投屏。如将手机内容投屏到电视。当数量大于1时，可以实现发送端到接收端的一对多投屏。其中，接收端可以是包含独立系统或者非独立系统的设备及屏幕。例如在未来驾驶舱场景下，周边的终端设备可能包括抬头显示仪、液晶仪表盘、车载中控屏等，这些均可以作为本申请实施例中的接收端。又例如，当发送端为手机时，接收端可同时包括：平板电脑、个人电脑、电视、车机、智能手表、耳机、音响、虚拟现实设备和增强现实设备。或者根据实际需求，可以添加更多其他设备。
用户体验设计(User experience design,UX)元素(又称人机交互元素),是指终端设备中可以对用户体验造成影响的数据元素。例如以下几种常见的UX元素:
1、应用界面中包含的UI控件。
2、音频流,是指媒体类型的音频流,如音乐播放器播放的音乐。
3、视频流,可以是镜像视频数据,也可以是特定图层中的视频数据。
4、待通知数据,是指终端设备上正在使用中的服务数据。例如正在播放歌曲的信息(歌曲名、表演者和专辑图片等),正在通话过程中的信息(联系人和电话号码等),以及一些应用的推送数据等。例如在手机中,待通知数据一般会在手机状态栏内进行推送显示,或者以弹窗的形式进行推送显示。其中,待通知数据可以不依赖于待投屏应用而存在,如可以是发送端系统发出的警告通知。
在发送端中,一个应用可能会包含较多的UX元素。由于每个接收端的硬件配置和与用户的交互方式可能会存在一定的差异。例如智能手表的屏幕远小于电视的屏幕,但智能手表的操作便利度高于电视,即交互复杂度低。这些硬件配置和交互方式等,都会影响最终用户在接收端中对UX元素的查看和交互等操作,进而影响用户体验。因此在本申请实施例中,不会将待投屏应用的所有内容直接投送到接收端中,而是会根据接收端的设备信息来自适应地匹配出适宜的UX元素进行投送。其中,具体的自适应匹配方法此处不做过多限定,可由技术人员根据实际需求进行选取或设定。例如,可以对每个UX元素均从显示尺寸和交互复杂度两个维度出发,根据接收端的设备信息判断是否满足这两个维度的需求,并将满足需求的UX元素均设置为匹配成功的UX元素。其中,根据实际匹配的情况不同,每个接收端最终可能匹配成功的UX元素数量即可能较少,如可能为0;也可能较多,如对所有的UX元素均匹配成功。因此在本申请实施例中,n的值需由实际匹配情况确定。
在本申请实施例中,第一待投屏数据和第二待投屏数据均是指UX元素。其中,第二待投屏数据,是指提取出的所有UX元素。第一待投屏数据,是指对接收端进行匹配后筛选出的UX元素。
作为本申请的一个可选实施例,对UX元素的自适应匹配操作包括:
将视频流和实时界面内包含的UI控件划分为一个或多个元素集,其中每个元素集均包含视频流或者至少一个UI控件,且各个元素集中不存在交集。
对各个元素集进行用户体验分数计算，得到对应的集合体验分。对接收端的设备信息进行用户体验分数计算，得到对应的设备体验分。并对集合体验分和设备体验分进行匹配，确定出接收端对应的m个元素集。其中，m为自然数。
对音频流进行用户体验分数计算，得到对应的音频体验分。对音频体验分和设备体验分进行匹配。若匹配成功，则判定音频流为待投屏应用对应的用户体验设计元素。
在本申请实施例中,用户体验分数是对UX元素或者终端设备对用户体验影响的量化分数值。其中,UX元素的用户体验分数,亦可称为UX元素的数据交互分数,具体的用户体验分数计算方法此处不予限定,可由技术人员根据实际需求进行选取或设定。例如,可以从视觉效果维度、声音效果维度和交互复杂度共三个维度来进行综合评分。其中视觉效果维度可选用UX元素的尺寸和终端设备的显示屏幕尺寸(或者可用的显示屏幕尺寸)进行评估,如对不同的尺寸分别设置对应的体验分数。声音效果维度,可选用音频流质量和终端设备的音质进行评估,如对不同的质量和音质分别设置对应的体验分数。其中终端设备的音质,可通过终端设备音频输出的失真度和频率响应来进行表征。交互复杂度,亦可以从UX元素的尺寸和终端设备的显示屏幕尺寸进行评估。最后将各个维度的体验分数求和,即可得到UX元素或终端设备对应的用户体验分数。对于UI控件的用户体验分数计算时,可以根据UI控件的属性数据来进行处理计算。其中,对于包含多个UX元素的元素集而言,可将整个元素集作为一个整体来计算用户体验分数。例如上述实例中的视觉效果维度,可将元素集内各个UX元素尺寸对应的体验分数求和,得到元素集对应的体验分数。
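以下示意性代码给出一种按视觉效果、声音效果和交互复杂度三个维度累加得到设备体验分的计算方式。其中的分档阈值与各档分值均为本示例为说明而假设的数值，实际取值可由技术人员根据需求设定。

```java
public class ExperienceScore {
    // 示意：根据显示屏幕尺寸（像素面积）映射视觉效果维度分数，分档阈值为假设值
    static int visualScore(int widthPx, int heightPx) {
        long area = (long) widthPx * heightPx;
        if (area >= 2_000_000L) return 3;
        if (area >= 500_000L) return 2;
        return 1;
    }

    // 示意：根据音频输出失真度（%）与频率响应范围（Hz）映射声音效果维度分数
    static int audioScore(double distortionPercent, int freqRangeHz) {
        int score = 0;
        if (distortionPercent <= 1.0) score += 2;
        else if (distortionPercent <= 5.0) score += 1;
        if (freqRangeHz >= 15_000) score += 1;
        return score;
    }

    // 示意：屏幕越大、可交互空间越充裕，交互复杂度维度得分越高
    static int interactionScore(int widthPx, int heightPx) {
        return Math.min(widthPx, heightPx) >= 1080 ? 2 : 1;
    }

    // 终端设备的设备体验分 = 各维度体验分数之和
    public static int deviceScore(int w, int h, double distortion, int freqRange) {
        return visualScore(w, h) + audioScore(distortion, freqRange) + interactionScore(w, h);
    }
}
```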
根据是否需要在接收端进行显示,可以将UX元素分为两类。一类是视频流和UI控件这种需要显示的UX元素。另一类是音频流这种无需显示的UX元素。在接收端显示屏幕配置不同的情况下,这些需要显示的UX元素会影响用户的视觉效果和人机交互效果,进而对用户体验造成影响。因此,在本申请实施例中,会将需要显示的UX元素划分为一个或多个元素集。并以每个元素集为最小的投屏单元来进行接收端的匹配和投屏。其中,每个元素集中具体包含的UX元素数量此处不予限定,可由技术人员根据实际需求设定,只需满足不是空集即可。例如当每个元素集中包含的UX元素数均为1时,即是以单个UX元素为单位的投屏。当每个元素集中均包含一类UX元素时,则可以实现以单类UX元素为单位的投屏。例如可以将实时界面内包含的UX元素划分为元素集A和元素集B。其中元素集A内均为控制类的UX元素,如视频播放器中的播放控件、快进控件和视频切换控件等。元素集B内均为非控制类的UX元素,如视频播放控件、视频流和广告控件。而当将实时界面所有UX元素划分至一个元素集时,则可以实现以整个实时界面为单位的投屏。即每次投屏都是实时界面所有UI控件和视频流的完整投屏。在确定出接收端对应的m个元素集之后,该m个元素集中包含的所有UX元素,均为与接收端匹配的UX元素。其中,m数值由实际场景确定。
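在上述以元素集为最小投屏单元进行匹配的思路下，可参考如下示意性代码：计算每个元素集的集合体验分，并按"集合体验分大于或等于设备体验分即匹配成功"的规则筛选元素集。其中UxElement结构以及集合体验分按元素分数求和的方式均为本示例的假设，并非对本申请的限定。

```java
import java.util.ArrayList;
import java.util.List;

public class ElementSetMatcher {
    static class UxElement {
        String name;
        int score; // 该元素单独的体验分（假设已按前述维度计算得到）
        UxElement(String name, int score) { this.name = name; this.score = score; }
    }

    // 集合体验分：集合内各元素体验分之和（示意性的计算方式）
    static int setScore(List<UxElement> set) {
        return set.stream().mapToInt(e -> e.score).sum();
    }

    // 示意匹配规则：集合体验分大于或等于设备体验分的元素集均判定为匹配成功
    public static List<List<UxElement>> match(List<List<UxElement>> sets, int deviceScore) {
        List<List<UxElement>> matched = new ArrayList<>();
        for (List<UxElement> set : sets) {
            if (setScore(set) >= deviceScore) {
                matched.add(set);
            }
        }
        return matched;
    }
}
```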
对于无需显示的音频流,可以单独评估对应的用户体验分数,以判断接收端是否适合进行音频流输出。其中用户体验分数的计算方法,可以参考上述对需要显示的UX 元素的用户体验分数计算方法说明,此处不予赘述。
考虑到实际场景中,除了视频流、音频流和UI控件以外,还可能会有一些其他需要推送的UX元素。例如上述的待通知数据,以及一些体感数据,如震动数据和电刺激数据。为了实现对这些UX元素的兼容,上述S1041也可以被替换为下列S1042、S1043和S1044中任一操作:
S1042,获取发送端的待通知数据,以及待投屏应用的音频流和视频流。获取每个接收端的设备信息。基于设备信息,从UI控件、音频流、视频流和待通知数据中匹配出接收端对应的n个用户体验设计元素。其中,n为自然数。若n个用户体验设计元素中包含UI控件,则执行S105。
S1043,获取待投屏应用的音频流、视频流和体感数据。获取每个接收端的设备信息。基于设备信息,从UI控件、音频流、视频流和体感数据中匹配出接收端对应的n个用户体验设计元素。其中,n为自然数。若n个用户体验设计元素中包含UI控件,则执行S105。
S1044,获取发送端的待通知数据,以及待投屏应用的音频流、视频流和体感数据。获取每个接收端的设备信息。基于设备信息,从UI控件、音频流、视频流、待通知数据和体感数据中匹配出接收端对应的n个用户体验设计元素。其中,n为自然数。若n个用户体验设计元素中包含UI控件,则执行S105。
其中,对比S1041、S1042、S1043和S1044可知,其区别在于提取的UX元素内容存在一定差异。但S1042、S1043和S1044的原理与S1041基本相同,因此相关的原理和操作说明可参考上述对S1041的说明,此处不予赘述。
应当说明的，待通知数据属于需要显示的UX元素，而体感数据属于无需显示的UX元素。因此若采用上述用户体验分数计算的方法来进行UX元素的自适应匹配操作，对于待通知数据，需与UI控件和视频流一同划分至一个或多个元素集中，并进行用户体验分数的计算和匹配。而对于体感数据，则可以不划分至元素集中，而是像音频流一样单独计算对应的用户体验分数。由于有许多终端设备并不支持体感输出。因此对于体感数据而言在计算用户体验分数时，可将是否支持对应的体感输出作为用户体验分数的一个独立参考维度。
作为本申请中对接收端进行UX元素匹配的另一种可选实现方式,也可以由技术人员或第三方合作商预先设置好一些UX元素和接收端的匹配规则。例如,可以设置为智能手表仅投送控制类的UI元素,电视仅投送视频流和音频流。此时根据设置的匹配规则即可实现对各个接收端的UX元素自适应匹配。又例如,可以从UX元素的尺寸、视觉要求、交互复杂度和使用频率等维度,进行一一匹配,并根据各个维度的匹配结果确认最终UX元素与接收端的匹配结果。
作为本申请中对接收端进行UX元素匹配的又一种可选实现方式。可以预先训练一个神经网络模型,用以自适应匹配终端设备适宜的UX元素。此时无需再计算各个UX元素和接收端的用户体验分数,由神经网络模型自行完成匹配即可。
应当特别说明地，对于有多个待投屏应用的情况。上述对实时界面的S102、S103和S1041等操作，对于每个待投屏应用而言都是相互独立互不干扰的。同时，本申请实施例未对单个UX元素可投屏的接收端数量进行限定。因此在本申请实施例中对于单个UX元素而言，既有可能出现与所有接收端均匹配失败，从而导致最终不被投屏至接收端的情况。也有可能出现与一个或多个接收端匹配成功，最终被投屏到一个或多个不同的接收端的情况。另外根据设置的自适应匹配规则的不同，单个接收端对应投送的UX元素数量也可以存在一定的差异。例如在上述计算用户体验分数并进行匹配的实例中，若自适应匹配规则设置为仅将集合体验分大于或等于设备体验分的元素集中体验分最高的一个判定为匹配成功，此时每个接收端仅会对应一个元素集中的UX元素（假设音频流匹配失败）。而若自适应匹配规则设置为将集合体验分大于或等于设备体验分的所有元素集均判定为匹配成功，此时则每个接收端可能会对应有多个元素集。
另外,为了实现对接收端设备信息的获取,以及向接收端进行投屏,在S1041之前,还需要进行接收端与发送端的连接组网。
在本申请实施例中,不对接收端与发送端的组网方式进行过多限定,可由技术人员根据实际需求进行选取或设定。例如,在一些可选实施例中,可以利用USB等有线连接方式来进行连接组网。而在另一些可选实施例中,也可以通过蓝牙和WiFi等无线连接方式进行组网。
作为本申请的一个可选实施例,可选用无线访问节点(Access Point,AP)组网的方式,将发送端和接收端置于同一AP设备的网络中。发送端和接收端可以通过AP设备相互通信,实现连接组网。
作为本申请的另一个可选实施例,可选用对等网络(Peer-to-Peer Network,P2P)组网的方式,将发送端和接收端中某一设备作为中心设备创建无线网络。中心设备通过蓝牙广播和扫描来发现其他非中心设备,非中心设备接入该无线网络,最终形成一对多组网。
在完成接收端与发送端的连接组网后,发送端可以通过主动或被动的方式,获取接收端的设备信息,以实现对UX元素的自适应匹配。其中,发送端主动获取是指由发送端主动向接收端发起设备信息获取请求,再由接收端返回对应的设备信息。发送端被动获取是指,在完成连接组网后,由接收端主动向发送端发送自身的设备信息。
作为本申请的一个可选实施例,若S1041、S1042、S1043或S1044筛选出的n个用户体验设计元素中不包含UI控件。此时可以将n个用户体验设计元素发送给接收端,由接收端进行输出。此时无需进行S105的控件排版操作。例如,当n个用户体验设计元素中仅有音频流和视频流时,可以将音频流发送给接收端进行播放。此时无需进行UI控件排版布局等操作。同时若筛选出的用户体验设计元素中不包含需要显示的元素时,如仅包含音频流时,接收端可以不进行任何显示。如智能音响仅进行音频流输出而无需任何内容。此时本申请实施例亦不用进行显示界面的相关操作,如无需进行UI控件排版和视频流合成等操作。
作为本申请的另一个可选实施例,若S1041、S1042、S1043或S1044筛选出的n个用户体验设计元素中包含UI控件,则执行S105。
S105，发送端根据接收端的设备信息，对n个用户体验设计元素中的UI控件进行排版，得到对应的控件布局文件。判断n个用户体验设计元素中是否仅包含UI控件。若n个用户体验设计元素中仅包含UI控件，则执行S106。若n个用户体验设计元素中除了UI控件以外，还包含其他的元素，则执行S107。
考虑在S1041、S1042、S1043和S1044对UX元素的匹配过程中,可能会舍弃实时界面中的一些UI控件。同时接收端和发送端的屏幕情况以及设备所处的实际场景等均可能会存在一定的差异。例如接收端的屏幕尺寸可能会小于发送端屏幕尺寸,或者接收端屏幕中可用的空间尺寸可能会小于发送端屏幕尺寸。又例如车机实际应用场景为用户车辆之中,对交互的简易度要求相对较高。这些因素都使得UI控件在原本实时界面中的排版布局,难以很好适应接收端的界面显示。为了保障UI控件在接收端中的显示效果,方便用户操作,以保障用户人机交互的需求。在筛选出所需投屏的UI控件之后,本申请实施例会对UI控件进行重新排版布局。其中,本申请实施例不对具体的排版布局方法进行限定,可由技术人员根据实际需求进行选取或者设定。
作为本申请中进行UI控件排版布局的一种可选的具体实现方式,包括:
步骤1、获取实时界面的尺寸信息,以及UI控件在实时界面中的位置信息和尺寸信息。根据实时界面的尺寸信息和UI控件在实时界面中的位置信息和尺寸信息,绘制UI控件在实时界面中对应的分布图。
步骤2、识别分布图对应的第一排版类型。
步骤3、基于设备信息,确定第一排版类型对应的第二排版类型。
考虑到不同终端设备实际的应用场景可能会存在一定的差异。在这些不同的应用场景之下,用户对界面显示和人机交互的需求也会存在一定的差异。例如对于车机而言,很多应用场景是用户在开车过程中与车机进行人机交互,因此需要车机人机交互的复杂度较低,以保障用户的正常使用。
为了适应不同的接收端对人机交互的实际需求,方便用户查看和操作接收端显示界面。本申请实施例会预先将应用界面中UI控件的排版划分为多种类型(亦可以称为UI控件的排版风格)。其中具体的类型划分规则此处不予限定,可由技术人员根据实际情况选取或设定。例如可以包括上中下结构的排版类型和左中右结构的排版类型。其中,上中下结构中会从上至下将应用界面分为上中下三部分区域。UI控件在应用界面中整体是按照上中下三部分分布的。例如手机桌面中UI控件一般就是上中下结构的排版类型。左中右结构中会从左至右将应用界面分为左中右三部分区域,UI控件在应用界面中整体是按照左中右三部分分布的。例如一些车机的应用界面中,会按照左中右结构进行UI控件布局。而在又一些车机的应用界面中,亦可以按照上中下等结构进行UI控件的布局。
在划分好不同的排版类型的基础上，本申请实施例还会在方便用户人机交互需求的情况下，针对不同的终端设备选取并设置适宜的一种或多种排版类型。例如，车机中一般左中右结构的排版类型较为适宜。此时可以将左中右结构的排版类型设置为车机对应的排版结构。并同时设置好这些排版类型中，UI控件在应用界面中的相对位置信息和相对尺寸信息。例如在上中下结构的排版类型中，可以将一部分UI控件设置于上部分区域内，并将相对尺寸设置为上部分区域的一半大小。在此基础上，再针对每种终端设备，设置一个排版类型映射关系。即针对实时界面中可能出现的各种排版类型，设置终端设备对应的UI控件重组后的排版类型。其中，重组后的排版类型，均为上述设置的终端设备适宜的排版类型。
在设置好了排版关系的基础上,本申请实施例首先会基于实时界面的尺寸信息, 以及UI控件在实时界面中的位置信息和尺寸信息,绘制出UI控件在实时界面中的分布图。由于分布图中记录了UI控件在实时界面中的尺寸和位置,因此通过对分布图进行识别,即可确定出实时界面中UI控件对应的实际排版类型(即第一排版类型)。再根据接收端的设备信息确定出接收端的对应的排版类型映射关系,并根据该排版映射关系进行查询,即可确定出UI控件对应的目标排版类型(即第二排版类型)。
作为本申请的一个可选实施例。考虑到实时界面的尺寸可能较大，如一些手机的分辨率可能达到了2240×1080，这使得对UI控件分布图的处理运算量较大。为了减小运算量，提高终端设备的性能。本申请实施例在步骤2之前，还可以对分布图进行尺寸比例不变的缩小。例如可以将尺寸为2240×1080的分布图缩小至56×27。再基于缩小后的分布图进行步骤2的操作。
步骤4、基于第二排版类型,获取各个UI控件在接收端显示界面中对应的相对位置信息和相对尺寸信息。并基于实时界面的视图节点数据,以及各个UI控件相对位置信息和相对尺寸信息,生成对应的控件布局文件。
在确定出目标排版类型之后,即可确定出各个UI控件在排版重组后的界面中的相对位置信息和相对尺寸信息。同时,本申请实施例还会进一步地通过实时界面的视图节点数据,来获取各个UI控件更多属性数据。例如UI控件的旋转角度和是否可操作等属性数据。再将UI控件的相对位置信息、相对尺寸信息和新获取到的属性数据,均打包至对应的控件布局文件。由于控件布局文件中已经存储了UI控件排版所需的各项数据,因此后续接收端可以根据控件布局文件即可实现对UI控件的布局重组,绘制出排版重组后的UI控件。
其中,考虑到目标排版类型中仅仅只是记录了UI控件在应用界面中的整体相对位置。例如对于左中右结构的排版类型而言,仅仅只是记录了将UI控件分布在应用界面的左中右三部分区域。实际应用中,还可以进一步确定出实时界面中各个待投屏的UI控件在排版重组后的界面的更为精确的相对位置和相对尺寸。
作为本申请的一个可选实施例,为了实现对UI控件在排版重组后的界面的实际相对位置和相对尺寸的确定。在本申请实施例中,会预先设置一个排版类型之间的位置转换规则。例如可以设置为:在对上中下结构的排版类型和左中右结构的排版类型进行映射时,将上部分区域内所有的UI控件映射至左部分区域内,将中部分区域内的所有UI控件映射至中部分内,并将下部分区域内所有的UI控件映射至右部分区域内。同时还可以预先设置一个UI控件在应用界面各部分区域内的填充方式,以确定出各个UI控件准确的相对位置信息和相对尺寸信息。例如对于单个区域内,若映射后仅存在一个UI控件,可以将该UI控件填充至整个区域空间。此时该区域在应用界面内的相对位置和相对尺寸,即为该UI控件的相对位置和相对尺寸。
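上述从"上中下"结构到"左中右"结构的区域映射与填充方式，可用如下示意性代码表达。其中目标界面按左、中、右三等分，以及"区域内仅有单个控件时填满整个区域"的处理，均为本示例假设的规则，实际的区域比例与填充方式可由技术人员另行设定。

```java
import java.util.EnumMap;
import java.util.Map;

public class LayoutMapper {
    enum Region { TOP, MIDDLE, BOTTOM, LEFT, CENTER, RIGHT }

    // 预设的区域映射规则：上->左，中->中，下->右
    static final Map<Region, Region> TOP_TO_LEFT_MAPPING = new EnumMap<>(Region.class);
    static {
        TOP_TO_LEFT_MAPPING.put(Region.TOP, Region.LEFT);
        TOP_TO_LEFT_MAPPING.put(Region.MIDDLE, Region.CENTER);
        TOP_TO_LEFT_MAPPING.put(Region.BOTTOM, Region.RIGHT);
    }

    // 示意：目标界面按左/中/右三等分，返回某区域的相对位置与相对尺寸 {x, y, w, h}，取值范围 0~1
    static float[] regionBounds(Region target) {
        switch (target) {
            case LEFT:   return new float[]{0f,     0f, 1f / 3, 1f};
            case CENTER: return new float[]{1f / 3, 0f, 1f / 3, 1f};
            case RIGHT:  return new float[]{2f / 3, 0f, 1f / 3, 1f};
            default:     return new float[]{0f, 0f, 1f, 1f};
        }
    }

    // 单个控件映射：源区域 -> 目标区域；若该区域内仅有该控件，则将其填充至整个区域
    public static float[] mapControl(Region sourceRegion) {
        Region target = TOP_TO_LEFT_MAPPING.get(sourceRegion);
        return regionBounds(target);
    }
}
```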
作为本申请中进行UI控件排版布局的另一种可选的具体实现方式,也可以由技术人员或者第三方合作商,预先对待投屏应用在各种接收端中的排版布局方式进行设定。即预先设置好在UI控件在接收端中对应的控件布局文件。此时只需要直接根据预设好的控件布局文件进行UI控件重组布局,即可绘制出排版重组后的UI控件。
其中应当特别说明地,若有多个待投屏应用均有需要投屏至同一接收端的UI控件。则S105在进行UI控件排版布局时,对每个待投屏应用的UI控件布局亦可以独立进 行。在接收端进行排版重组后界面生成时,根据各个待投屏应用对应的控件布局文件在屏幕对应的空间区域内生成即可。例如可以参考图2E所示实例,假设有社交应用、音乐播放器、导航地图和电话应用共4个应用需要投屏至图2E的接收端之中。此时需要在接收端屏幕中确定出各个应用可用的空间区域。各个应用的UI控件排版布局操作可以独立进行。而对排版重组后的界面的绘制,只需在应用对应的空间区域内绘制即可。其中,本申请实施例不对多应用投屏至同一接收端情况下,各个应用在接收端屏幕的布局方式进行过多限定。可由技术人员根据实际需求进行选取或者设定。例如,在一些可选实施例中,可以将接收端屏幕空间区域均分给各个应用。而在另一些可选实施例中,亦可以预先对各个应用对接收端的优先级进行排序,并对高优先级的应用分配较大的空间区域。例如在车机中,导航地图相对其他应用而言重要程度更高,因此可以优先给导航地图分配较大的空间区域。
S106,发送端获取n个UI控件对应的绘制指令和图层数据,并将n个UI控件的绘制指令、图层数据和控件布局文件发送至接收端。由接收端执行S108的操作。
S107,发送端获取n个用户体验设计元素中UI控件对应的绘制指令和图层数据,并将这些UI控件的绘制指令、图层数据和控件布局文件,以及n个用户体验设计元素中除UI控件以外的元素数据,均发送至接收端。由接收端执行S109的操作。
由于筛选出来的n个UX元素中,既有可能仅包含UI控件,也有可能同时包含着视频流、音频流和待通知数据等UX元素。对于仅包含UI控件的情况,对这些UI控件的投屏,在接收端绘制排版布局后的UI控件即可生成对应的显示界面。而对于还包含其他UX元素的情况,则需要在绘制排版布局后的UI控件的同时,输出其他UX元素。例如播放视频流和音频流,以及显示推送信息等。
为了保障接收端对排版布局后的UI控件的绘制，本申请实施例中会将UI控件的绘制指令、图层数据和控件布局文件一同发送给接收端。在接收端中，由绘制指令根据控件布局文件确定各个UI控件在接收端应用界面中的位置和尺寸，并绘制出各个UI控件的轮廓框架。再根据图层数据绘制出各个UI控件的内容。从而实现了对排版布局后的UI控件的绘制。其中图层数据的获取方式此处不限定，可由技术人员自行选取或设定。例如在安卓系统中，可以通过SurfaceFlinger组件来获取UI控件的图层数据。同时，由于单个UI控件可能对应着多个图层，而为了保障对UI控件的准确绘制，多个图层之间需要进行对齐。因此在单个UI控件对应着多个图层时，本申请实施例在获取这些图层数据的同时，还会获取这些图层之间的坐标关系。同时还会将这些坐标关系与图层数据一同发送至接收端，以保障接收端对UI控件的准确绘制。
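接收端先按控件布局文件确定控件位置和尺寸并绘制轮廓框架、再按图层数据绘制控件内容的过程，可参考下面的示意性安卓端代码片段。其中相对位置、相对尺寸和图层位图均为假设的输入参数，仅用于说明绘制的先后顺序。

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.RectF;

public class ControlRenderer {
    // relX/relY/relW/relH 为控件布局文件中记录的相对位置与相对尺寸（0~1），
    // layerBitmap 为由图层数据还原出的控件内容（假设已解码为位图）
    public static void drawControl(Canvas canvas, float relX, float relY, float relW, float relH,
                                   Bitmap layerBitmap) {
        float w = canvas.getWidth();
        float h = canvas.getHeight();
        // 第一步：按布局文件确定控件在接收端界面中的实际位置与尺寸，绘制轮廓框架
        RectF bounds = new RectF(relX * w, relY * h, (relX + relW) * w, (relY + relH) * h);
        Paint outline = new Paint();
        outline.setStyle(Paint.Style.STROKE);
        canvas.drawRect(bounds, outline);
        // 第二步：根据图层数据绘制控件的内容
        if (layerBitmap != null) {
            canvas.drawBitmap(layerBitmap, null, bounds, null);
        }
    }
}
```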
对于UI控件以外的UX元素,无需绘制指令、图层数据和控件布局文件。因此可以将这些UX元素发送给接收端。由接收端在绘制UI控件的同时,输出这些非UI控件的UX元素。
S108,接收端根据接收到的UI控件的绘制指令、图层数据和控件布局文件,在自身屏幕中绘制出对应的UI控件,得到对应的显示界面。
基于绘制指令和控件布局文件,可以绘制出各个UI控件的轮廓框架。例如可以参考图2F,假设(a)部分是发送端的实时界面,其中包含着标题控件、封面控件、演唱控件、下载控件、评论控件、展开控件、控制类控件(包括播放控件、下一曲控件、 上一曲控件、列表控件)、播放模式控件和进度条控件等UI控件。假设经过UX元素匹配后,仅保留了标题控件、控制类控件、封面控件和进度条控件。在基于绘制指令和控件布局文件进行绘制后,可以得到图2F中的(b)部分。此时各个UI控件内没有内容。
在绘制出各个UI控件的轮廓框架后,根据图层数据绘制出各个UI控件的内容。从而实现了对排版布局后的UI控件的绘制。例如可参考图2G,假设(b)部分为上述参考图2F中的(b)部分。其中(c)部分,就是对(b)部分基于图层数据进行控件内容绘制后得到的UI控件。此时实际已经完成了对接收端投屏界面的渲染,使得接收端可以得到对应的显示界面。
其中，为了满足接收端在不同可使用屏幕空间区域尺寸的情况下，对UI控件的正常绘制显示。在本申请实施例中，可以在绘制出显示界面之后对显示界面进行缩放，将显示界面填充至接收端屏幕中的可使用空间区域。其中，接收端屏幕中的可使用空间区域，是指接收端为待投屏应用提供的投屏空间区域。该区域小于或等于接收端屏幕的实际总尺寸，具体大小需由实际应用场景确定。本申请实施例不对具体的显示界面填充方法进行过多限定，可由技术人员根据实际需求选取或设定，例如可以设置为铺满整个可使用空间区域，亦可设置为对显示界面进行比例不变的界面缩放，直至显示界面在可使用空间区域内面积最大。
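对"比例不变缩放直至显示界面在可使用空间区域内面积最大"的填充方式，可参考如下示意性计算，其中可使用空间区域的宽高为假设的输入参数。

```java
public class FitScale {
    // 返回缩放后的宽高：保持宽高比不变，使显示界面在可使用空间区域内面积最大
    public static int[] fitInside(int srcW, int srcH, int availW, int availH) {
        double scale = Math.min((double) availW / srcW, (double) availH / srcH);
        return new int[]{(int) Math.round(srcW * scale), (int) Math.round(srcH * scale)};
    }
}
```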
S109,接收端根据接收到的UI控件的绘制指令、图层数据和控件布局文件,在自身屏幕中绘制出对应的UI控件。并对需要显示的各个用户体验设计元素进行分层叠加显示。得到对应的显示界面。
在存在多种需要显示的UX元素时,本申请实施例会对各个UX元素采用分层叠加显示。参考图2H,本申请实施例中视频流、UI控件和待通知数据的叠加顺序为:UI控件叠加于视频流上层,待通知数据叠加于UI控件上层。其中UI控件和待通知数据对应的图层,背景均为透明色,以保障用户对视频流的正常查看。对于无需显示的UX元素而言,如音频流,直接由接收端进行播放即可。
S109中对UI控件的绘制与S108的原理和操作相同，具体可参考S108中的说明，此处不予赘述。
由于实际筛选出的UX元素的情况是未知的，其中，对于音频流等无需显示的UX元素而言，接收端进行应用界面绘制时无需考虑。而对于需要显示的UI控件、视频流和待通知数据而言，在已有UI控件的情况下，S109中可能存在4种筛选情况，分别为：
1、仅包含UI控件一种需要显示的UX元素。此时n个UX元素中,除了UI控件其余均为无需显示的元素,如音频流。
2、同时包含UI控件和视频流两种需要显示的UX元素。
3、同时包含UI控件和待通知数据两种需要显示的UX元素。
4、同时包含UI控件、视频流和待通知数据三种需要显示的UX元素。
对于情况1,直接按照S108的方式进行处理即可。对于情况2、3和4的情况,则需要分别绘制出各个UX元素,并按照图2H的规则进行叠加显示。
在S109中,通过分别绘制各个UX元素并进行叠加显示,完成了对接收端投屏界 面的渲染与合成,使得接收端可以得到对应的显示界面。其中应当说明的,由于待通知数据可以不依赖于待投屏应用而存在,且不会影响待投屏应用的界面布局。因此在实际操作过程中,往往会在接收端显示屏幕中单独分出一块区域用于推送待通知数据,以保障待通知数据可以及时有效的展示给用户。
以一实例进行说明,假设需要投屏的是导航地图,导航地图中的实时地图被编码为视频流,且假设此时需要投屏的可显示UX元素包括视频流和指针控件,参考图2I。此时本申请实施例会将指针控件以背景透明的方式,叠加于视频流上层显示。
以安卓系统的发送端为例进行说明,导航地图类的应用在SurfaceFlinger组件中存在两个层。一个是命名以"SurfaceView-"开头的层,对应导航地图的界面。这个层的图像通过视频方式获取。另一个层对应导航地图的UI控件指令选项,这个层的图像通过绘制指令方式获取。
在接收端,UI控件指令和视频分别通过一个SurfaceView(安卓系统里的一种视图)来显示,这两个SurfaceView放在同一个相对布局(RelativeLayout)里。UI控件指令的SurfaceView在上面,并做成背景透明,让位于下层的视频层能够显示。
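上述"UI控件层叠加于视频层之上且背景透明"的布局方式，一个示意性的安卓端写法如下。实际工程中视图通常通过布局文件声明并绑定具体的解码与绘制逻辑，此处的纯代码形式与参数仅用于说明层叠顺序与透明背景的设置思路。

```java
import android.app.Activity;
import android.graphics.PixelFormat;
import android.os.Bundle;
import android.view.SurfaceView;
import android.widget.RelativeLayout;

public class MirrorActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        RelativeLayout root = new RelativeLayout(this);

        SurfaceView videoView = new SurfaceView(this);   // 下层：显示视频流
        SurfaceView controlView = new SurfaceView(this); // 上层：绘制UI控件

        // 让控件层位于视频层之上，并将其像素格式设为透明，使下层视频能够显示
        controlView.setZOrderMediaOverlay(true);
        controlView.getHolder().setFormat(PixelFormat.TRANSPARENT);

        RelativeLayout.LayoutParams match = new RelativeLayout.LayoutParams(
                RelativeLayout.LayoutParams.MATCH_PARENT,
                RelativeLayout.LayoutParams.MATCH_PARENT);
        root.addView(videoView, match);
        root.addView(controlView, new RelativeLayout.LayoutParams(match));

        setContentView(root);
    }
}
```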
在本申请实施例中，通过将发送端中的待投屏应用进行UI控件识别拆分，并根据接收端的实际设备情况，自适应地匹配适宜的UX元素。使得待投屏的应用视图可以根据各个接收端的实际需求进行不同的UX元素拆分组合。同时本申请实施例还可以根据接收端的实际情况来对UI控件进行排版重组。相对屏幕镜像投屏而言，本申请实施例支持最小UX元素级别的独立或组合投屏，不用展示发送端实时界面的所有信息。用户可以在接收端设备上实现更为灵活的内容共享和交互，提升了用户跨设备使用终端设备服务的体验和效率。相对基于文件的跨设备投送方案而言，本申请实施例投送的是实时动态数据，且一个界面中的UX元素可以分布于多个不同接收端之中。投屏功能丰富且灵活性更高。相对定制界面系统的车机方案而言，本申请实施例可以根据UX元素的特点和接收端的设备特征，自适应地进行UX元素筛选、布局和投送。无需多方开发者参与设计，也可以支持各种不同的车机类型。极大地减少了开发者的定制成本。同时基于原UI控件的拆分和排版重组，也可以最大化地保留应用品牌特征，例如视觉元素样式。
另外,本申请实施例还可以实现1对多的终端设备投屏操作,且可以兼容不同操作系统和不同系统版本的终端设备。用户可以根据自己的实际需求,自由设置投送的终端设备对象。使得投屏的方式更为丰富灵活,能满足更多实际场景的需求。
对图2A所示实施例的几点说明:
一、对UI控件排版布局,以及生成接收端显示界面的执行主体可以发生一定的变化。
由图2A所示实施例的说明可知,S105是对UI控件的排版布局。S106和S107是对生成显示界面所需数据的传输。S108和S109主要做的是对接收端显示界面的生成。上述关于图2A所示实施例的说明中,发送端作为执行主体完成了S101至S107的操作。但实际应用中发现,对UI控件的排版布局同样也可以由接收端完成。而对接收端显示界面的生成,亦可以由发送端完成。因此,S105、S108和S109操作的执行主体理论上是可以发生一定变化的。其中更变执行主体后可能得到的几套方案说明如下:
方案1:在S105之后,发送端不用将获取到的绘制指令、控件布局文件以及图层数据等发送给接收端。而是由发送端自行执行S108或S109的操作。此时发送端可以将生成的显示界面以视频流的方式传输给接收端。对于接收端而言,直接对获取到的视频流进行解码播放,即可实现对投屏内容的展示。
与图2A所示实施例相比,方案1中接收端省去了UX元素绘制和显示界面生成的操作,接收端要做的仅仅是对视频流进行解码播放。极大地降低了对接收端的软硬件要求。使得本申请实施例中的投屏技术同样可以兼容一些计算能力较弱的终端设备。例如对于一些老型号的电视而言,也可以实现投屏显示的功能。
方案2:在S1041、S1042、S1043和S1044中筛选出n个UX元素之后,发送端就将各个UX元素数据发送给接收端。由接收端完成S105对UI控件排版布局的操作。并由接收端自身完成对UX元素绘制和显示界面生成的操作。
由于实际情况中，发送端的计算资源往往也较为有限。特别是在发送端向多台终端设备进行投屏时，对发送端而言其工作负荷较大，性能影响较为严重。因此方案2中发送端在完成对UX元素和接收端的匹配之后，会将匹配出的UX元素发送至接收端。由接收端负责后续的操作。通过方案2，可以减小发送端的工作量，节约发送端的计算资源。同时，由于视频编码解码播放的操作，会对视频的清晰度造成一定的影响。因此相对方案1而言，方案2和图2A所示实施例的方案，在接收端最终显示的效果都会有所提升。
方案3:结合上述方案1和方案2,当发送端筛选出单个接收端对应的n个UX元素之后。由发送端根据自身计算资源情况以及该接收端的设备情况,来决定UI控件的排版布局以及对接收端显示界面的生成的操作,由哪一方终端设备完成。例如,可以设置为:当发送端自身计算资源充足且接收端计算资源不足的情况下,选择方案1进行处理。当发送端自身和接收端的计算资源均较为充足的时候,选择图2A所示实施例的方法进行处理。而当发送端自身的计算资源不足但接收端计算资源较为充足的时候,则可以选择方案2进行处理。
其中,对于每个接收端而言,方案的选取过程都是可以相互独立的。
由于方案1和方案2中UI控件的排版布局的操作和对接收端显示界面的生成的操作的原理,与图2A所示实施例相同。因此具体的原理和操作细节说明,可以参考图2A所示实施例的说明,此处不赘述。
在本申请实施例中,通过对UI控件布局和生成显示界面的执行主体进行变换,可以得到更多不同的投屏方案。由于每种方案均可满足一定的应用场景需求,例如方案1降低了对接收端的软硬件要求,可以兼容更多不同类型的接收端。方案2可以节约发送端的计算资源,提高发送端的设备性能,同时保障投屏的清晰度。方案3可以兼容上述方案1和方案2,使得投屏的灵活性极强。因此使得本申请实施例可以实现对各种应用场景的较好适应,丰富了投屏的功能并提高了投屏的灵活性。
二、对接收端显示界面的刷新操作。
实际应用程序中,部分界面内容会存在显示状态变化的情况。例如一些音乐播放器中,封面控件会不停旋转。又例如应用程序中的动态图片,其显示内容也是会发生一定变化的。
为了保障投屏过程中接收端界面内容显示状态变化与发送端同步,在本申请实施例中,会根据显示内容的类型来进行针对性的更新操作。详述如下:
对于UI控件的显示状态变化，其实质是UI控件的属性数据发生了变化。例如UI控件的颜色、尺寸、角度和透明度的变化，都会使得UI控件的显示状态存在差异。因此，作为本申请的一个可选实施例，当UI控件的绘制操作是由接收端完成时，可以由发送端在UI控件属性数据发生变化时，将发生变化的属性数据发送至接收端。接收端在接收到属性数据后，修改对应的UI控件状态。同理，当UI控件的绘制操作是由发送端完成时，可以由发送端在UI控件属性数据发生变化时，根据发生变化的属性数据更新UI控件对应的显示状态，并将绘制更新后的UI控件以视频流等方式发送给接收端。
对于动态图片的显示状态变化。作为本申请的一个可选实施例，当UI控件的绘制操作是由接收端完成时，考虑到动态图片是由多张图片构成的，本申请实施例中，发送端在绘制动态图片时会异步将新的图片数据发送给接收端。接收端在接收到图片数据后，再绘制出对应的图片数据。
以发送端和接收端均为安卓系统设备为例进行说明。由于发送端在绘制图片时会产生DrawBitmap指令，本申请实施例会捕获该条指令，并在捕获下来后生成图片对应的哈希码（作为图片唯一标识符），并且缓存到发送端。然后发送端再将DrawBitmap指令和哈希码一同发送给接收端。
接收端在接收到DrawBitmap指令后，会通过哈希码在自身的缓存中查找图片资源。若查找到了对应的图片资源，则直接进行绘制，从而实现对动态图片的更新。若没有查找到对应的图片资源，则会异步向发送端发送一个请求。发送端在接收到该请求后，根据哈希码查找出对应的图片资源再发送给接收端。此时接收端再对该图片资源进行缓存，并进行绘制，即可实现对动态图片的更新。其中，接收端在尚未接收到图片资源的期间，可以保持原动态图片的状态不变，也可以绘制一种预设的图片，以保障动态图片的视觉效果。
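上述基于哈希码的动态图片更新流程，可用如下示意性代码描述：接收端按哈希码查找本地缓存，未命中时异步向发送端请求图片资源，收到后缓存并绘制。其中ImageFetcher等接口为本示例假设的抽象，实际应由具体的传输通道实现。

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

public class BitmapCacheClient {
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    // 示意接口（假设）：向发送端异步请求哈希码对应的图片资源
    interface ImageFetcher {
        CompletableFuture<byte[]> fetch(String hash);
    }

    // 接收到 DrawBitmap 指令后：命中缓存则直接绘制，否则异步请求，返回后缓存并绘制
    public void onDrawBitmap(String hash, ImageFetcher fetcher, Consumer<byte[]> draw) {
        byte[] cached = cache.get(hash);
        if (cached != null) {
            draw.accept(cached);
            return;
        }
        fetcher.fetch(hash).thenAccept(data -> {
            cache.put(hash, data); // 缓存新的图片资源
            draw.accept(data);     // 绘制，完成动态图片的更新
        });
    }
}
```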
另外,除了应用界面内可能发生一定的内容变化外,应用界面本身也可能会被切换。例如用户在操作音乐播放器时,可能会切换音乐播放器的界面,如从歌曲播放界面切换到歌曲列表界面。作为本申请的一个可选实施例,当新的应用界面成为实时界面时,本申请实施例也有可能进行接收端显示界面的刷新。假设原本的实时界面是待投屏应用的基础界面,本申请实施例详述如下:
若新的应用界面不是基础界面。此时本申请实施例不刷新接收端的显示界面。
若新的应用界面是基础界面。此时根据实际需求,可以从以下3种策略中选取任意一种进行处理:1、对新的基础界面进行投屏,覆盖旧的基础界面。2、对新的基础界面进行投屏,但同时保留对旧的基础界面的投屏。3、不刷新接收端的显示界面。
在本申请实施例中,针对发送端实时界面的多种刷新情况,分别设计了对应的刷新方案。使得实际投屏过程中,接收端可以根据发送端实时界面的状态进行投屏内容的实时刷新。使得本申请实施例中投屏内容的实时性更强,灵活性更高。
三、接收端可以反控发送端。
作为本申请的一个可选实施例，为了实现接收端与用户之间的人机交互，提升用户体验，在图2A所示实施例，以及对图2A所示实施例进行执行主体调整后得到的几个方案实现投屏的基础上，还可以获取用户对接收端的操作事件，并根据该操作事件控制接收端的应用。详述如下：
步骤a、接收端检测操作事件的事件类型,以及操作事件在接收端屏幕对应的第一坐标信息。
步骤b、接收端对第一坐标信息进行坐标转换，得到操作事件在发送端屏幕中对应的第二坐标信息，并将事件类型和第二坐标信息发送至发送端。
步骤c、发送端根据第二坐标信息和事件类型,执行对应的事件任务。
由于接收端显示界面的布局属于已知数据，同时显示界面在接收端屏幕中的位置也是已知数据。因此通过坐标计算和对比，可以得到发送端屏幕坐标与接收端屏幕坐标的转换关系。在本申请实施例中，会在接收端中存储好该转换关系。在此基础上，接收端在检测到操作事件时，会检测该操作事件的类型。如在检测到触屏、遥控器或键盘的操作事件时，检测操作的类型是点击操作还是拖动操作。同时还会检测该操作事件在接收端屏幕中的坐标信息，并利用存储好的转换关系转换为在发送端屏幕中的坐标信息。最后接收端会将事件类型和在发送端屏幕中的坐标信息，均发送至发送端。
发送端在接收到事件类型和坐标信息之后，会根据该坐标信息定位出此次操作事件对应的UI控件。再根据事件类型来模拟操作事件，执行对应的事件任务。例如假设投屏应用是音乐播放器，坐标信息定位的是下一曲控件，事件类型为点击。此时发送端就会执行对应的操作，进行下一曲播放。
其中，当发送端是安卓系统设备时，可以通过安卓系统中提供的InjectInputEvent接口，在发送端模拟坐标信息对应的事件任务。
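坐标转换与事件模拟的处理可参考下面的示意性代码：接收端按显示界面在接收端屏幕中的位置、尺寸与发送端实时界面尺寸之间的比例关系，将触点坐标换算为发送端屏幕坐标；发送端据此构造并注入一次点击事件。示例中事件注入使用安卓公开的Instrumentation接口作为示意（作用与正文提到的InjectInputEvent类似，实际注入需要相应的系统权限），并非对本申请实现方式的限定。

```java
import android.app.Instrumentation;
import android.os.SystemClock;
import android.view.MotionEvent;

public class ReverseControl {
    // 接收端：把操作事件在接收端屏幕中的坐标（第一坐标信息），换算为发送端屏幕坐标（第二坐标信息）
    // (dispX, dispY, dispW, dispH) 为投屏显示界面在接收端屏幕中的位置与尺寸，
    // (srcW, srcH) 为发送端实时界面的尺寸，均为已知数据
    public static float[] toSenderCoord(float x, float y,
                                        float dispX, float dispY, float dispW, float dispH,
                                        float srcW, float srcH) {
        float senderX = (x - dispX) / dispW * srcW;
        float senderY = (y - dispY) / dispH * srcH;
        return new float[]{senderX, senderY};
    }

    // 发送端：根据第二坐标信息与"点击"事件类型模拟一次点击（示意）
    public static void injectTap(float senderX, float senderY) {
        long now = SystemClock.uptimeMillis();
        Instrumentation inst = new Instrumentation();
        inst.sendPointerSync(MotionEvent.obtain(now, now, MotionEvent.ACTION_DOWN, senderX, senderY, 0));
        inst.sendPointerSync(MotionEvent.obtain(now, now, MotionEvent.ACTION_UP, senderX, senderY, 0));
    }
}
```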
本申请实施例通过坐标转换和事件模拟,可以实现接收端对发送端的反控。用户在投屏过程中无需接触发送端,也可以实现对发送端的操作。极大地提升了用户与发送端的人机交互效率,同时使得投屏功能更为丰富且灵活性更强。
四、基于接收端反控,可以实现对不同接收端之间的交互。
由对图2A所示实施例的相关说明可知,本申请实施例中一个发送端可以同时投屏多个接收端。结合前述实施例中接收端对发送端的反控操作可知。当接收端A和接收端B投屏了同一应用的内容时,若其中接收端A的显示界面中包含控制类的UI控件。此时可以通过接收端A中的控制类UI控件,来实现对接收端B的控制,从而实现对不同接收端之间的交互操作。
以一实例进行说明,参考图3所示实例,其中发送端为手机,接收端为电视和智能手表。假设在该实例中,视频流及信息类的UI控件被投屏到电视上进行显示。控制类的UI控件被投屏到智能手表中进行显示。在本实例中,用户可以通过操作智能手表中的上一个、下一个和暂停这些控制类控件,来实现对电视中视频的播放操作。这个过程中用户无需对手机或者电视进行操作。人机交互的效率得到了极大地提升。
同理,对于2个以上的接收端投屏了同一应用内容的情况。只需其中有接收端内包含控制类的UI控件,即可实现对其他接收端的交互操作。
本申请实施例中,基于接收端的反控实现了不同接收端之间的交互。用户无需对发送端或者需要控制的接收端进行操作,仅通过对包含控制类的UI控件的接收端进行 操作即可实现对目标接收端的控制。使得用户在实际投屏使用过程中,人机交互的效率得到了极大地提升,提高了用户体验。
五、发送端在进行投屏操作时,各个终端设备的操作可相互独立。
在本申请实施例中,一方面,发送端在投屏的同时用户仍可以正常使用发送端中的各项服务。在使用的服务与投屏的应用没有冲突的情况下,用户投屏期间使用发送端服务不影响投屏。例如当发送端为手机时,用户在开启投屏功能的同时,仍可以操作与投屏相关或无关的应用。如当投屏了音乐播放器时,用户仍可以正常进行打电话上网等操作。另一方面,由于本申请实施例中支持接收端对发送端的反控。由上述反控原理的说明可知,本申请实施例的反控是基于模拟事件任务实现的,并不需要被反控的应用一直保持在发送端的前台。因此,各个接收端也可以对投屏的应用进行独立操作。在接收端之间没有因为反控互相影响的情况下。各个接收端之间的操作相互独立。
以一实例进行说明,参考图4所示实例,其中发送端为手机,接收端为电视和电脑。假设电视和电脑中都是手机投屏过去的应用。基于反控机制,用户可以通过鼠标控制电脑中投屏的应用。而通过遥控器控制电视中的应用。对于该实例而言,用户在正常使用手机中的服务的同时,亦可以正常使用电视和电脑中投屏的应用。因此,用户可以实现多个屏幕同时观看文档和图片等不同的媒体内容。
在已有的投屏技术中,各个接收端仅能被动地根据发送端的界面状态进行内容显示,功能单一且不灵活。因此与已有的投屏技术相比,本申请实施例可以支持多个终端设备同时使用,且可以相互不影响。因此本申请实施例的投屏功能更为丰富且灵活性更强。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
如在本申请说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。还应理解的是,虽然术语“第一”、“第二”等在文本中在一些本申请实施例中用来描述各种元素,但是这些元素不应该受到这些术语的限制。这些术语只是用来将一个元素与另一元素区分开。例如,第一表格可以被命名为第二表格,并且类似地,第二表格可以被命名为第一表 格,而不背离各种所描述的实施例的范围。第一表格和第二表格都是表格,但是它们不是同一表格。
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
本申请实施例提供的投屏方法可以应用于手机、平板电脑、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等终端设备上,本申请实施例对终端设备的具体类型不作任何限制。
图5是本申请一实施例提供的终端设备的结构示意图。如图5所示,该实施例的终端设备5包括:至少一个处理器50(图5中仅示出一个)、存储器51,所述存储器51中存储有可在所述处理器50上运行的计算机程序52。所述处理器50执行所述计算机程序52时实现上述各个投屏方法实施例中的步骤,例如图2A所示的步骤101至109。或者,所述处理器50执行所述计算机程序52时实现上述各装置实施例中各模块/单元的功能。
所述终端设备5可以是手机、桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。所述终端设备可包括,但不仅限于,处理器50、存储器51。本领域技术人员可以理解,图5仅仅是终端设备5的示例,并不构成对终端设备5的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如所述终端设备还可以包括输入发送设备、网络接入设备、总线等。
所称处理器50可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
所述存储器51在一些实施例中可以是所述终端设备5的内部存储单元,例如终端设备5的硬盘或内存。所述存储器51也可以是所述终端设备5的外部存储设备,例如所述终端设备5上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,所述存储器51还可以既包括所述终端设备5的内部存储单元也包括外部存储设备。所述存储器51用于存储操作系统、应用程序、引导装载程序(BootLoader)、数据以及其他程序等,例如所述计算机程序的程序代码等。所述存储器51还可以用于暂时地存储已经发送或者将要发送的数据。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
本申请实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现可实现上述各个方法实施例中的步骤。
本申请实施例提供了一种计算机程序产品,当计算机程序产品在终端设备上运行时,使得终端设备执行时实现可实现上述各个方法实施例中的步骤。
所述集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,也可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机程序包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质可以包括:能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、电载波信号、电信信号以及软件分发介质等。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
以上所述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使对应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。
最后应说明的是:以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (23)

  1. 一种投屏方法,其特征在于,应用于发送端,包括:
    响应于投屏指令,获取待投屏应用程序的实时界面以及一个或多个接收端的设备信息;
    根据所述设备信息对各个所述接收端的视觉效果、声音效果和交互复杂度进行评分,得到各个所述接收端的用户体验分;
    根据所述用户体验分数,从所述实时界面中获取各个所述接收端对应的第一待投屏数据,其中,所述第一待投屏数据包含视频流、音频流和用户界面控件中的至少一种;
    当所述第一待投屏数据中包含所述用户界面控件时,获取对所述用户界面控件的控件布局文件;
    将所述第一待投屏数据和所述控件布局文件发送至对应的所述接收端,所述第一待投屏数据用于所述接收端进行数据输出,所述控件布局文件用于所述接收端生成包含所述用户界面控件的显示界面。
  2. 根据权利要求1所述的投屏方法,其特征在于,所述根据所述用户体验分数,从所述实时界面中获取各个所述接收端对应的第一待投屏数据,包括:
    提取所述实时界面内包含的第二待投屏数据,其中,所述第二待投屏数据包含视频流、音频流和用户界面控件中的至少一种;
    根据所述用户体验分数,从所述第二待投屏数据中筛选出与各个所述接收端对应的所述第一待投屏数据。
  3. 根据权利要求2所述的投屏方法,其特征在于,所述设备信息包括显示屏幕尺寸以及音频输出的失真度和频率响应;
    所述根据所述设备信息对所述接收端的视觉效果、声音效果和交互复杂度进行评分,得到各个所述接收端的用户体验分;根据所述用户体验分数,从所述实时界面中获取各个所述接收端对应的第一待投屏数据,包括:
    根据所述接收端的显示屏幕尺寸以及音频输出的失真度和频率响应,对各个所述接收端的视觉效果、声音效果和交互复杂度进行评分,得到各个所述接收端的用户体验分;
    对各个所述第二待投屏数据进行处理,得到对应的数据交互分数;
    基于所述用户体验分数对各个所述第二待投屏数据的所述数据交互分数进行匹配,得到各个所述接收端对应的所述第一待投屏数据。
  4. 根据权利要求1至3中任意一项所述的投屏方法,其特征在于,所述获取对所述用户界面控件的控件布局文件,包括:
    基于所述设备信息对所述第一待投屏数据中的用户界面控件进行排版处理,得到所述控件布局文件。
  5. 根据权利要求4所述的投屏方法,其特征在于,在所述将所述第一待投屏数据和所述控件布局文件发送至对应的所述接收端之前,还包括:
    获取所述第一待投屏数据中的各个用户界面控件对应的绘制指令和图层数据,所述绘制指令用于使得所述接收端绘制用户界面控件;
    所述将所述第一待投屏数据和所述控件布局文件发送至对应的所述接收端,包括:
    将所述绘制指令、所述图层数据和所述控件布局文件发送至对应的所述接收端,所述绘制指令、所述图层数据和所述控件布局文件用于所述接收端在生成的显示界面中绘制所述用户界面控件。
  6. 根据权利要求1至5中任意一项所述的投屏方法，其特征在于，在所述获取待投屏应用程序的实时界面以及一个或多个接收端的设备信息之前，还包括：
    获取用户输入的选取指令,并根据所述选取指令获取一个或多个所述待投屏应用程序。
  7. 根据权利要求2或3所述的投屏方法,其特征在于,所述提取所述实时界面内包含的第二待投屏数据,包括:
    获取待投屏应用程序的所述实时界面,并识别所述实时界面是否为预设界面;
    若所述实时界面为预设界面,则提取所述实时界面内包含所述第二待投屏数据。
  8. 根据权利要求2或3所述的投屏方法,其特征在于,在所述根据所述用户体验分数,从所述第二待投屏数据中筛选出与各个所述接收端对应的所述第一待投屏数据中,对单个所述接收端的筛选操作,包括:
    将所述第二待投屏数据中的视频流和用户界面控件划分为一个或多个待投屏数据集,其中,每个所述待投屏数据集均不为空集,且各个所述待投屏数据集之间不存在交集;
    基于该接收端的所述设备信息,对各个所述待投屏数据集进行匹配,并将匹配成功的所述待投屏数据集中包含的所述第二待投屏数据,作为该接收端对应的所述第一待投屏数据。
  9. 根据权利要求4或5所述的投屏方法,其特征在于,所述基于所述设备信息对所述第一待投屏数据中的用户界面控件进行排版处理,得到对应的控件布局文件,包括:
    获取所述实时界面的尺寸信息,以及所述第二待投屏数据中用户界面控件在所述实时界面中的位置信息和尺寸信息,并根据所述实时界面的尺寸信息和所述用户界面控件在所述实时界面中的位置信息和尺寸信息,绘制用户界面控件在所述实时界面中对应的分布图;
    识别所述分布图对应的第一排版类型;
    基于所述设备信息,确定所述第一排版类型对应的第二排版类型;
    基于所述第二排版类型,获取所述第二待投屏数据中各个用户界面控件在所述显示界面中对应的相对位置信息和相对尺寸信息,并基于所述相对位置信息和所述相对尺寸信息,生成所述控件布局文件。
  10. 根据权利要求1至9中任意一项所述的投屏方法,其特征在于,还包括:
    接收所述接收端发送的第二坐标信息和事件类型,并根据所述第二坐标信息和所述事件类型,执行对应的事件任务,其中,所述事件类型是由所述接收端在检测到操作事件后,对操作事件进行类型识别得到,所述第二坐标信息,是由所述接收端在获取到所述操作事件在所述接收端屏幕中的第一坐标信息后,对所述第一坐标信息进行坐标转换处理后得到。
  11. 一种投屏方法,其特征在于,应用于发送端,包括:
    响应于投屏指令,获取待投屏应用程序的实时界面以及一个或多个接收端的设备信息;
    根据所述设备信息对各个所述接收端的视觉效果、声音效果和交互复杂度进行评分,得到各个所述接收端的用户体验分;
    根据所述用户体验分数,从所述实时界面中获取各个所述接收端对应的第一待投屏数据,其中,所述第一待投屏数据包含视频流、音频流和用户界面控件中的至少一种;
    当所述第一待投屏数据中包含所述用户界面控件时,获取对所述用户界面控件的控件布局文件;
    基于所述第一待投屏数据和所述控件布局文件生成各个所述接收端对应的显示界面,并对所述显示界面进行视频编码,得到对应的实时视频流;所述显示界面中包含所述用户界面控件;
    将所述实时视频流发送至对应的所述接收端,所述实时视频流用于所述接收端解码和播放。
  12. 根据权利要求11所述的投屏方法,其特征在于,所述根据所述用户体验分数,从所述实时界面中获取各个所述接收端对应的第一待投屏数据,包括:
    提取所述实时界面内包含的第二待投屏数据,其中,所述第二待投屏数据包含视频流、音频流和用户界面控件中的至少一种;
    根据所述用户体验分数,从所述第二待投屏数据中筛选出与各个所述接收端对应的所述第一待投屏数据。
  13. 根据权利要求12所述的投屏方法,其特征在于,所述设备信息包括显示屏幕尺寸以及音频输出的失真度和频率响应;
    所述根据所述设备信息对所述接收端的视觉效果、声音效果和交互复杂度进行评分,得到各个所述接收端的用户体验分;根据所述用户体验分数,从所述实时界面中获取各个所述接收端对应的第一待投屏数据,包括:
    根据所述接收端的显示屏幕尺寸以及音频输出的失真度和频率响应,对各个所述接收端的视觉效果、声音效果和交互复杂度进行评分,得到各个所述接收端的用户体验分;
    对各个所述第二待投屏数据进行处理,得到对应的数据交互分数;
    基于所述用户体验分数对各个所述第二待投屏数据的所述数据交互分数进行匹配,得到各个所述接收端对应的所述第一待投屏数据。
  14. 根据权利要求11至13中任意一项所述的投屏方法，其特征在于，所述获取对所述用户界面控件的控件布局文件，包括：
    基于所述设备信息对所述第一待投屏数据中的用户界面控件进行排版处理,得到所述控件布局文件。
  15. 根据权利要求14所述的投屏方法,其特征在于,在所述基于所述第一待投屏数据和所述控件布局文件生成各个所述接收端对应的显示界面之前,还包括:
    获取所述第一待投屏数据中的各个用户界面控件对应的绘制指令和图层数据,所 述绘制指令用于使得所述接收端绘制用户界面控件;
    所述基于所述第一待投屏数据和所述控件布局文件生成各个所述接收端对应的显示界面,包括:
    根据所述绘制指令、所述图层数据和所述控件布局文件,绘制所述用户界面控件,并基于绘制出的所述用户界面控件生成所述显示界面。
  16. 根据权利要求11至15中任意一项所述的投屏方法,其特征在于,在所述获取待投屏应用程序的实时界面之前,还包括:
    获取用户输入的选取指令,并根据所述选取指令获取一个或多个所述待投屏应用程序。
  17. 根据权利要求12或13所述的投屏方法,其特征在于,所述提取所述实时界面内包含的第二待投屏数据,包括:
    获取待投屏应用程序的所述实时界面,并识别所述实时界面是否为预设界面;
    若所述实时界面为预设界面,提取所述实时界面内包含所述第二待投屏数据。
  18. 根据权利要求12或13任意一项所述的投屏方法,其特征在于,在所述根据所述用户体验分数,从所述第二待投屏数据中筛选出与各个所述接收端对应的所述第一待投屏数据中,对单个所述接收端的筛选操作,包括:
    将所述第二待投屏数据中的视频流和用户界面控件划分为一个或多个待投屏数据集,其中,每个所述待投屏数据集均不为空集,且各个所述待投屏数据集之间不存在交集;
    基于该接收端的所述设备信息,对各个所述待投屏数据集进行匹配,并将匹配成功的所述待投屏数据集中包含的所述第二待投屏数据,作为该接收端对应的所述第一待投屏数据。
  19. 根据权利要求14或15所述的投屏方法,其特征在于,所述基于所述设备信息对所述第一待投屏数据中的用户界面控件进行排版处理,得到对应的控件布局文件,包括:
    获取所述实时界面的尺寸信息,以及所述第二待投屏数据中用户界面控件在所述实时界面中的位置信息和尺寸信息,并根据所述实时界面的尺寸信息和所述用户界面控件在所述实时界面中的位置信息和尺寸信息,绘制用户界面控件在所述实时界面中对应的分布图;
    识别所述分布图对应的第一排版类型;
    基于所述设备信息,确定所述第一排版类型对应的第二排版类型;
    基于所述第二排版类型,获取所述第二待投屏数据中各个用户界面控件在所述显示界面中对应的相对位置信息和相对尺寸信息,并基于所述相对位置信息和所述相对尺寸信息,生成所述控件布局文件。
  20. 根据权利要求11至19中任意一项所述的投屏方法,其特征在于,还包括:
    接收所述接收端发送的第二坐标信息和事件类型,并根据所述第二坐标信息和所述事件类型,执行对应的事件任务,其中,所述事件类型是由所述接收端在检测到操作事件后,对操作事件进行类型识别得到,所述第二坐标信息,是由所述接收端在获取到所述操作事件在所述接收端屏幕中的第一坐标信息后,对所述第一坐标信息进行 坐标转换处理后得到。
  21. 一种投屏方法,其特征在于,应用于接收端,包括:
    接收发送端发送的第一待投屏数据,所述第一待投屏数据,是所述发送端在获取到待投屏应用程序的实时界面,并提取出所述实时界面内包含的第二待投屏数据后,所述发送端根据所述接收端的设备信息从所述第二待投屏数据中筛选得到的,其中,所述第二待投屏数据包含视频流、音频流和用户界面控件中的至少一种;
    基于所述设备信息对所述第一待投屏数据中的用户界面控件进行排版处理,得到对应的控件布局文件;
    根据所述第一待投屏数据和得到的所述控件布局文件,生成对应的显示界面。
  22. 一种终端设备,其特征在于,所述终端设备包括存储器、处理器,所述存储器上存储有可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现如权利要求1至10任一项所述方法,或者如权利要求11至20任一项所述方法,或者如权利要求21所述方法。
  23. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至10任一项所述方法,或者如权利要求11至20任一项所述方法,或者如权利要求21所述方法。
PCT/CN2021/076126 2020-02-20 2021-02-09 投屏方法及终端设备 WO2021164631A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/801,005 US11748054B2 (en) 2020-02-20 2021-02-09 Screen projection method and terminal device
EP21757584.4A EP4095671A4 (en) 2020-02-20 2021-02-09 SCREENCASTING PROCEDURE AND TERMINAL

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010107285.4A CN111324327B (zh) 2020-02-20 2020-02-20 投屏方法及终端设备
CN202010107285.4 2020-02-20

Publications (1)

Publication Number Publication Date
WO2021164631A1 true WO2021164631A1 (zh) 2021-08-26

Family

ID=71168804

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/076126 WO2021164631A1 (zh) 2020-02-20 2021-02-09 投屏方法及终端设备

Country Status (4)

Country Link
US (1) US11748054B2 (zh)
EP (1) EP4095671A4 (zh)
CN (1) CN111324327B (zh)
WO (1) WO2021164631A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114071230A (zh) * 2021-09-26 2022-02-18 深圳市酷开网络科技股份有限公司 多端投屏方法、计算机设备及计算机可读存储介质
WO2023030519A1 (zh) * 2021-09-06 2023-03-09 维沃移动通信有限公司 投屏处理方法及相关设备
WO2023045712A1 (zh) * 2021-09-26 2023-03-30 荣耀终端有限公司 投屏异常处理方法及电子设备
CN116193176A (zh) * 2023-02-13 2023-05-30 阿波罗智联(北京)科技有限公司 投屏方法、装置、设备以及存储介质
WO2023103948A1 (zh) * 2021-12-08 2023-06-15 华为技术有限公司 一种显示方法及电子设备
CN116933097A (zh) * 2023-06-27 2023-10-24 广州汽车集团股份有限公司 车辆的变型数据校验方法、装置、设备及存储介质

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111324327B (zh) * 2020-02-20 2022-03-25 华为技术有限公司 投屏方法及终端设备
CN114285938B (zh) * 2020-09-27 2022-12-06 华为技术有限公司 一种设备推荐方法、装置及计算机可读存储介质
CN114201128A (zh) 2020-09-02 2022-03-18 华为技术有限公司 一种显示方法及设备
CN113938743B (zh) * 2020-07-08 2023-06-27 华为技术有限公司 一种电子设备间的协同控制方法及系统
CN111857925B (zh) 2020-07-21 2022-05-31 联想(北京)有限公司 一种投屏处理方法及装置
CN112003928B (zh) * 2020-08-21 2021-06-25 深圳市康冠智能科技有限公司 多功能屏幕同步控制方法、装置及设备
CN114168236A (zh) * 2020-09-10 2022-03-11 华为技术有限公司 一种应用接入方法及相关装置
EP4195042A4 (en) * 2020-09-10 2024-01-17 Huawei Tech Co Ltd DISPLAY METHOD AND ELECTRONIC DEVICE
CN114286165B (zh) * 2020-12-21 2023-04-25 海信视像科技股份有限公司 一种显示设备、移动终端、投屏数据传输方法及系统
CN112367422B (zh) * 2020-10-30 2022-07-01 北京数秦科技有限公司 移动终端设备与显示系统的互动方法、装置及存储介质
CN112286477B (zh) * 2020-11-16 2023-12-08 Oppo广东移动通信有限公司 投屏显示方法及相关产品
CN112565839B (zh) * 2020-11-23 2022-11-29 青岛海信传媒网络技术有限公司 投屏图像的显示方法及显示设备
CN114584828B (zh) * 2020-11-30 2024-05-17 上海新微技术研发中心有限公司 安卓投屏方法、计算机可读存储介质和设备
CN114579217A (zh) * 2020-11-30 2022-06-03 上海新微技术研发中心有限公司 一种内容可定义的投屏设备、方法及计算机可读存储介质
CN114584816A (zh) * 2020-11-30 2022-06-03 上海新微技术研发中心有限公司 安卓投屏清晰度设置方法、计算机可读存储介质和设备
CN114691006B (zh) * 2020-12-31 2024-01-23 博泰车联网科技(上海)股份有限公司 一种基于投屏的信息处理方法及相关装置
CN112861638A (zh) * 2021-01-14 2021-05-28 华为技术有限公司 一种投屏方法及装置
CN113242463B (zh) * 2021-03-26 2023-03-03 北京汗粮科技有限公司 一种通过扩展参数增强投屏交互能力的方法
CN115145515A (zh) * 2021-03-31 2022-10-04 华为技术有限公司 一种投屏方法及相关装置
CN115291780A (zh) * 2021-04-17 2022-11-04 华为技术有限公司 一种辅助输入方法、电子设备及系统
CN113301021B (zh) * 2021-04-23 2022-04-22 深圳乐播科技有限公司 一对多投屏方法、系统、设备及存储介质
CN115314584A (zh) * 2021-05-07 2022-11-08 华为技术有限公司 一种音频播放方法、装置和设备
CN113766303B (zh) * 2021-05-08 2023-04-28 北京字节跳动网络技术有限公司 多屏互动方法、装置、设备及存储介质
CN113542859A (zh) * 2021-06-18 2021-10-22 西安万像电子科技有限公司 智能投屏系统及方法
CN113507694B (zh) * 2021-06-18 2024-02-23 厦门亿联网络技术股份有限公司 一种基于无线辅流设备的投屏方法及装置
CN113590248A (zh) * 2021-07-22 2021-11-02 上汽通用五菱汽车股份有限公司 车载终端的投屏方法、装置和可读存储介质
CN113687754A (zh) * 2021-08-10 2021-11-23 深圳康佳电子科技有限公司 应用程序分身投屏显示方法、装置、终端设备及存储介质
CN113778360B (zh) * 2021-08-20 2022-07-22 荣耀终端有限公司 投屏方法和电子设备
CN113810761B (zh) * 2021-09-17 2023-11-21 上海哔哩哔哩科技有限公司 多终端交互方法、装置及系统
CN114040242B (zh) * 2021-09-30 2023-07-07 荣耀终端有限公司 投屏方法、电子设备和存储介质
CN114035973A (zh) * 2021-10-08 2022-02-11 阿波罗智联(北京)科技有限公司 一种应用程序的投屏方法、装置、电子设备及存储介质
CN114153542A (zh) * 2021-11-30 2022-03-08 阿波罗智联(北京)科技有限公司 投屏方法、装置、电子设备及计算机可读存储介质
CN114157903A (zh) * 2021-12-02 2022-03-08 Oppo广东移动通信有限公司 重定向方法、装置、设备、存储介质及程序产品
CN114205664B (zh) * 2021-12-06 2023-09-12 抖音视界有限公司 投屏方法、投屏装置、投屏显示装置、投屏系统及介质
CN114327185B (zh) * 2021-12-29 2024-02-09 盯盯拍(深圳)技术股份有限公司 一种车机屏幕控制方法、装置、介质及电子设备
CN114356264B (zh) * 2021-12-30 2023-12-05 威创集团股份有限公司 一种信号生成方法、装置、设备及可读存储介质
CN115567630B (zh) * 2022-01-06 2023-06-16 荣耀终端有限公司 一种电子设备的管理方法、电子设备及可读存储介质
CN114489550A (zh) * 2022-01-30 2022-05-13 深圳创维-Rgb电子有限公司 投屏控制方法、投屏器及存储介质
CN114979756B (zh) * 2022-05-13 2024-05-07 北京字跳网络技术有限公司 一种实现一分多的投屏独立显示和交互方法、装置及设备
CN115174988B (zh) * 2022-06-24 2024-04-30 长沙联远电子科技有限公司 一种基于dlna的音视频投屏控制方法
CN117992007A (zh) * 2022-11-01 2024-05-07 华为技术有限公司 音频控制方法、存储介质、程序产品及电子设备
CN115633201B (zh) * 2022-12-14 2023-04-11 小米汽车科技有限公司 投屏方法、装置、电子设备及可读存储介质
TWI826203B (zh) * 2022-12-15 2023-12-11 技嘉科技股份有限公司 電腦裝置及顯示裝置
CN117156189A (zh) * 2023-02-27 2023-12-01 荣耀终端有限公司 投屏显示方法及电子设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103248945A (zh) * 2012-02-03 2013-08-14 海尔集团公司 图像传输的方法及系统
CN105791367A (zh) * 2014-12-25 2016-07-20 中国移动通信集团公司 屏幕共享中辅助媒体信息共享方法、系统和相关设备
JP2017076261A (ja) * 2015-10-15 2017-04-20 株式会社オプティム 画面共有システム及び画面共有方法
CN107113352A (zh) * 2014-10-10 2017-08-29 三星电子株式会社 共享屏幕的方法和其电子设备
CN110381195A (zh) * 2019-06-05 2019-10-25 华为技术有限公司 一种投屏显示方法及电子设备
CN111324327A (zh) * 2020-02-20 2020-06-23 华为技术有限公司 投屏方法及终端设备
CN111399789A (zh) * 2020-02-20 2020-07-10 华为技术有限公司 界面布局方法、装置及系统

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8639812B2 (en) * 2005-04-12 2014-01-28 Belkin International, Inc. Apparatus and system for managing multiple computers
US8054241B2 (en) * 2006-09-14 2011-11-08 Citrix Systems, Inc. Systems and methods for multiple display support in remote access software
US8676937B2 (en) * 2011-05-12 2014-03-18 Jeffrey Alan Rapaport Social-topical adaptive networking (STAN) system allowing for group based contextual transaction offers and acceptances and hot topic watchdogging
US20150046807A1 (en) * 2013-08-07 2015-02-12 Gface Gmbh Asynchronous Rich Media Messaging
US20150294633A1 (en) * 2014-04-15 2015-10-15 Edward K. Y. Jung Life Experience Enhancement Illuminated by Interlinked Communal Connections
US9438872B2 (en) * 2014-09-18 2016-09-06 Coretronic Corporation Projection display system and method for correcting projection region
US9998510B2 (en) * 2015-03-20 2018-06-12 Walter Partos Video-based social interaction system
US10931676B2 (en) * 2016-09-21 2021-02-23 Fyfo Llc Conditional delivery of content over a communication network including social sharing and video conference applications using facial recognition
US10346014B2 (en) * 2016-11-16 2019-07-09 Dell Products L.P. System and method for provisioning a user interface for scaling and tracking
CN107493375B (zh) * 2017-06-30 2020-06-16 北京超卓科技有限公司 移动终端扩展式投屏方法及投屏系统
JP2019016894A (ja) * 2017-07-05 2019-01-31 キヤノン株式会社 表示装置、画像処理装置、およびそれらの制御方法ならびに表示システム
CN107491279A (zh) * 2017-08-15 2017-12-19 深圳市创维群欣安防科技股份有限公司 一种实现移动终端投屏的方法、存储介质及投屏控制设备
CN108804067A (zh) * 2018-06-14 2018-11-13 上海掌门科技有限公司 信息显示方法、设备和计算机可读介质
CN108920937A (zh) * 2018-07-03 2018-11-30 广州视源电子科技股份有限公司 投屏系统、投屏方法和装置
CN109032555A (zh) * 2018-07-06 2018-12-18 广州视源电子科技股份有限公司 投屏中音频数据处理方法、装置、存储介质及电子设备
CN109558105A (zh) * 2018-12-14 2019-04-02 广州视源电子科技股份有限公司 投屏方法、投屏装置、投屏设备
CN109905293B (zh) * 2019-03-12 2021-06-08 北京奇虎科技有限公司 一种终端设备识别方法、系统及存储介质
CN110221798A (zh) * 2019-05-29 2019-09-10 华为技术有限公司 一种投屏方法、系统及相关装置
CN110333836B (zh) * 2019-07-05 2023-08-25 网易(杭州)网络有限公司 信息的投屏方法、装置、存储介质和电子装置
CN110515579A (zh) * 2019-08-28 2019-11-29 北京小米移动软件有限公司 投屏方法、装置、终端及存储介质
US20210319408A1 (en) * 2020-04-09 2021-10-14 Science House LLC Platform for electronic management of meetings

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103248945A (zh) * 2012-02-03 2013-08-14 海尔集团公司 图像传输的方法及系统
CN107113352A (zh) * 2014-10-10 2017-08-29 三星电子株式会社 共享屏幕的方法和其电子设备
CN105791367A (zh) * 2014-12-25 2016-07-20 中国移动通信集团公司 屏幕共享中辅助媒体信息共享方法、系统和相关设备
JP2017076261A (ja) * 2015-10-15 2017-04-20 株式会社オプティム 画面共有システム及び画面共有方法
CN110381195A (zh) * 2019-06-05 2019-10-25 华为技术有限公司 一种投屏显示方法及电子设备
CN111324327A (zh) * 2020-02-20 2020-06-23 华为技术有限公司 投屏方法及终端设备
CN111399789A (zh) * 2020-02-20 2020-07-10 华为技术有限公司 界面布局方法、装置及系统

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023030519A1 (zh) * 2021-09-06 2023-03-09 维沃移动通信有限公司 投屏处理方法及相关设备
CN114071230A (zh) * 2021-09-26 2022-02-18 深圳市酷开网络科技股份有限公司 多端投屏方法、计算机设备及计算机可读存储介质
WO2023045712A1 (zh) * 2021-09-26 2023-03-30 荣耀终端有限公司 投屏异常处理方法及电子设备
CN114071230B (zh) * 2021-09-26 2023-10-10 深圳市酷开网络科技股份有限公司 多端投屏方法、计算机设备及计算机可读存储介质
WO2023103948A1 (zh) * 2021-12-08 2023-06-15 华为技术有限公司 一种显示方法及电子设备
CN116193176A (zh) * 2023-02-13 2023-05-30 阿波罗智联(北京)科技有限公司 投屏方法、装置、设备以及存储介质
CN116933097A (zh) * 2023-06-27 2023-10-24 广州汽车集团股份有限公司 车辆的变型数据校验方法、装置、设备及存储介质
CN116933097B (zh) * 2023-06-27 2024-04-26 广州汽车集团股份有限公司 车辆的变型数据校验方法、装置、设备及存储介质

Also Published As

Publication number Publication date
EP4095671A4 (en) 2023-07-26
CN111324327B (zh) 2022-03-25
CN111324327A (zh) 2020-06-23
EP4095671A1 (en) 2022-11-30
US20230108680A1 (en) 2023-04-06
US11748054B2 (en) 2023-09-05

Similar Documents

Publication Publication Date Title
WO2021164631A1 (zh) 投屏方法及终端设备
WO2021164313A1 (zh) 界面布局方法、装置及系统
WO2021057830A1 (zh) 一种信息处理方法及电子设备
WO2021159922A1 (zh) 卡片显示方法、电子设备及计算机可读存储介质
WO2021027476A1 (zh) 语音控制设备的方法及电子设备
WO2021129253A1 (zh) 显示多窗口的方法、电子设备和系统
WO2023051111A1 (zh) 多个应用组合且同时启动多个应用的方法及电子设备
CN112527174B (zh) 一种信息处理方法及电子设备
CN112527222A (zh) 一种信息处理方法及电子设备
WO2022052776A1 (zh) 一种人机交互的方法、电子设备及系统
CN111949782A (zh) 一种信息推荐方法和服务设备
CN111680232A (zh) 页面展示方法、装置、设备以及存储介质
CN112230914A (zh) 小程序的制作方法、装置、终端及存储介质
WO2021052488A1 (zh) 一种信息处理方法及电子设备
CN114564101A (zh) 一种三维界面的控制方法和终端
WO2022194005A1 (zh) 一种跨设备同步显示的控制方法及系统
CN113467663B (zh) 界面配置方法、装置、计算机设备及介质
WO2022105716A1 (zh) 基于分布式控制的相机控制方法及终端设备
WO2022001261A1 (zh) 提示方法及终端设备
WO2022089276A1 (zh) 一种收藏处理的方法及相关装置
WO2023125832A1 (zh) 图片分享方法和电子设备
WO2022227978A1 (zh) 显示方法及相关装置
US20220264176A1 (en) Digital space management method, apparatus, and device
WO2023103948A1 (zh) 一种显示方法及电子设备
WO2023072113A1 (zh) 显示方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21757584

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021757584

Country of ref document: EP

Effective date: 20220822

NENP Non-entry into the national phase

Ref country code: DE