WO2022166618A1 - Screen projection method and electronic device - Google Patents

Screen projection method and electronic device

Info

Publication number
WO2022166618A1
Authority
WO
WIPO (PCT)
Prior art keywords
instruction
screen projection
application
multimedia content
type
Prior art date
Application number
PCT/CN2022/073202
Other languages
French (fr)
Chinese (zh)
Inventor
陈兰昊
徐世坤
于飞
孟庆吉
杜奕全
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202110584296.6A (external priority, published as CN114915834A)
Application filed by 华为技术有限公司
Publication of WO2022166618A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Definitions

  • the embodiments of the present application relate to the field of electronic technologies, and in particular, to a screen projection method and electronic device.
  • when the user has multiple devices, the screen content displayed by one device can be projected onto the screen of another device; for example, the multimedia content, game interface, etc. played by a small-screen device (for example, a mobile phone) can be projected to a large-screen device (for example, a computer or a smart TV) for playback, using the display screen and speaker of the large-screen device to provide the user with a better experience.
  • in the prior art, screen projection is a complex operation that covers multiple screen projection scenarios and requires the user to perform multiple steps, which may give the user an inconvenient operation experience.
  • the embodiments of the present application provide a screen projection method and an electronic device that, when a user needs to perform screen projection, provide the user with a convenient screen projection operation and improve the user experience.
  • the first device or the second device obtains sensor data; when the first device or the second device parses the sensor data as a first type of instruction, at least one of the following operations is performed: the first device mirrors the display content to the second device, and the first device sends service data to the second device; when the sensor data is parsed as a second type of instruction, at least one of the following operations is performed: the first device mirrors the display content to the first device; the first device sends service data to the first device; the first device stops mirroring the display content to the second device; and the first device sends a control instruction to the second device.
  • the service data includes at least one of the following items: the name of the multimedia content, the identifier of the multimedia content, the uniform resource locator of the multimedia content, the playback progress of the multimedia content, the playback volume of the multimedia content, and the type of the multimedia content. In this way, screen projection of the multimedia content can be realized, that is, the second device can play the multimedia content according to the above information.
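  • For illustration only (not part of the patent), the service data items listed above could be grouped into a simple structure; the field names below are assumptions introduced for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceData:
    """Hypothetical container for the service data items listed above."""
    name: Optional[str] = None               # name of the multimedia content
    content_id: Optional[str] = None         # identifier of the multimedia content
    url: Optional[str] = None                # uniform resource locator
    progress_seconds: Optional[float] = None # playback progress
    volume_percent: Optional[int] = None     # playback volume
    content_type: Optional[str] = None       # e.g. "video" or "audio"

# A receiving (second) device only needs such fields to resume playback, e.g.:
example = ServiceData(name="Running", content_id="12001",
                      url="www.video.com/12001",
                      progress_seconds=735.0, volume_percent=40,
                      content_type="video")
```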
  • sending the service data by the first device to the second device includes: when the current foreground application of the first device is a video playback application, calling an application program interface of the video playback application to obtain the service data and sending the service data to the second device, so that the second device continues playing the multimedia content according to the service data.
  • the first device mirroring the content to the second device further includes: mirroring the display content to the second device using the Miracast protocol, where Miracast is a protocol for screen mirroring.
  • the method further includes: the first device sets the foreground application associated with the display content to be displayed as a floating window or in picture-in-picture mode.
  • the gesture associated with the first type of instruction is a bottom-to-top air gesture, or a three-finger or four-finger swipe.
  • the gesture associated with the second type of instruction is a top-to-bottom air gesture, or a three-finger or four-finger swipe up.
  • the first device mirroring the display content to the second device includes: the first device analyzes an image of the current interface to obtain the position of the screen projection control, and executes the application's built-in screen projection function by simulating a user operation, so as to mirror the display content to the second device. In this way, even when the application does not provide an application program interface for screen projection, the screen projection function inside the application can be used to realize screen projection.
  • the first device sending the service data to the second device includes: the first device analyzes an image of the current interface to obtain the position of the screen projection control, and executes the application's built-in screen projection function by simulating a user operation, so as to send the service data to the second device. In this way, even when the application does not provide an application program interface for screen projection, the screen projection function inside the application can be used to realize screen projection.
  • the first device or the second device stores a database of operation instructions, and parsing the sensor data includes: comparing the sensor data, or the result obtained after processing the sensor data, with the data in the database, and determining whether the sensor data corresponds to the first type of instruction or the second type of instruction; a minimal lookup is sketched below.
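  • As a rough illustration only, a database of operation instructions could be a lookup from a recognized operation to an instruction type. The labels and the particular gesture-to-type assignments below are assumptions; the patent leaves them configurable.

```python
from typing import Optional

# Instruction-type labels; the strings are placeholders, not terms from the patent.
FIRST_TYPE = "first_type"    # project multimedia content to the second device
SECOND_TYPE = "second_type"  # bring playback back to the first device

# Hypothetical operation-instruction database. Which gesture maps to which
# instruction type is a device-level configuration choice.
INSTRUCTION_DB = {
    "air_gesture_bottom_to_top": FIRST_TYPE,
    "three_finger_swipe": FIRST_TYPE,
    "four_finger_swipe": FIRST_TYPE,
    "air_gesture_top_to_bottom": SECOND_TYPE,
    "three_finger_swipe_up": SECOND_TYPE,
    "four_finger_swipe_up": SECOND_TYPE,
}

def parse_operation(recognized_operation: str) -> Optional[str]:
    """Return FIRST_TYPE, SECOND_TYPE, or None for an unrecognized operation."""
    return INSTRUCTION_DB.get(recognized_operation)
```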
  • the first device is a mobile phone and the second device is a large screen.
  • the present application provides an electronic device including a memory and one or more processors, where the memory is used to store computer program code, and the computer program code includes computer instructions; when the computer instructions are executed by the processor, the electronic device is caused to perform the screen projection method.
  • the present application provides a computer-readable storage medium including computer instructions which, when run on an electronic device, cause the electronic device to perform the screen projection method.
  • the present application provides a computer program product which, when run on a computer, causes the computer to execute the screen projection method.
  • the solution proposed in this application can cover a variety of screen projection scenarios, shield the user from the underlying implementation, and unify the implementation logic, so as to provide the user with a better and more convenient screen projection experience.
  • FIG. 1 shows a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application
  • FIGS. 2A-2B show schematic diagrams of usage scenarios provided by embodiments of the present application.
  • FIG. 3 shows a flowchart of a method provided by an embodiment of the present application
  • FIG. 4 shows a flowchart of a method provided by an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • a feature defined as "first" or "second" may expressly or implicitly include one or more of that feature.
  • "plural" means two or more.
  • the embodiments of the present application provide a method and device for screen projection, which can be applied to mobile phones, tablet computers, wearable devices (for example, watches, wristbands, helmets, headphones, etc.), in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, laptops, ultra-mobile personal computers (UMPCs), netbooks, personal digital assistants (PDAs), smart home devices (for example, smart TVs, smart speakers, smart cameras, etc.), and other electronic devices.
  • FIG. 1 shows a schematic diagram of the hardware structure of the electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 . The controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses repeatedly. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves the efficiency of the system.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed into a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave and radiate it out through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify the signal, and then convert it into an electromagnetic wave for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • the display screen 194 is used to display a display interface of an application, such as a viewfinder interface of a camera application.
  • Display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • electronic device 100 may include one or more display screens 194 .
  • the ISP is used to process the data fed back by the camera 193 .
  • when the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • the ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • the ISP can also optimize parameters such as the exposure and color temperature of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • the electronic device 100 may include one or more cameras 193 .
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example, moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area, where the storage program area can store the operating system and the software code of at least one application (for example, a Huawei Video application, Wallet, etc.).
  • the storage data area may store data generated during the use of the electronic device 100 (eg, captured images, recorded videos, etc.) and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, saving files such as pictures and videos in the external memory card.
  • the electronic device 100 may implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, the application processor, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the keys 190 include a power key, a volume key, and the like; the keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback. For example, touch operations acting on different applications (such as taking pictures, playing audio, etc.) can correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate a charging state, a change in power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card. The SIM card can be connected to and separated from the electronic device 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195 .
  • the components shown in FIG. 1 do not constitute a specific limitation on the electronic device 100; the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or use a different arrangement of components.
  • the combination/connection relationship between the components in FIG. 1 can also be adjusted and modified.
  • the user can display the content displayed on the first device on the second device, and use hardware such as a display screen and a speaker of the second device to obtain a better content playback experience.
  • the content displayed on the first device may be multimedia content (eg, pictures, videos, audio, etc.), and for example, the content displayed on the first device may be games, application interfaces, and the like.
  • the first device and the second device may each be a type of electronic device 100; for example, the first device is a mobile phone or a tablet computer, and the second device is a smart TV, a personal computer, a large-screen display (referred to as a large screen), or the like.
  • Screencasting means that the content on the first device can be displayed on the second device through a certain protocol.
  • the first device optionally projects the multimedia content (such as pictures, videos, audios, etc.) to the second device for display.
  • common protocols include the DLNA (Digital Living Network Alliance) protocol, the Chromecast protocol, and the like. As shown in FIG. 2A, which includes a mobile phone 201 and a smart TV 202, the mobile phone 201 sends the identifier (such as a link) and the playback information (such as the playback progress, playback speed, and playback volume) of the multimedia content "Running" to be played to the smart TV 202; the smart TV 202 obtains the information related to "Running" from the network or a database according to the identifier (such as the link) and the playback information, and plays the multimedia content "Running", thereby achieving the effect of transferring the playback of multimedia content from one device to another.
  • the user can control the playback progress, playback speed, and playback sound of "Running" on the second device.
  • the first device optionally displays the content displayed by the first device on the second device in a mirror image, for example, encodes the content displayed by the first device and transmits it to the second device through a video stream.
  • the second device decodes and plays the video stream; common protocols include the Miracast protocol, the AirPlay protocol, and the like. As shown in FIG. 2B, the mobile phone 203 mirrors the content displayed on the mobile phone (the user interface of a video application) to the smart TV 204; that is, the content displayed on the smart TV 204 is the same as the content displayed on the mobile phone 203, and when the content displayed on the mobile phone 203 changes, the content displayed on the smart TV 204 also changes accordingly.
  • common user interactions with the electronic device 100 include voice interaction, touch screen gestures, and air gestures.
  • a touch screen gesture refers to a gesture generated by the user touching the display screen of the electronic device 100.
  • common touch screen gestures may include single-finger operations, such as tap, long press, pan, and double click, and may also include multi-finger operations, such as pinch, three-finger swipe, and rotate.
  • an air gesture refers to a gesture made by the user at a certain distance from the display screen of the electronic device 100.
  • a sensor of the electronic device 100 (such as a camera or a distance sensor) captures the shape of the user's hand and compares it with the preset gestures in the database, and the corresponding operation is then performed according to the matched preset gesture. It can be understood that a gesture is only one way of triggering a function, and the present application does not limit which gesture is used.
  • for example, when the user is watching videos on a smart TV in the living room and needs to go to the balcony or another room to do something, the user may want the video playback not to be terminated, that is, to transfer the video played on the large screen to the mobile phone to continue watching.
  • after the user finishes the task and returns to the living room, the user also hopes to be able to transfer the video played on the mobile phone back to the large screen to continue watching. Therefore, a method is needed that can conveniently switch the played video between the first device and the second device, so as to meet the user's video playback requirements at different times and in different scenarios.
  • the first device and the second device are capable of data interaction.
  • the first device and the second device are optionally in the same local area network and exchange data through the local area network; the first device and the second device optionally use a point-to-point connection and exchange data directly through a P2P channel; the two devices optionally use data traffic and exchange data through the wide area network.
  • This application does not limit the manner of data interaction between the first device and the second device.
  • the screen projection of multimedia content requires the devices to perform mutual authentication so that screen projection is performed in a trusted environment, for example, the first device and the second device are logged in to the same account, such as a Huawei account, an Apple ID, or a Samsung account.
  • the projection of the multimedia content requires the applications on the two devices to be logged in to the same account; for example, the first device and the second device have the same application installed, or different versions of the same application, and both the first device and the second device are logged in to the application, so that the second device can obtain the specified multimedia content from the database according to the received identifier (e.g., link) of the multimedia content.
  • the implementation includes step 301, step 302, and step 303. It can be understood that step 301, step 302, and step 303 are all optional steps, and the execution order of step 302 and step 303 can be adjusted.
  • the first device acquires sensor data (step 301).
  • when the gesture is a touch screen gesture, the first device acquires the touch screen data reported by the display driver corresponding to the display screen 194.
  • when the gesture is an air gesture, the first device optionally acquires image data through the camera, or optionally acquires millimeter-wave data through a radar sensor.
  • when the control instruction (screen projection instruction) is a voice instruction, the first device acquires the data collected by the microphone 170C.
  • the first device acquires an operation instruction according to the acquired sensor data (step 302).
  • the first device stores a database of sensor data and operation instructions, compares the result obtained by analyzing the sensor data with the data in the operation instruction database, and obtains the operation instruction. For example, the first device detects data on the touch screen 194 twice within a short period of time, and determines according to the operation instruction database that the operation is a double-click operation; for another example, the camera captures that the user's hand moves from a first position close to the upper edge of the image to a second position close to the lower edge of the image, and the first device determines according to the operation instruction database that the operation is a bottom-up gesture (operation). A rough sketch of such classification follows.
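  • For illustration only, the sketch below classifies raw sensor observations into operations that can then be looked up in an operation-instruction database such as the one sketched earlier. The thresholds and the mapping of an image-space track to a gesture name (which depends on camera orientation) are assumptions, not values from the patent.

```python
from typing import List, Optional, Tuple

DOUBLE_TAP_WINDOW_S = 0.3  # illustrative threshold, not from the patent

def classify_touch(tap_timestamps: List[float]) -> Optional[str]:
    """Detect a double tap: two touch-down events within a short time window."""
    if len(tap_timestamps) >= 2 and tap_timestamps[-1] - tap_timestamps[-2] <= DOUBLE_TAP_WINDOW_S:
        return "double_tap"
    return None

def classify_hand_track(hand_positions: List[Tuple[float, float]],
                        frame_height: float) -> Optional[str]:
    """Classify an air gesture from hand-center positions (x, y) across camera frames.

    y grows downward in image coordinates, so a hand moving from near the top
    edge to near the bottom edge of the frame traverses increasing y. How such
    a track maps onto a "bottom-up" or "top-down" user gesture depends on the
    camera orientation and is left to the device configuration.
    """
    if not hand_positions:
        return None
    first_y, last_y = hand_positions[0][1], hand_positions[-1][1]
    if first_y < 0.2 * frame_height and last_y > 0.8 * frame_height:
        return "hand_track_top_edge_to_bottom_edge"
    if first_y > 0.8 * frame_height and last_y < 0.2 * frame_height:
        return "hand_track_bottom_edge_to_top_edge"
    return None
```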
  • the first type of instruction is used to project multimedia content to another device for playback, for example, to project the multimedia content played by the first device to the second device for playback; the second type of instruction is used to play, on the first device, multimedia content played on another device.
  • the first type of instruction includes but is not limited to voice instructions, touch screen gestures, and air gestures that instruct playing the multimedia content on the second device; the second type of instruction includes but is not limited to voice instructions, touch screen gestures, and air gestures that instruct playing the multimedia content on the first device.
  • the first type of instruction and the second type of instruction may each be associated with a plurality of voice instructions, touch screen gestures, and air gestures; when one of the associated voice instructions, touch screen gestures, or air gestures is captured, the corresponding action can be performed.
  • the gesture associated with the first type of instruction can be an air gesture of opening the palm and moving from bottom to top, or a gesture of touching the touch screen of the mobile phone and sliding up from the bottom of the screen, or a multi-finger gesture different from the gestures preset in the system, such as a three-finger swipe up or a four-finger swipe up, to prevent conflicts.
  • the gesture associated with the second type of instruction can be an air gesture of opening the palm and moving from top to bottom, or a gesture of touching the touch screen of the mobile phone and sliding down from the top of the screen, or a multi-finger gesture different from the gestures preset in the system, such as a three-finger swipe or a four-finger swipe.
  • when the obtained operation instruction is the first type of instruction, the first device plays the multimedia content on the second device (step 303).
  • when the obtained operation instruction is the second type of instruction, the first device plays the multimedia content on the first device (step 304).
  • FIG. 4 is an embodiment of the present application, which is used to introduce a solution for processing multimedia content and screen mirroring by the first device after recognizing the first type of instruction and the second type of instruction.
  • the first device acquires sensor data (step 401 ), and the first device identifies the operation instruction as the first type of instruction according to the acquired sensor data (step 402 ).
  • the above steps have been described in step 301 and step 302, and will not be described in detail here.
  • the first device mirrors the content to the second device (step 403). After the first device recognizes the first type of instruction, it mirrors and projects the displayed content to the second device. For example, the first device mirrors the content to the second device through the Miracast protocol.
  • the first device sends the service data to the second device (step 404), and after parsing the service data, the second device plays the multimedia content.
  • when screen projection of the content is an operation within an application and the application provides an application program interface for screen projection, the first device calls the application program interface provided by the application to realize screen projection after recognizing the first type of instruction.
  • when the first device identifies that the current foreground application is a video playback application and the video playback application has an application program interface for screen projection, the first device calls the screen projection application program interface of the application, and the application collects the service data and returns it to the first device.
  • the first device sends service data to a second device, and the second device implements continuous playback of multimedia content according to the received service data.
  • the service data includes one or more of the following options: the name of the multimedia content, such as "Running" shown in FIG. 2A; the identifier of the multimedia content, for example, the ID corresponding to "Running" in a video playback application is 12001; the uniform resource locator of the multimedia content, for example, the uniform resource locator corresponding to "Running" is www.video.com/12001; the playback progress of the multimedia content; the playback volume of the multimedia content; and the type of the multimedia content, for example, the type corresponding to "Running" is video, and the type corresponding to music is audio. A sketch of sending such data is given after this item.
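  • Purely for illustration, the "Running" example above could be serialized and pushed to the second device as follows. The wire format (JSON over a plain TCP socket) and the progress/volume values are assumptions; the patent does not prescribe a transport.

```python
import json
import socket

def send_service_data(peer_ip: str, peer_port: int) -> None:
    """Serialize example service data and push it to the second device."""
    payload = json.dumps({
        "name": "Running",
        "id": "12001",
        "url": "www.video.com/12001",
        "progress_seconds": 735,  # hypothetical playback position
        "volume_percent": 40,     # hypothetical playback volume
        "type": "video",
    }).encode("utf-8")
    with socket.create_connection((peer_ip, peer_port)) as conn:
        conn.sendall(payload)
```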
  • the first device sends the service data to the second device (step 404), and after parsing the service data, the second device plays the multimedia content.
  • when the first device recognizes that the operation instruction is the first type of instruction, it analyzes the controls of the current interface through image analysis, executes the application's built-in screen projection function by simulating user operations, and sends the service data to the second device, so as to realize playback of the multimedia content on the second device.
  • when the first device recognizes that the operation instruction is the first type of instruction, it first determines whether the foreground application has a screen projection application program interface; if so, it calls the screen projection application program interface; if not, it projects the interface of the first device to the second device by screen mirroring.
  • when the first device recognizes that the operation instruction is the first type of instruction, it first determines whether the foreground application has a screen projection application program interface; if there is one, it calls the screen projection application program interface to perform screen projection; if not, it determines whether the current interface includes a screen projection control; if there is a screen projection control, it simulates the user's operation of tapping the control to project the multimedia content to the second device; if there is no screen projection control, the interface of the first device is mirrored to the second device by screen mirroring. This decision flow is sketched below.
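  • A minimal sketch of the API-first/control-second/mirror-last decision flow described above. The objects `app`, `current_interface`, and `second_device`, and every method on them, are placeholders introduced for illustration only.

```python
def project_to_second_device(app, current_interface, second_device) -> None:
    """Decision flow: projection API first, then in-app control, then mirroring."""
    if app.has_projection_api():
        # Preferred path: the foreground application exposes a screen projection API.
        service_data = app.call_projection_api()
        second_device.resume_playback(service_data)
    elif (control := current_interface.find_projection_control()) is not None:
        # Fallback: locate the in-app projection control by image analysis
        # and simulate a tap on it.
        current_interface.simulate_tap(control.position)
    else:
        # Last resort: mirror the whole interface, e.g. via Miracast.
        second_device.start_mirroring(source=app.display)
```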
  • when the first device adopts DLNA screen projection, even if the first device receives a user operation that returns the foreground application to the background, playback on the second device does not pause, because DLNA screen projection only needs to maintain a basic connection channel for transmitting the playback information.
  • when the first device adopts mirror projection and the first device receives a user operation that returns the foreground application to the background, the mirror projection will capture the application in the foreground at that moment (such as a desktop application or another application), resulting in content that was not originally expected being played on the second device.
  • when the first device determines that the first application is mirroring the screen to the second device and performs the operation of returning the first application to the background, the first device optionally retains the first application (the video playback application, a part of the video playback application, or a part obtained by cropping the video playback application) in the form of a floating window, keeps the life cycle of the floating window in the Resumed state, and projects the content of the floating window to the second device; or the first device optionally retains the video being played by the first application in the form of picture-in-picture when performing the operation of returning the first application to the background, and at the same time projects the content of the picture-in-picture to the second device.
  • when the first device adopts DLNA screen projection, the first device can receive user control commands (for example, fast-forward, fast-rewind, turn up the volume, turn down the volume) and transmit these control commands to the second device through the connection channel between the first device and the second device, so as to control the playback of the multimedia content on the second device.
  • the second device may also accept a user's control command on the multimedia content (for example, fast-forward, fast-rewind, turn up the volume, turn down the volume).
  • when the first device adopts mirror projection and the first device is currently playing the multimedia content, the first device can receive user control commands (for example, fast-forward, fast-rewind, turn up the volume, turn down the volume), apply the control commands to the multimedia content, and at the same time transmit the execution result to the second device, thereby changing the display result of the streaming media on the second device. Meanwhile, the user cannot control the playback of the streaming media on the second device.
  • when the second device has a function capable of sensing user operations, for example, the second device has a camera, a radar, or the like that can capture user gestures, the second device optionally pre-stores a gesture library related to playback control; after acquiring and recognizing the user gesture, it transmits the recognized control result to the first device, and the first device controls the playback of the streaming media according to the control result, thereby changing the display effect of the streaming media on the second device.
  • the second device optionally transmits the captured video of the user gesture or the processing result of the video to the first device, and the first device executes the identified command on the streaming media according to the stored playback-related gesture library, Thus, the display effect of the streaming media on the second device is changed.
  • the second device will determine whether multimedia-content projection or mirror projection is currently being used; if it is multimedia-content projection, gesture recognition is performed; if it is mirror projection, gesture recognition is not performed.
  • alternatively, the second device will determine whether multimedia-content projection or mirror projection is currently being used; if it is multimedia-content projection, gesture recognition is performed; if it is mirror projection, image analysis is used to judge whether what is currently being played is a continuous media stream; if yes, gesture recognition is performed; if not, gesture recognition is not performed. A sketch of this gating is given below.
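  • For illustration only, the gating logic above could look like the following sketch; the mode labels and the continuous-media flag (which would come from image analysis of the received stream) are assumptions.

```python
def should_run_gesture_recognition(projection_mode: str,
                                   looks_like_continuous_media: bool) -> bool:
    """Decide whether the second device runs playback-control gesture recognition.

    projection_mode is "content" for multimedia-content projection (e.g. DLNA)
    and "mirror" for screen mirroring.
    """
    if projection_mode == "content":
        return True
    if projection_mode == "mirror":
        # Only recognize gestures if the mirrored frames appear to be a
        # continuous media stream rather than an arbitrary interface.
        return looks_like_continuous_media
    return False
```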
  • the first device acquires sensor data (step 401 ), and the first device identifies the operation instruction as a second type of instruction according to the acquired sensor data (step 405 ).
  • the above steps have been described in step 301 and step 302, and will not be described in detail here.
  • the playback device of the multimedia content is the second device.
  • the first device recognizes that the operation instruction is the second type of instruction, it sends a message for acquiring the multimedia content to the second device.
  • the second device optionally directly mirrors and projects the played content to the first device (step 406).
  • the second device optionally determines whether the interface currently playing the multimedia content provides an application program interface for screen projection; if there is such an application program interface, it calls the application program interface to obtain the service data and sends the service data to the first device (step 407); if not, it mirrors the played content to the first device.
  • in some possible implementations, the second device still keeps playing the streaming media after the mirror projection or streaming media projection is completed; in other possible implementations, after the mirror projection is completed, the second device continues to play the streaming media in the form of a floating window or picture-in-picture; in some other possible implementations, the second device returns to the main interface or plays other multimedia content after completing the streaming media projection.
  • the playback device of the multimedia content is the first device.
  • the first device recognizes that the operation instruction is the second type of instruction, and determines that the first device is currently mirroring and projecting the screen to the second device, the first device stops mirroring and projecting the screen to the second device (step 408 ).
  • when the first device recognizes that the operation instruction is the second type of instruction and judges that it is currently performing screen projection through service data, it sends a message for terminating the playback of the multimedia content to the second device; after receiving the message, the second device pauses or terminates the playback of the multimedia content.
  • when the first device recognizes that the operation instruction is the second type of instruction and determines that there is a record of screen projection to the second device within a certain period of time, it sends a message for terminating the playback to the second device (step 409); after receiving the message, the second device pauses or stops playing the multimedia content.
  • the first device or the second device determines the current network quality; if the network quality is better than a threshold, mirror projection is performed at a first resolution, and if the network quality is lower than the threshold, projection is performed at a second resolution, where the first resolution is higher than the second resolution. A minimal sketch follows.
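  • A minimal sketch of the resolution selection above. The concrete resolutions and the quality metric (for example, estimated throughput) are assumptions; the patent only states that a better-than-threshold network uses the first, higher resolution.

```python
HIGH_RES = (1920, 1080)  # first (higher) resolution - illustrative values
LOW_RES = (1280, 720)    # second (lower) resolution - illustrative values

def pick_mirroring_resolution(measured_quality: float, threshold: float):
    """Choose the mirroring resolution from the measured network quality."""
    return HIGH_RES if measured_quality > threshold else LOW_RES
```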
  • the above-mentioned media stream can also be an audio stream.
  • in this case, the first device does not need to display the interface of the first application in the form of a floating window or display the multimedia content in the form of picture-in-picture.
  • when the first application has the permission to play in the background, no additional operations are required; when the first application does not have the permission to play in the background, the life cycle of the first application is maintained when the first application returns to the background.
  • the first device and the second device in the above solution may be used in scenarios of audio calls and video calls.
  • when the main devices of the video call are the first device and a peer device, the first device can migrate the video call to the second device (the large screen) after receiving the gesture and recognizing it as the first type of instruction; when the main devices of the video call are the second device and the peer device, the first device can transfer the video call to the first device after receiving the gesture and recognizing it as the second type of instruction.
  • the above solution enables the first device/second device to have video call capability even when it does not have audio call software/video call software installed, and realizes migration of the audio call/video call subject.
  • when the first device/second device and the peer device are using a VoIP call and the first device receives the gesture and parses it into the first type of instruction, the first device can optionally call the VoIP number of the second device to pull the second device into a conference call (session).
  • the call session includes the first device, the second device and the peer device.
  • the first device exits the session, and the transition from the first device to the second device is implemented.
  • the first device still remains in the session and remains in a mute state, so that when the user still needs to switch the subject of the call, it can respond quickly.
  • the first device still remains in the session in a muted state and counts the time that it has resided in the session; when the residence time is greater than a first time threshold, the first device exits the session. This not only allows the user to quickly switch the call subject, but also reduces the power consumption of the first device when the user no longer needs to use it. A sketch of this timed exit is given below.
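  • For illustration only, the timed exit described above could be driven by a simple timer. The `session` object and its `mute()`/`leave()` methods are hypothetical placeholders for whatever the VoIP stack actually provides.

```python
import threading

def stay_muted_then_leave(session, first_time_threshold_s: float) -> None:
    """Keep the first device muted in the conference session, then leave it."""
    session.mute()
    # Leaving only after the threshold keeps quick call-subject switching
    # possible, while limiting power consumption once the user no longer
    # needs the first device.
    timer = threading.Timer(first_time_threshold_s, session.leave)
    timer.daemon = True
    timer.start()
```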
  • when the first device/second device and the peer device are using a VoIP call and the first device receives the gesture and parses it into the second type of instruction, the first device can optionally send a call request to the second device, where the call request optionally includes fields or information used to identify the first device, such as a VoIP number, a UUID, and the like.
  • the second device pulls the first device into a conference call (session).
  • the call session includes the first device, the second device and the peer device.
  • the second device exits the session, so as to realize the migration of the call from the second device to the first device.
  • when the first device/second device and the peer device use an account-based call, the above method may also be used, which is not repeated in this application.
  • the first device optionally projects the video to the second device by way of screen projection.
  • the video input source of the first device is still the image captured by the camera of the first device.
  • a close-range user portrait can still be captured.
  • a video transmission channel is established between the first device and the second device, and when the gesture is received and parsed into the first type of instruction, the second device can optionally turn on the camera and transmit the captured image to the first device.
  • the first device transmits the received video stream as a captured image to the peer device.
  • an audio transmission channel is established between the first device and the second device, and when the gesture is received and parsed into the first type of instruction, the second device can optionally turn on the microphone to collect sound and transmit the collected sound to the first device; the first device transmits the received audio stream, as the collected audio, to the peer device. In the above manner, the user obtains a better migration experience, that is, the video call can be completed without picking up the second device.
  • the screen mirroring method is used first, and when the first time threshold is reached, the above method of switching the call subject is adopted.
  • when the subject of the call is the second device and the peer device, and the gesture is received and parsed into the second type of instruction, the subject of the call can be switched from the second device to the first device in a similar manner, which is not discussed further here.
  • the video call in the above method may also be an audio call, which is not limited in this application.
  • the solution of the present application can also be used for audio projection, for example, projecting the audio (music, FM, etc.) played on the first device to a speaker; it can also optionally be implemented with directional gestures for calls (for example, directional gestures on the screen or directional air gestures).
  • the present application provides an electronic device including a memory and one or more processors, where the memory is used to store computer program code, and the computer program code includes computer instructions; when the computer instructions are executed by the processor, the electronic device is caused to perform the screen projection method.
  • the present application provides a computer-readable storage medium, including computer instructions, when the computer instructions are executed on an electronic device, the electronic device performs a screen projection method.
  • the present application provides a computer program product, which when the computer program product runs on a computer, enables the computer to execute a method for screen projection.
  • the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a screen projection method and an electronic device. The method comprises: when a user needs to perform screen projection, obtaining sensor data; when the sensor data is identified as an instruction of a first type, executing at least one of mirror projecting content to a second device and sending service data to the second device; when the sensor data is identified as an instruction of a second type, executing at least one of mirror projecting the content to a first device, sending the service data to the first device, stopping mirror projecting the content to the second device, and sending the instruction to the second device. The solution provided in the present application is applicable to various screen projection scenarios, and implementation logic is unified, thereby giving better and more convenient screen projection usage experience to a user.

Description

Screen Projection Method and Electronic Device
This application claims priority to Chinese Patent Application No. 202110171010.1, filed with the China National Intellectual Property Administration on February 8, 2021 and entitled "Screen Projection Method and Electronic Device", and to Chinese Patent Application No. 202110584296.6, filed with the China National Intellectual Property Administration on May 27, 2021 and entitled "Screen Projection Method and Electronic Device", both of which are incorporated herein by reference in their entireties.
Technical Field
The embodiments of the present application relate to the field of electronic technologies, and in particular, to a screen projection method and an electronic device.
Background
When a user has multiple devices, the screen content displayed by one device can be projected onto the screen of another device; for example, the multimedia content, game interface, etc. played by a small-screen device (for example, a mobile phone) can be projected to a large-screen device (for example, a computer or a smart TV) for playback, using the display screen and speaker of the large-screen device to provide the user with a better experience.
In the prior art, screen projection is a complex operation that covers multiple screen projection scenarios and requires the user to perform multiple steps, which may give the user an inconvenient operation experience.
Summary of the Invention
The embodiments of the present application provide a screen projection method and an electronic device that, when a user needs to perform screen projection, provide the user with a convenient screen projection operation and improve the user experience.
To achieve the above purpose, the embodiments of the present application adopt the following technical solutions:
在一种可能的设计中,第一设备或者第二设备获取传感器数据;当第一设备或者第二设备解析传感器数据为第一类指令时,执行下列操作中的至少一种:第一设备将显示内容镜像投屏至第二设备,第一设备发送业务数据至第二设备;当解析传感器数据为第二类指令时,执行下列操作中的至少一种:第一设备将显示内容镜像投屏至第一设备;第一设备发送业务数据至第一设备;第一设备停止将显示内容镜像投屏至第二设备,第一设备发送控制指令至第二设备。In a possible design, the first device or the second device obtains the sensor data; when the first device or the second device parses the sensor data as the first type of instruction, at least one of the following operations is performed: the first device will The display content is mirrored and projected to the second device, and the first device sends the service data to the second device; when parsing the sensor data as the second type of instruction, at least one of the following operations is performed: the first device mirrors and projects the display content to the screen to the first device; the first device sends service data to the first device; the first device stops mirroring the display content to the second device, and the first device sends a control instruction to the second device.
In a possible design, the service data includes at least one of the following items: a name of the multimedia content, an identifier of the multimedia content, a uniform resource locator of the multimedia content, a playback progress of the multimedia content, a playback volume of the multimedia content, and a type of the multimedia content. In this way, screen projection of the multimedia content can be realized, that is, the second device can play the multimedia content according to the above information.
In a possible design, the first device sending the service data to the second device includes: when the foreground application of the first device is a video playback application, calling an application program interface of the video playback application to obtain the service data, and sending the service data to the second device, so that the second device continues playing the multimedia content according to the service data.
In a possible design, the first device mirroring the content to the second device further includes: mirroring the displayed content to the second device using the Miracast protocol, where Miracast is a protocol for screen mirroring.
In a possible design, after the displayed content is mirrored to the second device, the method further includes: the first device displaying the foreground application associated with the displayed content in a floating window or in picture-in-picture mode.
In a possible design, the gesture associated with the first-type instruction is a bottom-to-top air gesture, a three-finger slide-down, or a four-finger slide-down.
In a possible design, the gesture associated with the second-type instruction is a top-to-bottom air gesture, a three-finger slide-up, or a four-finger slide-up.
In a possible design, the first device mirroring the displayed content to the second device includes: the first device analyzing the current interface through image analysis to obtain the position of a screen projection control, and executing the built-in screen projection function of the application by simulating a user operation, so as to mirror the displayed content to the second device. In this way, even when the application does not provide an application program interface for screen projection, the screen projection function inside the application can still be used to realize screen projection.
In a possible design, the first device sending the service data to the second device includes: the first device analyzing the current interface through image analysis to obtain the position of a screen projection control, and executing the built-in screen projection function of the application by simulating a user operation, so as to send the service data to the second device. In this way, even when the application does not provide an application program interface for screen projection, the screen projection function inside the application can still be used to realize screen projection.
In a possible design, the first device or the second device stores a database of operation instructions, and parsing the sensor data includes: comparing the sensor data, or a result obtained after processing the sensor data, with data in the database, to determine whether the sensor data corresponds to the first-type instruction or the second-type instruction.
In a possible design, the first device is a mobile phone, and the second device is a large screen.
In a possible design, the present application provides an electronic device, including a memory and one or more processors, where the memory is configured to store computer program code, and the computer program code includes computer instructions; when the computer instructions are executed by the processors, the electronic device is caused to perform the screen projection method.
In a possible design, the present application provides a computer-readable storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to perform the screen projection method.
In a possible design, the present application provides a computer program product, which, when run on a computer, causes the computer to perform the screen projection method.
The solution proposed in this application can accommodate multiple screen projection scenarios, shield the user from the underlying implementation, and unify the implementation logic, thereby providing the user with a better and more convenient screen projection experience.
Description of Drawings
FIG. 1 shows a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application;
FIG. 2A and FIG. 2B show schematic diagrams of usage scenarios provided by embodiments of the present application;
FIG. 3 shows a flowchart of a method provided by an embodiment of the present application;
FIG. 4 shows a flowchart of a method provided by an embodiment of the present application.
Detailed Description of Embodiments
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise stated, "/" means "or"; for example, A/B may represent A or B. "And/or" in this document merely describes an association relationship between associated objects, and indicates that three relationships may exist; for example, A and/or B may represent the following three cases: only A exists, both A and B exist, and only B exists. In addition, in the description of the embodiments of the present application, "multiple" means two or more than two.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only, and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments, unless otherwise stated, "multiple" means two or more.
The embodiments of the present application provide a screen projection method and apparatus, which can be applied to electronic devices such as mobile phones, tablet computers, wearable devices (for example, watches, wristbands, helmets, and earphones), in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), and smart home devices (for example, smart TVs, smart speakers, and smart cameras). It can be understood that the embodiments of the present application do not impose any limitation on the specific type of the electronic device.
Exemplarily, FIG. 1 shows a schematic diagram of the hardware structure of the electronic device 100. As shown in FIG. 1, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices, or may be integrated into one or more processors. The controller may be the nerve center and command center of the electronic device 100. The controller may generate an operation control signal according to an instruction operation code and a timing signal, to complete the control of fetching and executing instructions. A memory may further be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory. This avoids repeated access and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
The USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and peripheral devices. The charging management module 140 is configured to receive a charging input from the charger. The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like. The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization. For example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antennas may be used in combination with a tuning switch.
The mobile communication module 150 may provide wireless communication solutions applied to the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify a signal modulated by the modem processor and convert it into an electromagnetic wave for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 and at least some modules of the processor 110 may be provided in the same device.
The wireless communication module 160 may provide wireless communication solutions applied to the electronic device 100, including a wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation and amplification on it, and convert it into an electromagnetic wave for radiation through the antenna 2.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with a network and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The display screen 194 is used to display the display interface of an application, for example, the viewfinder interface of a camera application. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include one or more display screens 194.
The ISP is used to process data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, and light is transmitted to the photosensitive element of the camera through the lens; the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP may also perform algorithm optimization on the noise, brightness, and skin tone of the image, and may optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or videos. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include one or more cameras 193.
The digital signal processor is used to process digital signals. In addition to digital image signals, it may also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, for example, moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example, the transfer mode between neurons in the human brain, it quickly processes input information and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device 100, for example, image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and the software code of at least one application (for example, the Huawei Video application or Wallet). The data storage area may store data generated during the use of the electronic device 100 (for example, captured images and recorded videos). In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, for example, at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The external memory interface 120 may be used to connect an external memory card, for example, a Micro SD card, to expand the storage capacity of the electronic device. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, saving files such as pictures and videos in the external memory card.
The electronic device 100 may implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100. The motor 191 may generate a vibration prompt. The motor 191 may be used for an incoming call vibration prompt, and may also be used for touch vibration feedback. For example, touch operations acting on different applications (for example, taking pictures and playing audio) may correspond to different vibration feedback effects. The touch vibration feedback effect may also be customized. The indicator 192 may be an indicator light, which may be used to indicate the charging state and power changes, and may also be used to indicate messages, missed calls, notifications, and the like. The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into contact with and separated from the electronic device 100 by being inserted into the SIM card interface 195 or pulled out from the SIM card interface 195.
It can be understood that the components shown in FIG. 1 do not constitute a specific limitation on the electronic device 100. The electronic device 100 may include more or fewer components than shown in the figure, or combine some components, or split some components, or have a different arrangement of components. In addition, the combination/connection relationships between the components in FIG. 1 may also be adjusted and modified.
In some possible implementations, the user may present content displayed on a first device on a second device, and use hardware such as the display screen and speakers of the second device to obtain a better content playback experience. For example, the content displayed on the first device may be multimedia content (for example, pictures, videos, and audio), or may be a game, an application interface, or the like.
In some possible implementations, the first device and the second device may each be an electronic device 100 as described above; for example, the first device is a mobile phone or a tablet computer, and the second device is a smart TV, a personal computer, a large-screen device (large screen for short), or the like.
Screen projection means that, through a certain protocol, the content on the first device can be presented on the second device. In some possible implementations, the first device optionally projects multimedia content (for example, pictures, videos, and audio) onto the second device for presentation, using common protocols such as the DLNA (Digital Living Network Alliance) protocol and the Chromecast protocol. As shown in FIG. 2A, which includes a mobile phone 201 and a smart TV 202, the mobile phone 201 sends the identifier (for example, a link) and the playback information (for example, the playback progress, playback speed, and playback volume) of the multimedia content "Running" being played to the smart TV 202; the smart TV 202 obtains information related to "Running" from the network or a database according to the identifier (for example, the link) and the playback information (for example, the playback progress, playback speed, and playback volume), and plays the multimedia content "Running", thereby achieving the effect of moving the playback of multimedia content from one device to another. In this case, the user can control the playback progress, playback speed, playback volume, and the like of "Running" on the second device. In other possible implementations, the first device optionally mirrors the content it displays onto the second device; for example, the content displayed by the first device is encoded and transmitted to the second device as a video stream, and the second device decodes and plays the video stream, using common protocols such as the Miracast protocol and the AirPlay protocol. As shown in FIG. 2B, which includes a mobile phone 203 and a smart TV 204, the mobile phone mirrors the content displayed on the mobile phone (the user interface of a video application) to the smart TV 204, that is, the content displayed on the smart TV 204 is consistent with that displayed on the mobile phone 203; when the user interface on the mobile phone 203 changes, the smart TV 204 changes accordingly.
In some possible implementations, common ways in which a user interacts with the electronic device 100 include voice interaction, touch screen gestures, air gestures, and the like. A touch screen gesture is a gesture generated by the user through contact with the display screen of the electronic device 100. Common touch screen gestures may include single-finger operations, such as tap, press, pan, and double click, and may also include multi-finger operations, such as pinch, three-finger swipe, and rotate. An air gesture means that there is a certain distance between the user and the display screen of the electronic device 100; the sensors of the electronic device 100 (for example, a camera or a distance sensor) capture the shape of the user's hand, which is compared with preset gestures in a database, and the corresponding operation is then executed according to the matched preset gesture. It can be understood that a gesture is only a way of triggering a function, and the present application does not limit the gestures to be used.
In some possible implementations, a user is watching a video on a smart TV in the living room and needs to go to the balcony or another room to do something; the user does not want to interrupt the playback, that is, the user wants to transfer the video playing on the large screen to a mobile phone and continue watching. In some possible implementations, after finishing the task and returning to the living room, the user also hopes to transfer the video playing on the mobile phone back to the large screen to continue watching. Therefore, a method is needed that can conveniently switch the video being played between the first device and the second device, to meet the user's video playback needs at different times and in different scenarios.
In some possible implementations, the first device and the second device are capable of data interaction. The first device and the second device may be in the same local area network and exchange data through the local area network; the first device and the second device may use a point-to-point connection and exchange data directly through a P2P channel; or the first device and the second device may use cellular data and exchange data through a wide area network. The present application does not limit the manner of data interaction between the first device and the second device.
In some possible implementations, the projection of multimedia content (which may also be called streaming media) requires mutual authentication between the devices, so that the projection is performed in a trusted environment; for example, the first device and the second device are logged in to the same account, such as a Huawei account, an Apple ID, or a Samsung account. In other possible implementations, the projection of multimedia content requires the applications on the two devices to be logged in to the same account; for example, the first device and the second device have the same application, or different versions of the same application, installed, and both the first device and the second device are logged in within the application, so that the second device can obtain the specified multimedia content from the database according to the received identifier (for example, a link) of the multimedia content.
In some possible implementations, as shown in FIG. 3, the implementation includes step 301, step 302, and step 303. It can be understood that step 301, step 302, and step 303 are all optional steps, and the execution order of step 301, step 302, and step 303 can be adjusted.
The first device acquires sensor data (step 301). In some possible implementations, when the gesture is a touch screen gesture, the first device acquires the touch screen data reported by the display driver corresponding to the display screen 194. In other possible implementations, when the gesture is an air gesture, the first device may acquire image data through a camera, or may acquire millimeter wave data through a radar sensor. In other possible implementations, when the control instruction (screen projection instruction) is a voice instruction, the first device acquires the data collected by the microphone 170C.
The first device obtains an operation instruction according to the acquired sensor data (step 302). In some possible implementations, the first device stores a database of sensor data and operation instructions, compares the result obtained by analyzing the sensor data with the data in the database of operation instructions, and obtains the operation instruction. For example, when the first device detects two consecutive touches on the touch screen 194 within a short time, the first device determines, according to the database of operation instructions, that the operation is a double-click operation. For another example, when the first device finds, by analyzing multiple frames of images captured by the camera 193, that the user's hand moves from a first position close to the upper edge of the image to a second position close to the lower edge of the image, the first device determines, according to the database of operation instructions, that the operation is a bottom-to-top air gesture (operation).
In the embodiments of the present application, there are mainly two types of operation instructions related to this application. The first-type instruction is used to project multimedia content onto another device for playback, for example, to project the multimedia content played by the first device onto the second device for playback. The second-type instruction is used to play, on the first device, multimedia content that is being played on another device. The first-type instructions include, but are not limited to, voice instructions, touch screen gestures, and air gestures that instruct the multimedia content to be played on the second device; the second-type instructions include, but are not limited to, voice instructions, touch screen gestures, and air gestures that instruct the multimedia content to be played on the first device.
In some possible implementations, the first-type instruction and the second-type instruction may each be associated with multiple voice instructions, touch screen gestures, and air gestures; when any one of the associated voice instructions, touch screen gestures, or air gestures is captured, the corresponding operation can be executed.
In some possible implementations, in the scenario of interaction between a mobile phone and a large screen, the user often holds the mobile phone while facing the large screen. Therefore, the gesture associated with the first-type instruction may be an air gesture with the palm open moving from bottom to top, or a gesture of touching the touch screen of the mobile phone and sliding upward from the bottom of the screen; this gesture may be a multi-finger gesture different from the system preset gestures, for example, a three-finger slide-up or a four-finger slide-up, to prevent conflicts. The gesture associated with the second-type instruction may be an air gesture with the palm open moving from top to bottom, or a gesture of touching the touch screen of the mobile phone and sliding downward from the top of the screen; this gesture may be a multi-finger gesture different from the system preset gestures, for example, a three-finger slide-down or a four-finger slide-down.
When the obtained operation instruction is the first-type instruction, the first device plays the multimedia content on the second device (step 303). When the obtained operation instruction is the second-type instruction, the first device plays the multimedia content on the first device (step 304).
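As an illustration of steps 301 to 304, the following is a minimal sketch in Java; the types SensorSample, GestureDatabase, Projection, and ProjectionDispatcher are assumptions not defined in this application, and the sketch only shows how parsed sensor data could be routed to the two instruction types.

```java
// Hypothetical sketch of steps 301-304: acquire sensor data, classify it against a
// stored database of operation instructions, and route to the corresponding action.
import java.util.List;

enum InstructionType { FIRST_TYPE, SECOND_TYPE, UNKNOWN }

// Assumed placeholder for raw touch/camera/radar/microphone samples (step 301).
final class SensorSample {
    final String source;      // e.g. "touchscreen", "camera", "radar", "microphone"
    final double[] features;  // pre-processed feature vector extracted from the raw data
    SensorSample(String source, double[] features) { this.source = source; this.features = features; }
}

// Assumed database of preset gestures/voice commands and their instruction types (step 302).
interface GestureDatabase {
    InstructionType match(SensorSample sample);
}

interface Projection {
    void playOnSecondDevice();  // step 303: project the multimedia content to the second device
    void playOnFirstDevice();   // step 304: bring the multimedia content back to the first device
}

final class ProjectionDispatcher {
    private final GestureDatabase database;
    private final Projection projection;

    ProjectionDispatcher(GestureDatabase database, Projection projection) {
        this.database = database;
        this.projection = projection;
    }

    // Called whenever a batch of sensor data has been collected on the first device.
    void onSensorData(List<SensorSample> samples) {
        for (SensorSample sample : samples) {
            switch (database.match(sample)) {
                case FIRST_TYPE:  projection.playOnSecondDevice(); return;
                case SECOND_TYPE: projection.playOnFirstDevice();  return;
                default:          break;   // not a projection gesture, ignore
            }
        }
    }
}
```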
FIG. 4 shows an embodiment of the present application, which describes how the first device handles the multimedia content and screen mirroring after recognizing the first-type instruction and the second-type instruction.
The first device acquires sensor data (step 401), and the first device identifies the operation instruction as the first-type instruction according to the acquired sensor data (step 402). These steps have already been described in step 301 and step 302 and are not described in detail here.
In some possible implementations, the first device mirrors the content to the second device (step 403). After identifying the first-type instruction, the first device mirrors the displayed content to the second device; for example, the first device projects the content to the second device through the Miracast protocol.
In some possible implementations, the first device sends the service data to the second device (step 404), and the second device parses the service data and then plays the multimedia content. Since the projection of content (multimedia content) is an operation within an application, if the application provides an application program interface for screen projection, the first device, after identifying the first-type instruction, calls the application program interface provided by the application to realize the screen projection. For example, the first device identifies that the current foreground application is a video playback application and that the video playback application has an application program interface for screen projection; the first device calls the screen projection application program interface of the application, the application collects and returns the service data to the first device, the first device sends the service data to the second device, and the second device continues the playback of the multimedia content according to the received service data.
In some possible implementations, the service data includes one or more of the following items: the name of the multimedia content, for example, "Running" shown in FIG. 2A; the identifier of the multimedia content, for example, the identifier of "Running" in the video playback application is 12001; the uniform resource locator of the multimedia content, for example, the uniform resource locator corresponding to "Running" is www.video.com/12001; the playback progress of the multimedia content; the playback volume of the multimedia content; and the type of the multimedia content, for example, the type corresponding to "Running" is video, and the type corresponding to music is audio.
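For illustration only, the service data items listed above could be carried in a simple value object such as the following sketch; the class and field names are assumptions, while the example values ("Running", 12001, www.video.com/12001) come from this paragraph.

```java
// Illustrative value object for the service data sent from the first device to the
// second device; field names are assumptions, the example values come from the text above.
final class ServiceData {
    final String name;        // name of the multimedia content, e.g. "Running"
    final String contentId;   // identifier of the content inside the playback application
    final String url;         // uniform resource locator of the multimedia content
    final long positionMs;    // playback progress
    final int volume;         // playback volume
    final String type;        // content type, e.g. "video" or "audio"

    ServiceData(String name, String contentId, String url, long positionMs, int volume, String type) {
        this.name = name; this.contentId = contentId; this.url = url;
        this.positionMs = positionMs; this.volume = volume; this.type = type;
    }
}

class ServiceDataExample {
    public static void main(String[] args) {
        ServiceData running = new ServiceData(
                "Running", "12001", "www.video.com/12001",
                10 * 60 * 1000,  // resume at the 10-minute mark (illustrative value)
                50,              // volume level (illustrative value)
                "video");
        // The payload would then be serialized and sent over the established channel
        // (LAN, P2P, or wide area network) to the second device, which resumes playback.
        System.out.println(running.name + " -> " + running.url + " @ " + running.positionMs + " ms");
    }
}
```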
In some possible implementations, the first device sends the service data to the second device (step 404), and the second device parses the service data and then plays the multimedia content. When the first device identifies the operation instruction as the first-type instruction, it analyzes the controls of the current interface through image analysis, executes the built-in screen projection function of the application by simulating a user operation, and sends the service data to the second device, so that the multimedia content is played on the second device.
In some possible implementations, when the first device identifies the operation instruction as the first-type instruction, it first determines whether the foreground application has an application program interface for screen projection; if so, it calls the screen projection application program interface; if not, it projects the interface of the first device to the second device by screen mirroring.
In some possible implementations, when the first device identifies the operation instruction as the first-type instruction, it first determines whether the foreground application has an application program interface for screen projection. If there is such an application program interface, it calls the interface to perform the projection; if not, it determines whether the current interface includes a screen projection control. If there is a screen projection control, it simulates the user's operation and, by clicking the control, projects the multimedia content to the second device; if there is no screen projection control, it projects the interface of the first device to the second device by screen mirroring.
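One possible reading of the fallback order described in the preceding two paragraphs is sketched below; the interfaces ForegroundApp, LocalUi, and Channel and all method names are assumptions standing in for the application program interface check, the image-based control detection, and the mirroring path.

```java
// Assumed sketch of the fallback order described above: prefer the application's screen
// projection interface, then a simulated tap on an in-app projection control found by
// image analysis, and finally full screen mirroring (for example, Miracast) of the interface.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

interface ForegroundApp {
    boolean hasProjectionApi();
    String collectServiceData();     // serialized business data returned by the app's projection API
    Point findProjectionControl();   // image analysis of the current UI; null if no control is found
}

interface LocalUi {
    void simulateTap(Point p);       // simulated user operation on the in-app projection control
}

interface Channel {
    void sendServiceData(String data);   // DLNA-style projection of the content itself
    void startMirroring();               // mirror projection of the whole interface
}

final class FirstTypeHandler {
    void handle(ForegroundApp app, LocalUi ui, Channel toSecondDevice) {
        if (app.hasProjectionApi()) {
            toSecondDevice.sendServiceData(app.collectServiceData());
        } else {
            Point control = app.findProjectionControl();
            if (control != null) {
                ui.simulateTap(control);   // let the app's built-in projection function take over
            } else {
                toSecondDevice.startMirroring();
            }
        }
    }
}
```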
In some possible implementations, when the first device uses DLNA projection, even if the first device receives a user operation that moves the application on the first device to the background, the playback on the second device is not paused, because DLNA projection only needs to maintain a basic connection channel for transmitting the playback information.
In some possible implementations, when the first device uses mirror projection and the first device receives a user operation that moves the application to the background, the mirror projection will capture the application that is then in the foreground (for example, the desktop or another application), causing the second device to play content other than what was originally expected. In this case, the first device determines that a first application is mirroring to the second device. When executing the operation of leaving the first application, the first device may keep the first application (the video playback application, a part of it, or a part obtained by cropping the video playback application) in the form of a floating window, keep the life cycle of the floating window in the Resumed stage, and project the content of the floating window to the second device; alternatively, when executing the operation of leaving the first application, the first device may keep the video being played by the first application in picture-in-picture form, and project the picture-in-picture content to the second device.
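As a hedged sketch of the floating window/picture-in-picture behavior described above, the following Android-style example (API level 26 or later) enters picture-in-picture mode when the user leaves the playback activity while mirroring is active; the class name PlaybackActivity and the helper isMirroringToSecondDevice() are assumptions, and the actual implementation in this application may differ.

```java
// A hedged Android sketch: when the user leaves the app while mirror projection is
// active, keep the video in picture-in-picture so the mirrored capture does not fall
// back to the launcher or another foreground application.
import android.app.Activity;
import android.app.PictureInPictureParams;
import android.os.Bundle;
import android.util.Rational;

public class PlaybackActivity extends Activity {

    // Assumed helper tracking whether this activity is currently being mirrored.
    private boolean isMirroringToSecondDevice() { return true; }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // ... normal playback setup ...
    }

    @Override
    protected void onUserLeaveHint() {
        super.onUserLeaveHint();
        if (isMirroringToSecondDevice()) {
            // Keep the playing surface visible so the mirrored stream continues to show
            // the video instead of the home screen.
            PictureInPictureParams params = new PictureInPictureParams.Builder()
                    .setAspectRatio(new Rational(16, 9))
                    .build();
            enterPictureInPictureMode(params);
        }
    }
}
```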
In some possible implementations, when the first device uses DLNA projection, the first device can receive the user's control commands (for example, fast-forward, rewind, volume up, and volume down) and transmit these control commands to the second device through the connection channel between the first device and the second device, thereby controlling the playback of the multimedia content on the second device. At the same time, the second device may also accept the user's control commands (for example, fast-forward, rewind, volume up, and volume down) on the multimedia content.
In some possible implementations, when the first device uses mirror projection and the first device is currently playing the multimedia content, the first device can receive the user's control commands (for example, fast-forward, rewind, volume up, and volume down), apply the control commands to the multimedia content, and transmit the execution result to the second device, thereby changing the display result of the streaming media on the second device. In this case, the user cannot control the playback of the streaming media on the second device.
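A minimal sketch of the control-command routing described in the preceding two paragraphs, assuming hypothetical types (ProjectionMode, ControlCommand, LocalPlayer, ConnectionChannel): with DLNA-style projection the command is forwarded to the second device, while with mirror projection it is applied locally and the mirrored output reflects the change.

```java
// Assumed sketch: route a user control command either over the connection channel
// (DLNA projection) or to the local player (mirror projection).
enum ProjectionMode { DLNA, MIRROR }

enum ControlCommand { FAST_FORWARD, REWIND, VOLUME_UP, VOLUME_DOWN }

interface LocalPlayer { void apply(ControlCommand command); }

interface ConnectionChannel { void forward(ControlCommand command); }

final class ControlRouter {
    private final ProjectionMode mode;
    private final LocalPlayer player;
    private final ConnectionChannel channel;

    ControlRouter(ProjectionMode mode, LocalPlayer player, ConnectionChannel channel) {
        this.mode = mode; this.player = player; this.channel = channel;
    }

    void onUserControl(ControlCommand command) {
        if (mode == ProjectionMode.DLNA) {
            channel.forward(command);   // the second device changes its own playback
        } else {
            player.apply(command);      // the change is visible on both ends via the mirror
        }
    }
}
```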
In some possible implementations, when the second device has the capability of sensing user operations, for example, the second device has a camera or a radar that can capture user gestures, the second device may pre-store a gesture library related to playback control; after acquiring and recognizing a user gesture, the second device transmits the recognized control result to the first device, and the first device controls the playback of the streaming media according to the control result, thereby changing the display effect of the streaming media on the second device. In other implementations, the second device may transmit the captured video of the user gesture, or the processing result of the video, to the first device, and the first device executes the recognized command on the streaming media according to the stored playback-related gesture library, thereby changing the display effect of the streaming media on the second device.
Optionally, before performing gesture recognition, the second device determines whether multimedia content projection or mirror projection is currently being used; if it is multimedia content projection, gesture recognition is performed; if it is mirror projection, gesture recognition is not performed.
Optionally, before performing gesture recognition, the second device determines whether multimedia content projection or mirror projection is currently being used; if it is multimedia content projection, gesture recognition is performed; if it is mirror projection, the second device determines through image analysis whether a continuous media stream is currently being played; if so, gesture recognition is performed; if not, gesture recognition is not performed.
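The guard described in the preceding two paragraphs could look like the following sketch; ProjectionState and FrameAnalyzer are assumed helpers, not interfaces defined in this application.

```java
// Assumed sketch: the second device only runs gesture recognition when it is showing
// projected multimedia content, or when image analysis of a mirrored stream indicates
// that a continuous media stream is being shown.
interface ProjectionState {
    boolean isContentProjection();   // DLNA-style projection of multimedia content
    boolean isMirrorProjection();    // full screen mirroring
}

interface FrameAnalyzer {
    boolean looksLikeContinuousMediaStream();   // image analysis of the mirrored frames
}

final class SecondDeviceGestureGate {
    boolean shouldRunGestureRecognition(ProjectionState state, FrameAnalyzer analyzer) {
        if (state.isContentProjection()) {
            return true;
        }
        if (state.isMirrorProjection()) {
            return analyzer.looksLikeContinuousMediaStream();
        }
        return false;
    }
}
```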
The first device acquires sensor data (step 401), and the first device identifies the operation instruction as the second-type instruction according to the acquired sensor data (step 405). These steps have already been described in step 301 and step 302 and are not described in detail here.
In some possible implementations, the playback device of the multimedia content is the second device. When the first device identifies the operation instruction as the second-type instruction, it sends a message for acquiring the multimedia content to the second device. After receiving the message, the second device may directly mirror the content being played to the first device (step 406). Alternatively, after receiving the message, the second device may determine whether the interface currently playing the multimedia content provides an application program interface for screen projection; if there is such an application program interface, the second device calls the application program interface to obtain the service data and sends the service data to the first device (step 407); if not, the second device mirrors the content being played to the first device.
In some possible implementations, the second device keeps playing the streaming media after completing the mirror projection or streaming media projection; in other possible implementations, the second device continues playing the streaming media in the form of a floating window or picture-in-picture after completing the mirror projection; in still other possible implementations, the second device returns to the home screen or plays other multimedia content after completing the streaming media projection.
In some possible implementations, the playback device of the multimedia content is the first device. When the first device identifies the operation instruction as the second-type instruction and determines that the first device is currently mirroring to the second device, the first device stops mirroring to the second device (step 408). When the first device identifies the operation instruction as the second-type instruction and determines that projection through service data is currently in progress, it sends a message for terminating the playback of the multimedia content to the second device, and the second device pauses or terminates the playback of the multimedia content after receiving the message. When the first device identifies the operation instruction as the second-type instruction and determines that there is a record of projecting to the second device within a certain period of time, it sends a message for terminating the playback to the second device (step 409), and the second device pauses or terminates the playback of the multimedia content after receiving the message.
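A minimal sketch of the second-type instruction handling in steps 405 to 409, assuming hypothetical helpers (FirstDeviceState, SecondDeviceLink): depending on where the content currently plays, the first device either asks the second device for a handover or tears down its own ongoing projection.

```java
// Assumed sketch of steps 405-409: the first device either requests the content back
// from the second device, or stops its own mirroring / asks the second device to stop.
interface FirstDeviceState {
    boolean isMirroringToSecondDevice();
    boolean isProjectingServiceData();
    boolean hasRecentProjectionRecord();   // e.g. projected to the second device recently
}

interface SecondDeviceLink {
    void requestContentHandover();    // steps 405-407: ask for mirror-back or service data
    void sendStopPlaybackMessage();   // step 409: pause or terminate playback on the second device
}

final class SecondTypeHandler {
    void handle(FirstDeviceState state, SecondDeviceLink link) {
        if (state.isMirroringToSecondDevice()) {
            stopMirroring();                     // step 408
        } else if (state.isProjectingServiceData() || state.hasRecentProjectionRecord()) {
            link.sendStopPlaybackMessage();      // step 409
        } else {
            link.requestContentHandover();       // the content currently lives on the second device
        }
    }

    private void stopMirroring() { /* tear down the mirroring (e.g. Miracast) session */ }
}
```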
In some possible implementations, when the first device mirrors to the second device, or when the second device mirrors to the first device, the first device or the second device evaluates the current network quality; if the network quality is better than a threshold, mirror projection is performed at a first resolution, and if the network quality is below the threshold, mirror projection is performed at a second resolution, where the first resolution is higher than the second resolution.
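As a simple illustration of the resolution selection described above, the following sketch compares an assumed normalized network-quality value against a threshold; the threshold and the two resolutions are illustrative values, not values specified in this application.

```java
// Assumed sketch of the adaptive mirroring resolution: higher resolution when the
// network quality exceeds the threshold, lower resolution otherwise.
final class MirrorResolutionPolicy {
    private static final double QUALITY_THRESHOLD = 0.7;        // assumed normalized link quality
    private static final int[] HIGH_RESOLUTION = {1920, 1080};  // first (higher) resolution
    private static final int[] LOW_RESOLUTION  = {1280, 720};   // second (lower) resolution

    int[] chooseResolution(double networkQuality) {
        return networkQuality > QUALITY_THRESHOLD ? HIGH_RESOLUTION : LOW_RESOLUTION;
    }
}
```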
In some possible implementations, the migrated media stream may be an audio stream in addition to a video stream. When the migrated object is audio, the first device does not need to display the interface of the first application in a floating window or display the multimedia content in picture-in-picture form. When the first application has background playback permission, no extra operation is required; when the first application does not have background playback permission, the life cycle of the first application is maintained when the first application is moved to the background.
In some possible implementations, the first device and the second device in the above solutions may be used in audio call and video call scenarios. When the subjects of a video call are the first device (a mobile phone) and a peer device (the device at the other end of the video call), the first device may migrate the video call to the second device (a large screen) after receiving a gesture and identifying it as the first-type instruction; when the subject devices of the video call are the second device and the peer device, the first device may migrate the video call to the first device after receiving a gesture and identifying it as the second-type instruction. The above solutions enable video calling even when the first device or the second device does not have audio call or video call software installed, and enable migration of the subject of the audio call or video call.
In some possible embodiments, when the first device/second device and the peer device are in a VoIP call and the first device receives a gesture and parses it into a first-type instruction, the first device may optionally call the VoIP number of the second device to pull the second device into the call session; the session then includes the first device, the second device and the peer device. Optionally, after confirming that the second device is in the session, the first device exits the session, completing the transition from the first device to the second device. In some possible embodiments, the first device instead remains in the session in a muted state, so that it can respond quickly if the user needs to switch the call endpoint back. In some possible embodiments, the first device remains in the session in a muted state and counts the time it has dwelled in the session; when the dwell time exceeds a first time threshold, the first device exits the session. This allows the user to switch the call endpoint quickly while also reducing the power consumption of the first device when the user no longer needs it.
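For illustration only, the following sketch shows the session hand-over with the optional mute-and-dwell behavior. The session interface (invite, wait_until_joined, mute, leave) and the dwell threshold are assumptions.

```python
# Sketch of the VoIP hand-over from the first device to the second device.
import time

FIRST_TIME_THRESHOLD_S = 60  # assumed dwell threshold before the first device leaves

def migrate_voip_call(session, first_device, second_device, keep_muted=True):
    session.invite(second_device.voip_number)   # pull the second device into the session
    session.wait_until_joined(second_device)    # confirm it has joined
    if not keep_muted:
        session.leave(first_device)             # plain hand-over
        return
    session.mute(first_device)                  # stay muted for a quick switch-back
    joined_at = time.monotonic()
    while time.monotonic() - joined_at < FIRST_TIME_THRESHOLD_S:
        time.sleep(1)
    session.leave(first_device)                 # save power once the dwell expires
```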
In some possible embodiments, when the first device/second device and the peer device are in a VoIP call and the first device receives a gesture and parses it into a second-type instruction, the first device optionally sends a call request to the second device; the call request optionally includes a field or information identifying the first device, such as a VoIP number or a UUID. After receiving the call request, the second device pulls the first device into the call session; the session then includes the first device, the second device and the peer device. Optionally, after confirming that the first device is in the session, the second device exits the session, completing the transition from the second device to the first device.
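For illustration only, the following sketch shows the reverse hand-over triggered by a call request carrying identifying fields. The message format, the send_message helper and the session interface are assumptions.

```python
# Sketch of the hand-over back to the first device after a second-type instruction.
import uuid

def request_call_back(first_device, second_device):
    # The first device identifies itself to the second device.
    first_device.send_message(second_device, {
        "type": "call_request",
        "voip_number": first_device.voip_number,
        "uuid": str(uuid.uuid4()),
    })

def on_call_request(session, second_device, request):
    session.invite(request["voip_number"])            # pull the first device into the session
    session.wait_until_joined(request["voip_number"]) # confirm it has joined
    session.leave(second_device)                      # complete the transition to the first device
```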
In some possible embodiments, when the first device/second device and the peer device use an account-based call, the above method may also be applied; details are not repeated in this application.
In some possible embodiments, the first device optionally projects the video to the second device by means of screen projection; in this case the video input source of the first device is still the picture captured by the camera of the first device. In this implementation, when the second device is far from the user and the first device is close to the user, a close-range image of the user can still be captured.
In some possible embodiments, a video transmission channel is established between the first device and the second device. When a gesture is received and parsed into a first-type instruction, the second device optionally turns on its camera and transmits the captured picture to the first device, and the first device forwards the received video stream to the peer device as its captured picture.
In some possible embodiments, an audio transmission channel is established between the first device and the second device. When a gesture is received and parsed into a first-type instruction, the second device optionally turns on its microphone to capture sound and transmits the captured sound to the first device, and the first device forwards the received audio stream to the peer device as its captured audio. In this way the user enjoys a better migration experience: the video call can be completed without picking up the second device.
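For illustration only, the following sketch shows the relay described in this and the preceding paragraph: the second device captures video frames and audio samples and sends them to the first device, which forwards both to the peer device as its own capture. The channel objects and their recv/send methods are assumptions.

```python
# Sketch of relaying the second device's camera and microphone capture to the peer.
def relay_capture(video_channel, audio_channel, peer_link):
    while True:
        frame = video_channel.recv_frame()    # frame captured by the second device's camera
        if frame is not None:
            peer_link.send_video(frame)       # forwarded as the first device's capture
        samples = audio_channel.recv_audio()  # sound captured by the second device's microphone
        if samples is not None:
            peer_link.send_audio(samples)
```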
In some possible embodiments, when the first device and the second device both have VoIP-call or account-call capability, and a gesture is received and parsed into a first-type instruction, screen mirroring is used at first; when a first time threshold is reached, the call endpoint is switched in the manner described above.
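For illustration only, the following sketch shows the two-stage hand-over: mirroring first, then switching the call endpoint when the first time threshold is reached. The helpers start_mirroring, stop_mirroring and switch_call_endpoint, and the threshold value, are assumptions.

```python
# Sketch of mirroring immediately and switching the call endpoint after a delay.
import threading

FIRST_TIME_THRESHOLD_S = 60  # assumed "first time threshold"

def staged_call_handover(session, first_device, second_device):
    first_device.start_mirroring(second_device)  # stage 1: instant feedback via mirroring

    def switch_endpoint():
        first_device.switch_call_endpoint(session, second_device)  # stage 2: real hand-over
        first_device.stop_mirroring(second_device)

    threading.Timer(FIRST_TIME_THRESHOLD_S, switch_endpoint).start()
```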
In some possible embodiments, when the parties to the call are the second device and the peer device, and a gesture is received and parsed into a second-type instruction, the call endpoint can be switched from the second device to the first device; this is not discussed further here.
It can be understood that the video call in the above method may also be an audio call, which is not limited in this application.
In some possible embodiments, the solution of this application may also be used for audio projection, for example projecting audio played on the first device (music, FM, etc.) to a speaker. This may also optionally be triggered by a directional gesture (for example, a directional gesture on the screen or a directional air gesture).
This application provides an electronic device, including a memory and one or more processors, where the memory is configured to store computer program code, and the computer program code includes computer instructions; when the computer instructions are executed by the processor, the electronic device is caused to perform the screen projection method.
This application provides a computer-readable storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to perform the screen projection method.
This application provides a computer program product, which, when run on a computer, causes the computer to perform the screen projection method.
Those skilled in the art should appreciate that, in one or more of the above examples, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on, or transmitted as one or more instructions or code on, a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
The above content is merely specific embodiments of this application, but the protection scope of this application is not limited thereto. Any variation or replacement that can readily be conceived by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (14)

  1. A screen projection method, applied to a first device, wherein the method comprises:
    obtaining sensor data;
    when the sensor data is parsed into a first-type instruction, performing at least one of the following operations: mirroring display content to a second device, and sending service data to the second device;
    when the sensor data is parsed into a second-type instruction, performing at least one of the following operations: mirroring the display content to the first device; sending the service data to the first device; stopping mirroring the display content to the second device, and sending a control instruction to the second device.
  2. The method according to claim 1, wherein the service data comprises at least one of the following items: a name of multimedia content, an identifier of the multimedia content, a uniform resource locator of the multimedia content, a playback progress of the multimedia content, a playback volume of the multimedia content, and a type of the multimedia content.
  3. The method according to claim 1 or 2, wherein the sending service data to the second device comprises:
    when a foreground application is a video playback application, invoking an application programming interface of the video playback application, obtaining the service data, and sending the service data to the second device, wherein the second device continues playing the multimedia content according to the service data.
  4. The method according to claim 1 or 2, wherein the mirroring the display content to the second device further comprises:
    mirroring the display content to the second device using the Miracast protocol.
  5. The method according to any one of claims 1 to 4, after the mirroring the display content to the second device, further comprising:
    displaying a foreground application associated with the display content in a floating-window or picture-in-picture manner.
  6. The method according to any one of claims 1 to 5, wherein a gesture associated with the first-type instruction is a bottom-to-top air gesture, a three-finger swipe down, or a four-finger swipe down.
  7. The method according to any one of claims 1 to 5, wherein a gesture associated with the second-type instruction is a top-to-bottom air gesture, a three-finger swipe up, or a four-finger swipe up.
  8. The method according to any one of claims 1 to 7, wherein the mirroring the display content to the second device comprises:
    analyzing, by the first device, a current interface through image analysis to obtain a position of a screen projection control, and executing a built-in screen projection function of an application by simulating a user operation, to mirror the display content to the second device.
  9. The method according to any one of claims 1 to 7, wherein the sending service data to the second device comprises:
    analyzing, by the first device, a current interface through image analysis to obtain a position of a screen projection control, and executing a built-in screen projection function of an application by simulating a user operation, to send the service data to the second device.
  10. The method according to any one of claims 1 to 9, wherein the first device or the second device stores a database of operation instructions, and the parsing the sensor data comprises:
    comparing the sensor data, or a result obtained after processing the sensor data, with data in the database, to determine whether the sensor data corresponds to a first-type instruction or a second-type instruction.
  11. The method according to any one of claims 1 to 10, wherein the first device is a mobile phone and the second device is a large screen.
  12. An electronic device, wherein the electronic device comprises a memory and one or more processors; the memory is configured to store computer program code, and the computer program code comprises computer instructions; when the computer instructions are executed by the processor, the electronic device is caused to perform the screen projection method performed by the first device or the second device according to any one of claims 1 to 11.
  13. A computer-readable storage medium, comprising computer instructions, wherein when the computer instructions are run on an electronic device, the electronic device is caused to perform the screen projection method performed by the first device or the second device according to any one of claims 1 to 11.
  14. A computer program product, wherein when the computer program product is run on a computer, the computer is caused to perform the screen projection method performed by the first device or the second device according to any one of claims 1 to 11.
PCT/CN2022/073202 2021-02-08 2022-01-21 Screen projection method and electronic device WO2022166618A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110171010 2021-02-08
CN202110171010.1 2021-02-08
CN202110584296.6A CN114915834A (en) 2021-02-08 2021-05-27 Screen projection method and electronic equipment
CN202110584296.6 2021-05-27

Publications (1)

Publication Number Publication Date
WO2022166618A1 true WO2022166618A1 (en) 2022-08-11

Family

ID=82741936

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/073202 WO2022166618A1 (en) 2021-02-08 2022-01-21 Screen projection method and electronic device

Country Status (1)

Country Link
WO (1) WO2022166618A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102445985A (en) * 2010-11-26 2012-05-09 深圳市同洲电子股份有限公司 Digital television receiving terminal and mobile terminal interaction method, device and system
US20170085960A1 (en) * 2014-05-30 2017-03-23 Tencent Technology (Shenzhen) Company Limited Video-based interaction method, terminal, server and system
CN107659712A (en) * 2017-09-01 2018-02-02 咪咕视讯科技有限公司 A kind of method, apparatus and storage medium for throwing screen
CN111327769A (en) * 2020-02-25 2020-06-23 北京小米移动软件有限公司 Multi-screen interaction method and device and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115484484A (en) * 2022-08-30 2022-12-16 深圳市思为软件技术有限公司 Screen projection control method and device for intelligent equipment, electronic equipment and storage medium
CN115484484B (en) * 2022-08-30 2024-05-14 深圳市思为软件技术有限公司 Intelligent device screen-throwing control method and device, electronic device and storage medium
CN115802083A (en) * 2022-11-22 2023-03-14 深圳创维-Rgb电子有限公司 Control method, control device, split television and readable storage medium
CN117119615A (en) * 2023-10-25 2023-11-24 吉林藤兴科技有限公司 Screen throwing method for intelligent mobile terminal to control large screen in vehicle

Similar Documents

Publication Publication Date Title
WO2020098437A1 (en) Method for playing multimedia data and electronic device
WO2021078284A1 (en) Content continuation method and electronic device
WO2020014880A1 (en) Multi-screen interaction method and device
WO2020216156A1 (en) Screen projection method and computing device
WO2022166618A1 (en) Screen projection method and electronic device
WO2021052214A1 (en) Hand gesture interaction method and apparatus, and terminal device
WO2020224449A1 (en) Split-screen display operation method and electronic device
WO2022257977A1 (en) Screen projection method for electronic device, and electronic device
CN112394895B (en) Picture cross-device display method and device and electronic device
WO2022100304A1 (en) Method and apparatus for transferring application content across devices, and electronic device
WO2020228645A1 (en) Method for performing playback of audio and video data, terminal, and device
WO2020173370A1 (en) Method for moving application icons, and electronic device
WO2020143380A1 (en) Data transmission method and electronic device
WO2021185244A1 (en) Device interaction method and electronic device
WO2021258809A1 (en) Data synchronization method, electronic device, and computer readable storage medium
WO2022048474A1 (en) Method for multiple applications to share camera, and electronic device
WO2021047567A1 (en) Callback stream processing method and device
WO2022100610A1 (en) Screen projection method and apparatus, and electronic device and computer-readable storage medium
WO2022052791A1 (en) Method for playing multimedia stream and electronic device
WO2022121775A1 (en) Screen projection method, and device
WO2022007944A1 (en) Device control method, and related apparatus
JP2023528384A (en) CONTENT SHARING METHOD, APPARATUS AND SYSTEM
WO2022042769A2 (en) Multi-screen interaction system and method, apparatus, and medium
WO2022007678A1 (en) Method for opening file, and device
WO2022028537A1 (en) Device recognition method and related apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22748897

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22748897

Country of ref document: EP

Kind code of ref document: A1