WO2023169237A1 - Screenshot method, electronic device, and system - Google Patents

Screenshot method, electronic device, and system

Info

Publication number
WO2023169237A1
Authority
WO
WIPO (PCT)
Prior art keywords
screen
electronic device
image
projection
screenshot
Application number
PCT/CN2023/078287
Other languages
English (en)
Chinese (zh)
Inventor
颜航天
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023169237A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units

Definitions

  • Embodiments of the present application relate to wireless display technology, and in particular, to a screen capture method, electronic device and system.
  • during screen projection, the screen-projected device can display multiple projection windows to present the resources shared by the screen-casting device.
  • however, the existing screenshot function of the screen-casting device is inconvenient to operate from the screen-projected device and can only capture the content of one projection window, while the existing screenshot function of the screen-projected device cannot accurately capture one or more projection windows.
  • the computer displays the interface shared by the mobile phone through multiple screen projection windows.
  • the existing screenshot function of the mobile phone is to pull down the selection bar and tap the screenshot control.
  • the existing screenshot function of the computer is to manually capture a screen projection window with a screenshot tool; when the user manually captures a screen projection window on the computer, the window cannot be captured accurately, and it is difficult to capture multiple screen projection windows at the same time.
  • This application provides a screen capture method, electronic device, and system.
  • the screen capture method can respond to user operations, and can conveniently obtain the display content of multiple screen projection windows during the screen projection process, thereby improving the user's experience.
  • embodiments of the present application provide a method for taking screenshots, which is applied to a screen-projected device.
  • the method includes:
  • the screen-projected device establishes a communication connection with the screen-casting device;
  • the screen-projected device receives screen-casting data from the screen-casting device;
  • based on the screen-casting data, the screen-projected device displays the projection image through N projection windows, where N is a positive integer not less than 2;
  • the screen-projected device acquires a screenshot image in response to a user operation, where the screenshot image includes the projection images displayed by at least two of the N projection windows.
  • in the above method, when the screen-projected device displays projection images through multiple projection windows, the screen-projected device may obtain a screenshot image in response to a user operation, where the screenshot image includes the projection images displayed by at least two projection windows.
  • the screenshot image may be obtained by the screen-projected device from the screen-casting data, or may be obtained after the screen-projected device sends an acquisition request to the screen-casting device, in which case the screen-casting device generates the screenshot image and sends it to the screen-projected device.
  • the screen-projected device can also display or save the screenshot image.
  • in this way, the user can obtain the projection images of multiple projection windows at the same time through a user operation, which avoids the complicated manual screenshot operations otherwise needed when capturing the content of multiple projection windows.
  • the user can conveniently obtain the display content of multiple projection windows, which improves the user experience.
  • the screenshot image is an image synthesized from screen projection images displayed by at least two screen projection windows among the N screen projection windows.
  • the screenshot image obtained by the screen projection device may be synthesized from the screen projection images displayed by at least two screen projection windows.
  • in this way, the user does not need to manually splice the projection windows or the projection images.
  • the screen-projected device directly obtains the synthesized screenshot image in response to the user's operation, which avoids synthesis problems caused by manual operations, such as uneven splicing, and achieves accurate image synthesis, bringing convenience to users.
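  • As an illustrative sketch only (not the claimed implementation), the pixel-accurate synthesis described above can be pictured with the Pillow library in Python as follows; the image file names and the side-by-side layout are assumptions:

        from PIL import Image

        def synthesize_screenshot(projection_images):
            """Stitch the projection images of the selected windows side by side."""
            total_width = sum(img.width for img in projection_images)
            max_height = max(img.height for img in projection_images)
            screenshot = Image.new("RGB", (total_width, max_height), "white")
            x = 0
            for img in projection_images:
                screenshot.paste(img, (x, 0))  # pixel-accurate placement, no manual alignment
                x += img.width
            return screenshot

        # hypothetical projection images of two selected windows
        windows = [Image.open("main_window.png"), Image.open("sub_window.png")]
        synthesize_screenshot(windows).save("screenshot.png")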
  • obtaining a screenshot includes:
  • the screen-projected device sends a synthesis request to the screen-casting device, where the synthesis request is used to request generation of a screenshot image based on the projection images displayed in at least two projection windows;
  • the screen-projected device receives the screenshot image from the screen-casting device.
  • in this implementation, the step of generating the screenshot image is performed by the screen-casting device.
  • the screen-casting device generates the screenshot image based on the projection images displayed in at least two projection windows. Since the screen-casting device is the user's own device, its synthesis capability is guaranteed to a certain extent, whereas the screen-projected device often varies; this method can avoid synthesis problems caused by the limited synthesis capability of the screen-projected device.
  • obtaining a screenshot includes:
  • the screen-projected device obtains the projection images displayed in at least two projection windows;
  • the screen-projected device synthesizes the projection images displayed in at least two projection windows into a screenshot image.
  • the user operation includes a first user operation and a second user operation
  • before the screen-projected device acquires the screenshot image in response to the user operation, the method further includes:
  • the screen-projected device displays the screenshot status of the N projection windows in response to the first user operation;
  • the screen-projected device displays at least two projection windows as selected in response to the second user operation;
  • the screenshot image includes the projection images displayed by the at least two projection windows when the first user operation is detected.
  • in this implementation, by displaying the screenshot status of the N projection windows, the screen-projected device provides the function of selecting projection windows, and the projection images to be obtained can then be determined based on the user's selection operations. Through this method, the user can select some of the multiple projection windows to take a screenshot of.
  • the method also includes:
  • the screen-projected device determines an arrangement rule for the projection images displayed by the at least two projection windows in the screenshot image based on the respective times at which the second user operation on each of the at least two projection windows is detected.
  • in this implementation, the screen-projected device can determine the position of each projection image in the screenshot image based on the user's selection operations on the projection windows, which provides the user with a convenient synthesis method.
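  • A minimal sketch of such an arrangement rule, assuming each selected window records the timestamp at which the second user operation on it was detected (the field names below are hypothetical):

        from dataclasses import dataclass

        @dataclass
        class SelectedWindow:
            window_id: int
            selected_at: float  # time at which the second user operation was detected

        def arrangement_order(selected_windows):
            """Windows selected earlier are placed earlier (e.g. further left) in the screenshot."""
            return sorted(selected_windows, key=lambda w: w.selected_at)

        order = arrangement_order([
            SelectedWindow(window_id=2, selected_at=12.8),
            SelectedWindow(window_id=1, selected_at=11.5),
        ])
        print([w.window_id for w in order])  # [1, 2]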
  • the method also includes:
  • the screen-projected device adjusts the display of the N projection windows based on preset rules;
  • the preset rules include the preset positions and preset sizes of the N projection windows.
  • in this implementation, the screen-projected device can respond to the screenshot user operation by arranging all the windows on its display screen, so as to avoid windows obscuring one another and help the user clearly see the display content of each projection window.
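  • As one hedged illustration of such a preset rule, the windows could be laid out in a non-overlapping grid; the grid shape below is an assumption, not taken from the embodiments:

        import math

        def preset_layout(n, screen_w, screen_h):
            """Return non-overlapping (x, y, w, h) rectangles for n projection windows."""
            cols = math.ceil(math.sqrt(n))  # assumed near-square grid
            rows = math.ceil(n / cols)
            w, h = screen_w // cols, screen_h // rows
            return [((i % cols) * w, (i // cols) * h, w, h) for i in range(n)]

        for rect in preset_layout(3, screen_w=1920, screen_h=1080):
            print(rect)  # (0, 0, 960, 540), (960, 0, 960, 540), (0, 540, 960, 540)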
  • the method further includes:
  • the screen-projected device displays the screenshot image.
  • in this implementation, the screen-projected device may display the screenshot image for the user to preview after acquiring it.
  • the screenshot image includes an image obtained by splicing the projection images displayed in the N projection windows. After the screen-projected device displays the screenshot image, the method further includes:
  • the screen-projected device deletes at least one projection image from the screenshot image in response to a deletion operation input for that projection image;
  • the screen-projected device displays the screenshot image after the deletion.
  • in this implementation, the screen-projected device can first generate a screenshot image from the projection images of all the projection windows and display it, and then determine the projection images to keep based on the user's selections among them. This method provides the user with the ability to select the screenshot content, as sketched below.
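  • A sketch of this delete-and-redisplay step, reusing the synthesize_screenshot() helper from the earlier sketch (the window identifiers are hypothetical):

        def delete_and_resplice(images_by_window, deleted_window_ids):
            """Drop the deleted projection images, then re-synthesize the screenshot."""
            kept = [img for wid, img in images_by_window.items()
                    if wid not in deleted_window_ids]
            return synthesize_screenshot(kept)  # helper defined in the earlier sketch

        # e.g. a screenshot spliced from windows 1..3, with window 2 deleted by the user:
        # new_screenshot = delete_and_resplice({1: img1, 2: img2, 3: img3}, {2})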
  • the method further includes:
  • the screen-projected device displays a save control for the screenshot image;
  • the screen-projected device sends a save request to the screen-casting device in response to the user operation input on the save control;
  • the save request is used to request the screen-casting device to save the screenshot image.
  • in this implementation, through user operations on the screen-projected device, the user can save the screenshot image to the screen-casting device.
  • the user does not need to operate the screen-casting device directly and can control it solely through user operations on the screen-projected device.
  • the method further includes:
  • the screen-projected device displays editing controls for the screenshot image;
  • the screen-projected device processes the screenshot image in response to the user operation input on the editing controls; or, the screen-projected device sends an editing request to the screen-casting device in response to that user operation, where the editing request is used to request processing of the screenshot image.
  • the method also includes:
  • before acquiring the screenshot image, the screen-projected device identifies the content of the projection images of the at least two projection windows;
  • the screen-projected device displays a prompt message when it recognizes that the projection images of the at least two projection windows contain content that does not meet the preset requirements.
  • in this implementation, the screen-projected device can identify the displayed content and display prompt information for any projection window that does not meet the preset requirements.
  • the prompt message may prohibit the screenshot or ask the user to confirm again whether to take the screenshot, thereby protecting sensitive information such as passwords on payment pages or personal information on login pages.
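  • The embodiments do not specify a recognition algorithm; as one hedged possibility, a simple keyword check over text recognized in each projection image (e.g. by OCR, not shown) might look like this:

        SENSITIVE_KEYWORDS = {"password", "payment", "card number", "login"}  # assumed preset requirements

        def flag_sensitive_windows(recognized_texts):
            """Return the windows whose recognized content does not meet the preset requirements."""
            return [window_id
                    for window_id, text in recognized_texts.items()
                    if any(kw in text.lower() for kw in SENSITIVE_KEYWORDS)]

        flagged = flag_sensitive_windows({1: "Enter your payment password", 2: "Weather: sunny"})
        if flagged:
            print(f"Prompt: windows {flagged} may contain sensitive content; confirm the screenshot?")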
  • embodiments of the present application provide a screenshot method, the method including:
  • the screen-casting device establishes a communication connection with the screen-projected device;
  • the screen-casting device sends screen-casting data to the screen-projected device;
  • the screen-casting data is used by the screen-projected device to display the projection image through N projection windows, where N is a positive integer not less than 2;
  • the screen-casting device receives a synthesis request sent by the screen-projected device;
  • the screen-casting device, in response to the synthesis request, determines the screenshot image based on the projection images displayed by at least two of the N projection windows;
  • the screen-casting device sends the screenshot image to the screen-projected device.
  • the synthesis request is sent by the screen-projected device to the screen-casting device in response to a user operation;
  • the screenshot image is an image synthesized from the projection images displayed by at least two projection windows when the screen-projected device detects the user operation.
  • the synthesis request includes indication information indicating the arrangement rules of the screen projection images displayed by at least two screen projection windows in the screenshot image, and the method further includes:
  • the screen-casting device generates the screenshot image based on the arrangement rules.
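  • One hedged way to picture a synthesis request carrying this indication information (the message format and field names are illustrative, not defined by the embodiments):

        import json

        def build_synthesis_request(window_ids_in_order):
            """Ask the screen-casting device to synthesize a screenshot in the given order."""
            return json.dumps({
                "type": "synthesis_request",
                "windows": window_ids_in_order,  # arrangement rule: splice in list order
                "layout": "horizontal",          # assumed layout hint
            })

        print(build_synthesis_request([1, 3]))
        # {"type": "synthesis_request", "windows": [1, 3], "layout": "horizontal"}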
  • the method also includes:
  • the screen-casting device receives a save request from the screen-projected device;
  • the screen-casting device saves the screenshot image in response to the save request.
  • the method also includes:
  • the screen-casting device receives an editing request sent by the screen-projected device;
  • the screen-casting device processes the screenshot image in response to the editing request.
  • embodiments of the present application provide an electronic device, including one or more functional modules.
  • the one or more functional modules can be used to perform the screenshot method in any of the possible implementations of any of the above aspects.
  • the present application provides a computer storage medium that includes computer instructions.
  • when the computer instructions are run on a communication device, the communication device is caused to perform the screenshot method in any of the possible implementations of any of the above aspects.
  • this application provides a computer program product.
  • when the computer program product is run on a computer, the computer is caused to execute the screen capture method in any of the possible implementations of any of the above aspects.
  • the present application provides a chip, including: a processor and an interface.
  • the processor and the interface cooperate with each other so that the chip executes the screenshot method in any of the possible implementations of any of the above aspects.
  • the electronic device provided by the third aspect, the computer-readable storage medium provided by the fourth aspect, the computer program product provided by the fifth aspect, and the chip provided by the sixth aspect are all used to execute the methods provided by the embodiments of the present application. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods, which are not described again here.
  • Figure 1 is a schematic diagram of a first electronic device and a second electronic device displaying image resources during a screen projection process provided by an embodiment of the present application;
  • Figure 2 is a schematic diagram of an existing screen capture technique in the screen casting process provided by an embodiment of the present application;
  • Figure 3 is a schematic diagram of a screenshot system provided by an embodiment of the present application.
  • Figure 4A is a schematic diagram of the hardware structure of an electronic device 100 provided by an embodiment of the present application.
  • Figure 4B is a software structure block diagram of a first electronic device 101 provided by an embodiment of the present application.
  • Figure 4C is a schematic diagram of the hardware structure of a second electronic device 102 provided by an embodiment of the present application.
  • Figure 5 is a schematic flowchart of a screenshot method provided by an embodiment of the present application.
  • Figure 6 is a schematic flowchart of another screenshot method provided by an embodiment of the present application.
  • Figure 7 is a schematic flowchart of another screenshot method provided by an embodiment of the present application.
  • Figure 8 is a schematic diagram of a screen projection window provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of a screenshot state of a screen projection window provided by an embodiment of the present application.
  • Figure 10 is a schematic diagram of adjusting the screen projection window provided by an embodiment of the present application.
  • Figure 11 is a schematic diagram of determining a target screen projection window provided by an embodiment of the present application.
  • Figure 12 is a schematic diagram of a screenshot provided by an embodiment of the present application.
  • Figure 13 is a schematic diagram of a preview window provided by an embodiment of the present application.
  • Figures 14A to 14D exemplarily illustrate the user interface during the screenshot process of the first electronic device.
  • The terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of this application, unless otherwise specified, "plurality" means two or more.
  • the screencasting technology includes heterogeneous screencasting technology and homologous screencasting technology.
  • the electronic device that sends screencast content to other devices is called a first electronic device, and the device that receives and displays screencast content is called a second electronic device.
  • the first electronic device is also called the screen-casting device, and the second electronic device is also called the screen-projected device.
  • Figure 1 exemplarily shows a schematic diagram of the first electronic device and the second electronic device displaying image resources during the screen casting process.
  • the second electronic device can display the screencast content sent by the first electronic device through a screencast window.
  • the first electronic device can project its display screen to the second electronic device, and there will be a screen projection window on the second electronic device to display the screen projection content of the first electronic device.
  • when the display content of the first electronic device changes, the projection image of the projection window on the second electronic device will also change accordingly.
  • the picture on the first electronic device will also change and refresh synchronously with the projection window of the second electronic device. This is homologous projection.
  • homologous screencasting means that the display content of the screencasting window on the screen-projected device (i.e., the second electronic device) is always consistent with the screen content displayed on the screen-casting device (i.e., the first electronic device) and changes synchronously with it.
  • in heterogeneous screencasting, the display content of the first electronic device and the display content of the screen projection window on the second electronic device are inconsistent and independent of each other.
  • when the display content of the first electronic device is refreshed, it does not trigger a refresh of the content of the screen projection window on the second electronic device;
  • likewise, when the content of the screen projection window on the second electronic device is triggered to refresh, the content displayed on the first electronic device does not change. This is heterogeneous screen projection.
  • in some embodiments, the second electronic device can simultaneously display, through multiple screen projection windows, the screencast content sent by the first electronic device, where the screen projection windows may include a heterogeneous screen projection window and a homologous screen projection window.
  • the homologous screen projection window can be called the main window, and a heterogeneous screen projection window can be called a sub-window. It should be noted that, under normal circumstances, the screen projection windows include only one main window, while the number of sub-windows is not limited.
  • three screen projection windows are displayed on the second electronic device.
  • the content of one screen projection window is the same as the content displayed on the first electronic device, and this screen projection window is the main window;
  • the other two screen projection windows display content different from that displayed on the first electronic device, and are respectively the first sub-window and the second sub-window.
  • the second electronic device displays the screen projection content sent by the first electronic device through multiple screen projection windows, which can be called multi-screen collaboration.
  • the above-mentioned multiple screen projection windows (i.e., the main window and the sub-windows) can all be called collaborative windows.
  • the inventor of this application discovered that, when the display content of the screen projection windows is captured, this can be achieved through the existing screenshot functions of the screen-casting device and the screen-projected device.
  • however, there are problems such as inaccurate screenshot content and inconvenient user operation, resulting in a poor operating experience for the user.
  • Figure 2 exemplarily shows a method of obtaining screencast content through the existing screen capture function of the screencasting device.
  • the user can double-click the screen of the screen casting device with his knuckles to capture the content of the main window.
  • this method can only capture the screencast content of the main window.
  • on the screen-projected device, the user can pull down the control center interface with the mouse and then click the screenshot control with the mouse to capture the content of the screencast window, as shown in (C) in Figure 2.
  • although this method is performed on the screen-projected device, it is difficult for the user to operate through the mouse, and the display content of only one screen projection window can be obtained at a time.
  • the existing screenshot function of the device being projected is not accurate in capturing one or more projection windows.
  • the screen projection window cannot be captured accurately, and it is difficult to capture multiple screen projection windows at the same time.
  • for example, the user needs to manually adjust the order and alignment of the screen projection windows; it is difficult for the user to control the gaps between the screen projection windows or the size of the screenshot image at the pixel level, and manually controlling the width or height of the screenshot image down to a few pixels is very time-consuming.
  • in the screenshot method provided by the embodiments of this application, the second electronic device can obtain the display content of multiple screen projection windows based on user operations and synthesize the display content of the multiple screen projection windows into a screenshot image.
  • the user can accurately obtain the display content of multiple screen projection windows through the second electronic device, making the user's screenshot of the screen projection window faster, more accurate and more convenient during the screen projection process.
  • in the embodiments of this application, screencasting refers to a screen projection technology in which a first electronic device (such as a mobile phone, a tablet, etc.) transmits image resources directly, point-to-point, to a second electronic device (such as a TV, a smart screen, etc.), and the second electronic device displays the image resources.
  • the first electronic device may also be called the sending end (source end), and the second electronic device may also be called the receiving end (sink end).
  • the communication connection established between the first electronic device and the second electronic device may include, but is not limited to: a wireless fidelity direct (Wi-Fi direct, also known as wireless fidelity peer-to-peer, Wi-Fi P2P) communication connection, a Bluetooth communication connection, a near field communication (NFC) connection, a wired connection, etc.
  • it should be noted that "screen projection technology" is merely a term used in this embodiment; its meaning has been recorded in this embodiment, and its name does not constitute any limitation on this embodiment.
  • the screen projection technology may also be called other terms such as multi-screen interaction, full-share screen projection, wireless display, etc.
  • the image resources shared between the first electronic device and the second electronic device may include any one or a combination of the following: video, text, pictures, photos, audio, tables, etc.
  • for example, an image resource can be a movie, a TV series, a short video, a musical, etc.
  • the image resources shared between the first electronic device and the second electronic device may be network image resources, local image resources, or a combination of network image resources and local image resources.
  • the network image resources refer to image resources obtained by the first electronic device from the network, such as videos obtained from a server that provides video services when the first electronic device runs a video application.
  • Local image resources refer to image resources stored or generated locally by the first electronic device, such as pictures or tables stored locally by the first electronic device.
  • the screen capture system 10 may include a first electronic device 101 and a second electronic device 102 .
  • the first electronic device 101 can establish a wireless connection with the second electronic device 102 through a wireless communication method (for example, wireless fidelity (Wi-Fi), Bluetooth, etc.).
  • the first electronic device 101 can transmit file data to the second electronic device 102 through a wireless connection, or the first electronic device 101 can project an application interface to the second electronic device 102 for display, etc.
  • in some embodiments, the Real Time Streaming Protocol (RTSP) can be used to control the transmission of real-time data.
  • RTSP is a multimedia streaming protocol used to control sound or video, and allows simultaneous control of multiple streaming requirements.
  • the first electronic device 101 can control the transmission of the data stream through RTSP.
  • for example, the first electronic device 101 can mix the compressed H.264-format video and the advanced audio coding (AAC)-format audio into a transport stream (TS) file, and send the TS file to the second electronic device 102 through Wi-Fi P2P using the RTSP protocol.
  • the second electronic device 102 receives data from the first electronic device through the RTSP protocol.
  • H.264 is a video codec protocol
  • AAC is an audio codec protocol.
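  • As a rough, non-normative sketch of the TS muxing step described above, an implementation could delegate the work to ffmpeg; the file names are placeholders, and the embodiments do not prescribe any particular tool:

        import subprocess

        # Mux H.264 video and AAC audio into an MPEG transport stream (TS) file.
        subprocess.run(
            ["ffmpeg", "-i", "projection.mp4",
             "-c:v", "copy",   # video is already H.264-encoded, so copy it as-is
             "-c:a", "aac",    # encode the audio track as AAC
             "-f", "mpegts", "projection.ts"],
            check=True,
        )
        # The TS file can then be sent to the sink over Wi-Fi P2P under RTSP control.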
  • the first electronic device 101 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device, and/or a smart city device, to name a few.
  • the second electronic device 102 may be a tablet, a monitor, a television, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer, a netbook, an augmented reality device, a virtual reality device, an artificial intelligence device, a vehicle-mounted device, a smart home device, etc.
  • for the structures of the first electronic device 101 and the second electronic device 102, reference can be made to the relevant descriptions in subsequent embodiments, which are not repeated here.
  • the screenshot system 10 may also include: a Wi-Fi access point 103 and a server 104.
  • the first electronic device 101 can access the Wi-Fi access point 103.
  • the server 104 can provide network audio and video services.
  • the server 104 may be a server that stores a variety of image resources, such as a Huawei video server that provides audio and video services.
  • the number of servers 104 may be one or more.
  • for example, the image resources projected to the second electronic device 102 may be network videos that the first electronic device 101 obtains from the server 104 through the Wi-Fi access point 103 when running a video application.
  • FIG. 4A shows a schematic diagram of the hardware structure of the electronic device 100.
  • electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different component configuration.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated access and reduces the waiting time of the processor 110, thereby improving system efficiency.
  • processor 110 may include one or more interfaces.
  • interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 110 may include multiple sets of I2C buses.
  • the processor 110 can separately couple the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 can be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • processor 110 may include multiple sets of I2S buses.
  • the processor 110 can be coupled with the audio module 170 through the I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface to implement the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface to implement the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface to implement the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 and the camera 193 communicate through the CSI interface to implement the shooting function of the electronic device 100 .
  • the processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, display screen 194, wireless communication module 160, audio module 170, sensor module 180, etc.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the SIM interface can be used to communicate with the SIM card interface 195 to implement the function of transmitting data to the SIM card or reading data in the SIM card.
  • the USB interface 130 is an interface that complies with the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through them. This interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationships between the modules illustrated in the embodiments of the present application are only schematic illustrations and do not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, internal memory 121, external memory, display screen 194, camera 193, wireless communication module 160, etc.
  • the wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 may be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization.
  • For example, antenna 1 can be multiplexed as a diversity antenna for a wireless local area network. In other embodiments, the antennas may be used in conjunction with tuning switches.
  • the mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be disposed in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs sound signals through audio devices (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110 and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other technologies.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), BeiDou navigation satellite system (BDS), quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
  • Display 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can implement the shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera sensor through the lens, the optical signal is converted into an electrical signal, and the camera sensor passes the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, etc. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
  • NPU is a neural network (NN) computing processor.
  • Intelligent cognitive applications of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 100 .
  • the internal memory 121 may include a program storage area and a data storage area.
  • the program storage area can store the operating system and at least one application required by a function (such as a face recognition function, a fingerprint recognition function, a mobile payment function, etc.).
  • the data storage area can store data created during the use of the electronic device 100 (such as face information template data, fingerprint information templates, etc.).
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal output, and is also used to convert an analog audio input into a digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110.
  • Speaker 170A, also called a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to hands-free calls.
  • Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • When the electronic device 100 answers a call or a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
  • Microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with their mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which in addition to collecting sound signals, may also implement a noise reduction function. In other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions, etc.
  • the headphone interface 170D is used to connect wired headphones.
  • the headphone interface 170D may be a USB interface 130, or may be a 3.5mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense pressure signals and can convert the pressure signals into electrical signals.
  • pressure sensor 180A may be disposed on display screen 194 .
  • pressure sensors 180A there are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, etc.
  • a capacitive pressure sensor may include at least two parallel plates of conductive material.
  • the electronic device 100 determines the intensity of the pressure based on the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold is applied to the short message application icon, an instruction to create a new short message is executed.
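  • A toy sketch of this threshold logic (the threshold value and instruction names are illustrative only):

        FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized pressure value

        def dispatch_touch(intensity):
            """Map the touch intensity on the short message icon to an operation instruction."""
            if intensity < FIRST_PRESSURE_THRESHOLD:
                return "view_short_message"
            return "create_new_short_message"

        print(dispatch_touch(0.3))  # view_short_message
        print(dispatch_touch(0.8))  # create_new_short_message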
  • the gyro sensor 180B may be used to determine the motion posture of the electronic device 100.
  • In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) can be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. For example, when the shutter is pressed, the gyro sensor 180B detects the angle at which the electronic device 100 shakes, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to offset the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • Air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • Magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may utilize the magnetic sensor 180D to detect opening and closing of the flip holster.
  • In some embodiments, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D, and then set features such as automatic unlocking upon flipping open based on the detected opening/closing state of the holster or of the flip cover.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices and be used in horizontal and vertical screen switching, pedometer and other applications.
  • Distance sensor 180F for measuring distance.
  • Electronic device 100 can measure distance via infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may utilize the distance sensor 180F to measure distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light outwardly through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the electronic device 100 can use the proximity light sensor 180G to detect when the user holds the electronic device 100 close to the ear for talking, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touching.
  • Fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to achieve fingerprint unlocking, access to application locks, fingerprint photography, fingerprint answering of incoming calls, etc.
  • Temperature sensor 180J is used to detect temperature.
  • the electronic device 100 utilizes the temperature detected by the temperature sensor 180J to execute the temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the low temperature from causing the electronic device 100 to shut down abnormally. In some other embodiments, when the temperature is lower than another threshold, the electronic device 100 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
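• purely for illustration (not part of the embodiments above), the temperature processing strategy can be sketched as follows in Java; the threshold values and method names are hypothetical assumptions:

```java
// Illustrative thermal-policy sketch; thresholds and handler names are hypothetical.
public class ThermalPolicy {
    private static final float HIGH_TEMP_C = 45.0f; // assumed upper threshold
    private static final float LOW_TEMP_C  = 0.0f;  // assumed lower threshold

    // Called whenever the temperature sensor 180J reports a new reading.
    void onTemperatureReported(float tempC) {
        if (tempC > HIGH_TEMP_C) {
            reduceProcessorPerformance();  // lower clocks to cut power and heat
        } else if (tempC < LOW_TEMP_C) {
            heatBattery();                 // warm the battery to avoid abnormal shutdown
            boostBatteryOutputVoltage();   // compensate the output voltage at low temperature
        }
    }

    void reduceProcessorPerformance() { /* platform-specific */ }
    void heatBattery()                { /* platform-specific */ }
    void boostBatteryOutputVoltage()  { /* platform-specific */ }
}
```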
  • Touch sensor 180K also called “touch panel”.
  • the touch sensor 180K can be disposed on the display screen 194.
• the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen".
  • the touch sensor 180K is used to detect a touch operation on or near the touch sensor 180K.
  • the touch sensor can pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display screen 194 .
  • the buttons 190 include a power button, a volume button, etc.
• the buttons 190 may be mechanical buttons or touch buttons.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
• touch operations acting on different areas of the display screen 194 can also correspond to different vibration feedback effects of the motor 191.
• different application scenarios (for example, time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
• the indicator 192 may be an indicator light, which may be used to indicate charging status and power changes, and may also be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be connected to or separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the electronic device 100 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The multiple cards can be of the same type, or they can be different.
  • the SIM card interface 195 is also compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communications.
  • the electronic device 100 can execute the screenshot method through the processor 110 .
  • FIG. 4B is a software structure block diagram of the first electronic device 101 provided by the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has clear roles and division of labor.
  • the layers communicate through software interfaces.
  • the Android system is divided into four layers, from top to bottom: application layer, application framework layer, Android runtime and system libraries, and kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, and screen projection management.
• through the screen projection management application, users can establish communication with other devices and then project image resources onto the interface of the device with which the communication connection is established.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
• the application framework layer may include a display manager, a sensor manager, a cross-device connection manager, an event manager, an activity manager, a window manager, a content provider, a view system, a resource manager, a notification manager, etc.
  • the display manager is used for system display management and is responsible for the management of all display-related transactions, including creation, destruction, orientation switching, size and status changes, etc.
• generally, there will be only one default display module on a single device, namely the main display module.
  • the first electronic device 101 can create multiple virtual display modules.
  • Heterogeneous screen projection means creating a virtual display module to carry the display of the application and be used for screen projection.
  • the first electronic device can display multiple screen projection windows through the display manager, and adjust the screen projection windows and their display contents.
• for the specific adjustment process, please refer to the relevant content below.
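• as a non-authoritative illustration only, a virtual display module of the kind described above might be created on Android as in the following sketch; DisplayManager.createVirtualDisplay is the public Android API, while the display name, dimensions, and flag choice below are assumptions:

```java
import android.content.Context;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.view.Surface;

// Sketch: create a virtual display to carry an application's rendering for
// screen projection. The surface would typically come from a video encoder
// that feeds the projection stream to the screen-projected device.
public class ProjectionDisplayHelper {
    public static VirtualDisplay createProjectionDisplay(
            Context context, Surface encoderSurface) {
        DisplayManager dm =
                (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
        return dm.createVirtualDisplay(
                "projection-window",   // hypothetical display name
                1280, 720, 320,        // assumed width, height, densityDpi
                encoderSurface,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION);
    }
}
```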
  • the sensor manager is responsible for managing the status of sensors, managing applications to monitor sensor events, and reporting events to applications in real time.
  • the cross-device connection manager is used to establish a communication connection with the second electronic device 102 and send image resources to the second electronic device 102 based on the communication connection.
• the event manager is used for the event management service of the system; it is responsible for receiving events reported by the underlying layer and distributing them to each window, completing the reception and distribution of events.
• the activity manager is used for the management of activity (Activity) components, including startup management, life cycle management, task direction management, etc.
  • a window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • the window manager is also responsible for window display management, including window display mode, display size, display coordinate position, display level and other related management.
  • Content providers are used to store and retrieve data and make this data accessible to applications.
• the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, etc.
• the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
• the notification manager can also display notifications in the status bar at the top of the system in the form of charts or scroll bar text, such as notifications for applications running in the background, or display notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a beep sounds, the electronic device vibrates, or the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
• the core library contains two parts: one part is the function libraries that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and application framework layer into binary files.
  • the virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.
• the system library can include multiple functional modules, for example: a surface manager, media libraries, 3D graphics processing libraries (for example: OpenGL ES), 2D graphics engines (for example: SGL), event data, etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the system library is specifically used for data management in cross-device scenarios.
• in a cross-device scenario, the first electronic device 101 sends the layer data and the audio and video data in the system library to the second electronic device 102, and the second electronic device 102 sends event data to the first electronic device 101, completing the mutual transmission of events.
• in addition, the second electronic device 102 transmits the sensor data status to the first electronic device 101 in real time; only then can the virtual display module in the first electronic device 101 correctly control its display based on the sensor status of the second electronic device 102.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.
  • 2D Graphics Engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the kernel layer provides capabilities such as device discovery, device authentication, device connection, etc., and reports events such as device discovery and departure upwards.
  • the embodiment of this application uses the electronic device 200 as an example to introduce the hardware structure and software structure of the second electronic device 102 .
  • the second electronic device 102 may include: a processor 222 , a memory 223 , a wireless communication module 224 , a power switch 225 , a display screen 229 , and an audio module 230 .
• the second electronic device 102 may further include a wired LAN communication processing module 226, a high-definition multimedia interface (HDMI) communication processing module 227, a USB communication processing module 228, etc.
  • Processor 222 may be used to read and execute computer-readable instructions.
  • the processor 222 may mainly include a controller, arithmetic unit, and a register.
  • the controller is mainly responsible for decoding instructions and issuing control signals for operations corresponding to the instructions.
  • the arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, and logical operations. It can also perform address operations and conversions.
  • Registers are mainly responsible for storing register operands and intermediate operation results temporarily stored during instruction execution.
  • the hardware architecture of the processor 222 may be an application specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture or an NP architecture, etc.
• the processor 222 may be used to parse signals received by the wireless communication module 224, such as a new URL sent by the first electronic device 101, and to obtain multiple videos and associated videos in the playlist according to the new URL.
  • Wireless communication module 224 may include a WLAN communication processing module.
  • the wireless communication module 224 may also include a Bluetooth (BT) communication processing module, an NFC processing module, a cellular mobile communication processing module (not shown), and the like.
  • the wireless communication module 224 can be used to establish a communication connection with the first electronic device 101 .
  • the communication connections established by the wireless communication module 224 and the first electronic device 101 may be of various types.
  • the WLAN communication processing module can be used to establish a Wi-Fi direct communication connection with the first electronic device 101
  • the Bluetooth (BT) communication processing module can be used to establish a Bluetooth communication connection with the first electronic device 101
• the NFC processing module can be used to establish an NFC connection with the first electronic device 101, and so on.
  • the wireless communication module 224 can also be used to establish a communication connection with the first electronic device 101, and receive the video stream sent by the first electronic device 101 based on the communication connection.
  • the communication connection established between the wireless communication module 224 and the first electronic device 101 can transmit data based on the HTTP protocol. This application does not impose any restrictions on the type of communication connection and data transmission protocol between devices.
  • Memory 223 is coupled to processor 222 for storing various software programs and/or sets of instructions.
  • the memory 223 may include high-speed random access memory, and may also include non-volatile memory, such as one or more disk storage devices, flash memory devices or other non-volatile solid-state storage devices.
  • the memory 223 can store operating systems, such as uCOS, VxWorks, RTLinux and other embedded operating systems.
  • the memory 223 may also store a communication program that may be used to communicate with the first electronic device 101, one or more servers, or additional devices.
  • the power switch 225 may be used to control power supply to the second electronic device 102 .
  • the wired LAN communication processing module 226 can be used to communicate with other devices in the same LAN through the wired LAN, and can also be used to connect to the WAN through the wired LAN and communicate with devices in the WAN.
  • the HDMI communication processing module 227 may be used to communicate with other devices through an HDMI interface (not shown).
  • USB communication processing module 228 may be used to communicate with other devices through a USB interface (not shown).
• the display screen 229 can be used to display projected pages, videos, etc.
  • the display screen 229 can be an LCD, OLED, AMOLED, FLED, QLED or other display screen.
  • For the content displayed on the display screen 229 reference may be made to the relevant descriptions of subsequent method embodiments.
• based on the playlist of multiple videos, the associated videos, and other video streams that the wireless communication module 224 receives from the server, the display screen 229 can realize continuous playback of multiple videos.
  • Audio module 230 can be used to output audio signals through the audio output interface, so that the second electronic device 102 supports audio playback.
  • the audio module 230 may also be used to receive audio data through the audio input interface.
  • the audio module 230 may include, but is not limited to, a microphone, a speaker, a receiver, and the like.
  • the second electronic device 102 may also include a serial interface such as an RS-232 interface.
  • This serial interface can be connected to other devices, such as speakers and other audio external devices, so that the display and audio external devices can cooperate to play audio and video.
  • the structure illustrated in FIG. 3 does not constitute a specific limitation on the second electronic device 102.
  • the second electronic device 102 may include more or less components than shown in the figures, or combine some components, or split some components, or arrange different components.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the second electronic device 102 may include the hardware included in the first electronic device 101 shown in FIG. 1 described above.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the software system of the second electronic device 102 may adopt a layered architecture, event-driven architecture, microkernel architecture, microservice architecture, or cloud architecture, etc.
• the software system of the second electronic device 102 may include but is not limited to Linux or other operating systems, for example, Huawei's HarmonyOS (Hongmeng) system.
• taking the second electronic device 102 running the Android system as an example, the system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
  • the application layer can include screen projection management applications for device connection and screen display
  • the application framework layer can include cross-device connection managers, event managers, window managers and display managers, etc.
• the system libraries can include the media library, event data, etc.
  • the kernel layer is used for device discovery, device authentication and device connection, etc. The details of each part can be found in the relevant descriptions in Figure 4B, which will not be described again here.
• based on the schematic diagram of the screenshot system 10 shown in FIG. 3 , the schematic diagrams of the hardware and software structures of the first electronic device shown in FIGS. 4A and 4B , and the schematic diagram of the hardware structure of the second electronic device shown in FIG. 4C , the screenshot method provided by the embodiments of the present application will be described in detail below.
  • FIG. 5 exemplarily shows the flow of a screenshot method provided by an embodiment of the present application.
  • This screenshot method can include some or all of the following steps:
  • the first electronic device establishes a communication connection with the second electronic device.
  • the communication connection established between the first electronic device and the second electronic device may include but is not limited to: Wi-Fi P2P communication connection, Bluetooth communication connection, NFC connection, etc.
  • the first electronic device sends screen projection data to the second electronic device based on the communication connection.
  • the screen projection data supports display on N screen projection windows, where N is a positive integer not less than 2.
  • multiple screen projection windows may include a main window and n sub-windows, where n is a positive integer.
  • the second electronic device displays the screen projection images through N screen projection windows based on the screen projection data.
  • the second electronic device can generate multiple screen projection windows based on the screen projection data, and display the screen projection content on the multiple screen projection windows.
  • the projection content displayed on multiple projection windows may be the same or different, and is not limited here. For specific implementation, please refer to the relevant content in step S703, which will not be described again here.
  • the second electronic device sends an acquisition request to the first electronic device.
  • the acquisition request is used to request a screenshot image.
  • the screenshot image includes the screen projection images displayed by at least two screen projection windows.
  • the second electronic device may send the acquisition request to the first electronic device based on the above communication connection in response to the user operation.
  • the acquisition request may include instruction information for instructing the screen projection image; the screen projection image may be the screen projection image on all screen projection windows currently displayed by the second electronic device.
• specifically, the second electronic device can display a screenshot control; furthermore, in response to a user operation acting on the screenshot control, the second electronic device generates an acquisition request and sends the acquisition request to the first electronic device based on the above communication connection.
  • the second electronic device can also receive the user operation through other methods, which is not limited here.
• for example, the second electronic device may send an acquisition request to the first electronic device in response to a preset gesture acting on the touch pad or touch screen; for another example, the second electronic device may send an acquisition request to the first electronic device in response to a voice wake-up or a knuckle tap on the screen of the second electronic device; for another example, the second electronic device can receive a user operation on a preset shortcut key through a keyboard and send an acquisition request to the first electronic device.
  • the user operation may also be a user operation acting on the first electronic device.
• the mobile phone can send a trigger message to the second electronic device when it receives a voice wake-up or a double-click operation.
  • the trigger message is used to instruct the display of the screenshot status corresponding to each screen projection window;
  • the second electronic device sends an acquisition request to the first electronic device.
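• purely as a sketch of the data such an acquisition request might carry (the application does not prescribe a wire format, and all class and field names below are hypothetical):

```java
import java.util.List;

// Hypothetical shape of the acquisition request sent from the second
// electronic device to the first; every name here is illustrative only.
public class AcquisitionRequest {
    // Instruction information identifying which screen projection images the
    // screenshot should include (e.g., ids of the target projection windows).
    public List<String> projectionWindowIds;

    // Optional synthesis rules carried with the request: the overall
    // screenshot size (per-image sizes/positions could be carried similarly).
    public int screenshotWidth;
    public int screenshotHeight;
}
```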
  • the first electronic device synthesizes the screen projection image to obtain a screenshot image.
• specifically, the first electronic device may acquire the screen projection images based on the instruction information in the acquisition request used to indicate the screen projection images, and then synthesize the screen projection images according to the synthesis rules to obtain the screenshot image.
• the synthesis rules may include the size of the screenshot image, the size and position of each screen projection image in the screenshot image, etc.; the synthesis rules may be preset by the first electronic device, or may be carried in the acquisition request, which is not limited here.
• for the specific implementation of synthesis, please refer to the relevant content of step S708, which will not be described again here.
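• for concreteness, a minimal Android-style sketch of synthesizing several screen projection images into one screenshot image according to per-image destination rectangles; this is one possible implementation under assumed synthesis rules, not the claimed method itself:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Rect;
import java.util.List;

// Sketch: compose projection images into one screenshot bitmap, where the
// synthesis rules supply a destination rectangle for each image.
public final class ScreenshotComposer {
    public static Bitmap compose(int outWidth, int outHeight,
                                 List<Bitmap> images, List<Rect> destRects) {
        Bitmap screenshot = Bitmap.createBitmap(outWidth, outHeight,
                Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(screenshot);
        for (int i = 0; i < images.size(); i++) {
            // null src rect draws the whole image, scaled into its dest rect.
            canvas.drawBitmap(images.get(i), null, destRects.get(i), null);
        }
        return screenshot;
    }
}
```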
  • the first electronic device sends the screenshot image to the second electronic device.
• the screenshot image may be the screen projection images displayed by at least two screen projection windows, or an image synthesized from the screen projection images displayed by at least two screen projection windows. That is to say, after receiving the acquisition request, the first electronic device can directly send the screen projection images displayed by at least two screen projection windows to the second electronic device, or it can send the screenshot image synthesized from the screen projection images displayed by the at least two screen projection windows to the second electronic device.
  • the first electronic device may send the screenshot image to the second electronic device based on the above-mentioned communication connection.
  • the second electronic device displays the screenshot image.
  • the second electronic device may display the screenshot image through the preview window after receiving the screenshot image from the first electronic device.
• the second electronic device may also delete at least one screen projection image from the screenshot image in response to a user operation, and display the screenshot image after the deletion.
  • the screenshot image consists of at least two screen projection images.
• for example, when the second electronic device displays the screenshot image, it can also display a cancel control in the upper right corner of each screen projection image that constitutes the screenshot image; furthermore, in response to the user's operation on a cancel control, the second electronic device deletes the screen projection image corresponding to that cancel control from the screenshot image, and then displays the screenshot image after the deletion.
  • the second electronic device can also receive the user's operation of selecting the screen projection image in other forms, which is not limited here.
• after displaying the screenshot image, the second electronic device can also display a selection control, which is used by the user to choose whether to modify the screenshot image through a function of the first electronic device or of the second electronic device; furthermore, in response to a user operation on the selection control, the second electronic device displays an interface corresponding to the chosen function, and the interface is used for the user to modify the screenshot image.
  • the second electronic device can also select the function of the first electronic device or the second electronic device through other methods, such as voice awakening, etc., which is not limited here.
• after the second electronic device displays the screenshot image, in response to a user operation on the screenshot image, the second electronic device can save, send, or edit the screenshot image by itself; the second electronic device may also process the screenshot image by invoking a function of the first electronic device.
• steps S504 and S507 are optional steps. That is to say, after receiving the acquisition request of the second electronic device, the first electronic device can directly send the screen projection images displayed by at least two screen projection windows to the second electronic device as the screenshot image, and the second electronic device may not display the screenshot image after receiving it. Optionally, the second electronic device can directly save the screenshot image.
  • FIG. 6 schematically shows the flow of another screenshot method provided by an embodiment of the present application.
  • This screenshot method can include some or all of the following steps:
  • the first electronic device establishes a communication connection with the second electronic device.
  • the communication connection established between the first electronic device and the second electronic device may include but is not limited to: Wi-Fi P2P communication connection, Bluetooth communication connection, NFC connection, etc.
  • the first electronic device sends screen projection data to the second electronic device based on the communication connection, and the screen projection data supports display on at least two screen projection windows.
  • At least two screen projection windows may include a main window and n sub-windows, where n is a positive integer.
  • the second electronic device displays the screen projection images through at least two screen projection windows based on the screen projection data.
  • the second electronic device can generate multiple screen projection windows based on the screen projection data, and display the screen projection content on the multiple screen projection windows.
  • the projection content displayed on multiple projection windows may be the same or different, and is not limited here.
  • the second electronic device synthesizes the screen projection image to obtain a screenshot image.
• in response to a user operation, the second electronic device can synthesize the screen projection images displayed in the current screen projection windows according to the synthesis rules to obtain a screenshot image.
  • the synthesis rules may include the size of the screenshot image and the size and position of the screen projection image in the screenshot image.
  • the second electronic device can display a screen capture control; furthermore, the second electronic device can respond to a user operation on the screen capture control and synthesize the screen projection image currently displayed in the screen projection window to obtain a screen capture image.
  • the second electronic device can also receive the user operation through other methods, which is not limited here.
• for example, the second electronic device may synthesize the screen projection images into a screenshot image in response to a preset gesture acting on the trackpad or touch screen; for another example, the second electronic device may synthesize the screen projection images into a screenshot image in response to a voice wake-up or a knuckle tap on the screen of the second electronic device; for another example, the second electronic device can receive a user operation on a preset shortcut key through the keyboard and synthesize the screen projection images into a screenshot image.
  • the user operation may also be a user operation acting on the first electronic device.
• the mobile phone can send a trigger message to the second electronic device when it receives a voice wake-up or a double-click operation.
  • the trigger message is used to instruct the display of the screenshot status corresponding to each screen projection window;
  • the second electronic device synthesizes the screen projection image to obtain a screenshot image.
  • the second electronic device displays the screenshot image.
  • the second electronic device can display the screenshot image through the preview window.
• the second electronic device may also delete at least one of the screen projection images from the screenshot image in response to a user operation, and display the screenshot image after the deletion.
  • the screenshot image consists of at least two screen projection images.
• for example, when the second electronic device displays the screenshot image, it can also display a cancel control in the upper right corner of each screen projection image that constitutes the screenshot image; furthermore, in response to the user's operation on a cancel control, the second electronic device deletes the screen projection image corresponding to that cancel control from the screenshot image, and then displays the screenshot image after the deletion.
  • the second electronic device can also receive the user's operation of selecting the screen projection image in other forms, which is not limited here.
• the second electronic device can also save, send, or edit the screenshot image in response to a user operation on the screenshot image; it can also call a function of the first electronic device to process the screenshot image.
  • the second electronic device may respond to the user operation by sending a save request to the first electronic device, where the save request includes a screenshot image, so that the first electronic device saves the screenshot image after receiving the save request.
• for the specific implementation of processing the screenshot image, please refer to the relevant content of step S710 in the embodiment below, which will not be described again here.
  • the above-mentioned step S604 may be: the second electronic device responds to the user operation and uses the screen projection image as a screenshot image. That is to say, in response to user operation, the second electronic device can save or display the screen projection images displayed in at least two screen projection windows as screenshot images.
  • the screencast image used to synthesize the screenshot image may be a target image selected by the user. That is to say, when performing step S604, the second electronic device may first select a target image based on the user's operation of selecting the target screen projection window; and then synthesize the target images selected by the user to obtain a screenshot image.
• for details, please refer to the relevant content of step S704 to step S706 in the embodiment below, which will not be described again here.
  • FIG. 7 schematically shows the flow of yet another screenshot method provided by an embodiment of the present application.
  • This screenshot method can include some or all of the following steps:
  • the first electronic device establishes a communication connection with the second electronic device.
  • the communication connection established between the first electronic device and the second electronic device may include but is not limited to: Wi-Fi P2P communication connection, Bluetooth communication connection, NFC connection, etc.
  • the first electronic device sends screen projection data to the second electronic device based on the communication connection, and the screen projection data supports display on at least two screen projection windows.
  • At least two screen projection windows may include a main window and n sub-windows, where n is a positive integer.
  • the second electronic device displays the screen projection images through at least two screen projection windows based on the screen projection data.
  • the second electronic device can generate multiple screen projection windows based on the screen projection data, and display the screen projection content on the multiple screen projection windows.
  • the projection content displayed on multiple projection windows may be the same or different, and is not limited here.
• specifically, the first electronic device can open a main window based on a user operation and display the screen projection content in the main window; then, in response to another user operation, it can open a sub-window and display the screen projection content through the sub-window and the main window.
• the screen projection data corresponding to the content displayed in the main window and the sub-window may be sent by the first electronic device to the second electronic device at the same time, or may be sent to the second electronic device through multiple data transmissions, which is not limited here.
  • the second electronic device may receive screen projection data from the first electronic device.
• specifically, the screen projection data may include multiple sub-data, such as first screen projection data, second screen projection data, and third screen projection data; furthermore, the second electronic device can create a main window based on the first screen projection data, generate the screen projection image corresponding to the main window, and display that image in the main window; create a first sub-window based on the second screen projection data, generate the screen projection image corresponding to the first sub-window, and display that image in the first sub-window; and create a second sub-window based on the third screen projection data, generate the screen projection image corresponding to the second sub-window, and display that image in the second sub-window.
  • the second electronic device responds to the first user operation and displays the screenshot status of each screen projection window.
• the first user operation may be a user operation acting on the second electronic device, which may specifically include gesture triggering, control triggering, preset shortcut key triggering, and other operations; the screenshot status of a screen projection window may include a selected state and an unselected state.
  • the first user operation may also be called a triggering operation.
  • the following exemplarily shows several first user operations for the second electronic device.
• for example, the second electronic device is equipped with a touch pad or a touch screen, and the second electronic device can receive the first user operation through the touch pad or the touch screen, where the first user operation can be a preset gesture acting on the touch pad or touch screen of the second electronic device.
  • the preset gesture may be sliding to the left on the touch panel with three fingers.
• when the second electronic device detects that the user slides left on the touch panel with three fingers, it can display the screenshot status of each screen projection window in response to the user operation; for another example, the preset gesture can be sliding down on the touch screen with three fingers, and when the second electronic device detects that the user slides down on the touch screen with three fingers, it can display the screenshot status of each screen projection window in response to the user operation.
  • the preset gestures can also be set by the user, which is not limited here.
• for another example, the second electronic device may receive the first user operation through a sensor, and the first user operation may be a voice wake-up or a knuckle tap on the screen of the second electronic device.
• for example, the second electronic device can obtain the user's voice and display the screenshot status of each screen projection window when it recognizes that the user's voice meets a preset condition; for another example, the second electronic device can determine through the sensor that the screen has been tapped twice by a knuckle and then display the screenshot status of each screen projection window; for another example, while the screen projection images are displayed on at least two screen projection windows, the camera can be turned on to obtain images, gesture recognition can be performed on the obtained images, and the screenshot status of each screen projection window can be displayed when a preset gesture is recognized.
  • the second electronic device can receive the first user operation through related controls.
  • the second electronic device can display a screenshot control on the main window during the process of displaying multiple screen projection windows.
• the screenshot control is used to trigger the display of the screenshot status of the multiple screen projection windows.
• for example, the second electronic device displays a main window, a first sub-window, and a second sub-window, where the main window displays a screenshot control; furthermore, in response to a user operation on the screenshot control, such as clicking the screenshot control with the mouse or touching the screenshot control, the second electronic device displays the screenshot status of each screen projection window.
  • the second electronic device is equipped with a keyboard, and the second electronic device can receive the first user operation through the keyboard.
• in response to a user operation on a preset shortcut key, the second electronic device can display the screenshot status of each screen projection window.
  • the preset shortcut key may be CTRL+L.
• for example, the second electronic device may display the screenshot status of each screen projection window in response to the user pressing CTRL+L on the keyboard. It should be noted that the user can also change the preset shortcut key, which is not limited here.
  • the first user operation may also be a user operation that acts on the first electronic device.
• the mobile phone can send a trigger message to the second electronic device when it receives a voice wake-up or a double-click operation.
  • the trigger message is used to instruct the display of the screenshot status corresponding to each screen projection window;
  • the second electronic device displays the screenshot status of each screen projection window.
• specifically, in response to the first user operation, the second electronic device may display the screenshot status of each screen projection window as unselected or selected.
• the unselected state can also be called a selectable state, that is, the user can select the screen projection window through a second user operation; the second electronic device can indicate the screenshot status of a screen projection window through the color of the screen projection window or the border of the screen projection window, etc.
  • the second electronic device may indicate the screenshot status of the screen projection window through the color of the screen projection window. Taking the color of the screen projection window as gray to represent that the screen projection window is unselected as an example, assume that the screen projection window displayed by the second electronic device is as shown in Figure 8.
• assume the second electronic device displays the main window, the first sub-window, and the second sub-window, where the main window displays a screenshot control; the second electronic device can then respond to the first user operation on the screenshot control and display an interface as shown in Figure 9.
• in Figure 9, the gray areas are represented by slashes; it can be seen that the second electronic device displays the screenshot status of the screen projection windows as unselected, and the color of all screen projection windows turns gray.
  • the second electronic device can also adjust the display of the screen projection window according to preset rules.
  • the preset rule can be to adjust the window size of all screen projection windows to the same size and display all screen projection windows equidistantly and without obstruction.
• for example, assume the screen projection windows displayed by the second electronic device are as shown in (A) of Figure 10, where the distances between the three screen projection windows are unequal, their sizes are inconsistent, and the windows block each other; the second electronic device can then adjust the main window to the leftmost position, followed by the first sub-window and the second sub-window, adjust the sizes of the three screen projection windows to be consistent, and make the distance between the main window and the first sub-window equal to the distance between the first sub-window and the second sub-window (see the sketch below).
  • the preset rules can also be other rules, which are not limited here.
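• a minimal sketch of the equal-size, equidistant preset rule described above; the WindowGeom type and the chosen dimensions are illustrative assumptions:

```java
// Sketch: normalize N projection windows to one size and equal spacing.
// WindowGeom and all dimension values here are hypothetical.
class WindowGeom { int x, y, width, height; }

final class LayoutRule {
    // Lay windows[0..n-1] out left to right: same size, equal gaps, no overlap.
    static void applyEqualLayout(WindowGeom[] windows,
                                 int winW, int winH, int gap, int startX, int y) {
        for (int i = 0; i < windows.length; i++) {
            windows[i].width  = winW;                 // same size for every window
            windows[i].height = winH;
            windows[i].x = startX + i * (winW + gap); // equidistant placement
            windows[i].y = y;
        }
    }
}
```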
• the second electronic device may also first identify the display content of all the screen projection windows; when it recognizes that a screen projection window displays sensitive information, the second electronic device may set the screenshot status of that screen projection window to a state in which screenshots cannot be taken, or display a prompt message.
  • the prompt message is used to remind the user that the screencast window involves sensitive information and to ask the user to reconfirm whether to take a screenshot of the screencast window.
• for example, when a screen projection window displays a user interface for entering a password, such as a login interface or a payment interface, the second electronic device can display prompt information.
  • the prompt information can be "This page involves sensitive information, screenshots are prohibited.”
• in response to a second user operation, the second electronic device displays the target screen projection windows as selected.
  • the second user operation is used to indicate at least two screen projection windows as the target screen projection windows.
  • the second electronic device can determine the target screen projection window from the screen projection window based on the second user operation; and further, display the target screen projection window in a selected state.
  • the second user operation may include multiple user operations.
• for example, if the first user operation is used to display all screen projection windows as unselected, then a user operation in the second user operation can be used to determine the screen projection window it indicates as a target screen projection window; if the first user operation is used to display all screen projection windows as selected, then a user operation in the second user operation can be used to determine the screen projection window it indicates as a non-target screen projection window, and the remaining windows are the target screen projection windows.
  • the second electronic device in response to the first user operation, displays the screenshot status of each screen projection window as an unselected state; further, in response to the second user operation, the second electronic device displays the target screen projection window. is in the selected state, and the second user operation is used to indicate the target screen projection window.
  • the screencasting window's screenshot status can be indicated by the color of the screencasting window.
  • the second electronic device responds to the first user operation and displays the screenshot status of each screen projection window as an unselected state.
• for example, the color of a screen projection window being gray represents the unselected state, and the page's own color represents the selected state; if the second user operation is clicking on a screen projection window, the user can click the main window and the second sub-window to determine them as the target screen projection windows; then, the second electronic device restores the main window and the second sub-window to the page's own colors as shown in Figure 8 to represent the selected state, and displays the first sub-window in gray to represent the unselected state.
  • the second electronic device determines the screen projection image displayed in the target screen projection window when receiving the first user operation as the target image.
  • the target image includes at least two screen projection images.
• specifically, the second electronic device may display a determination control, and the third user operation may be a user operation acting on the determination control. Then, in response to the third user operation on the determination control, the second electronic device may determine the screen projection images displayed by the target screen projection windows when the first user operation was received as the target images. For example, in response to the first user operation, the second electronic device may display the determination control shown in FIG. 11 , where FIG. 11 exemplarily shows that the main window and the second sub-window are determined as the target screen projection windows based on the second user operation; furthermore, in response to the user operation on the determination control, the second electronic device can determine the screen projection images displayed by the main window and the second sub-window when the first user operation was received as the target images.
  • the second electronic device can also receive the third user operation in other ways.
  • the third user operation can be a preset voice.
• when the second electronic device obtains the preset voice, it determines the screen projection images displayed in the target screen projection windows when the first user operation was received as the target images.
  • the second electronic device sends a synthesis request to the first electronic device, where the synthesis request is used to request to synthesize the target image.
  • the second electronic device may generate a synthesis request, where the synthesis request includes first indication information indicating the target image; and further, send the synthesis request to the first electronic device, where the synthesis request uses Requesting the first electronic device to synthesize the target image into a screenshot image.
  • the synthesis request may further include second indication information indicating the position of the target image in the screenshot image.
  • the second electronic device may determine the order in which the user selects the target screen projection window during the second user operation as the order in which the target images are arranged; and further, generates the second indication information.
• for example, the second user operation includes a first sub-operation and a second sub-operation, where the first sub-operation is used to determine the main window as a target screen projection window, and the second sub-operation is used to determine the second sub-window as a target screen projection window; if the second electronic device first receives the first sub-operation and then receives the second sub-operation, the second electronic device can generate second indication information, and the second indication information is used to indicate that the target image corresponding to the main window is located to the left of the target image corresponding to the second sub-window in the screenshot image.
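• as an illustrative sketch only, the synthesis request with its first and second indication information might be assembled as follows; the class and field names are hypothetical, and the ordering convention is one possible encoding:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: build a synthesis request whose target order mirrors the order in
// which the user selected the target projection windows. Names are hypothetical.
public class SynthesisRequest {
    // First indication information: which projection windows' images to synthesize.
    public final List<String> targetWindowIds = new ArrayList<>();

    // Second indication information is implicit in this encoding: index i in
    // the list means position i (left to right) in the screenshot image.
    public void onWindowSelected(String windowId) {
        targetWindowIds.add(windowId); // selection order == arrangement order
    }
}
```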
  • the first electronic device synthesizes the target images to obtain a screenshot image.
• specifically, the first electronic device can obtain the target images based on the first indication information in the synthesis request used to indicate the target images; furthermore, it synthesizes the target images according to the synthesis rules to obtain the screenshot image.
  • the synthesis rule may be to splice the target images, or it may be to splice part of the target images, or it may be to superimpose the target images, which is not limited here.
• for example, assume the synthesis rule is to splice the target images from left to right: if the images currently displayed by the main window and the first sub-window in FIG. 11 are the target images, the screenshot image generated based on the target images can be as shown in FIG. 12 .
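• a minimal sketch of such a left-to-right splicing rule, under the assumption that the target images are placed side by side at their original sizes:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import java.util.List;

// Sketch: splice target images left to right into a single screenshot.
// Widths are summed; the output height is the tallest image's height.
public final class SpliceRule {
    public static Bitmap spliceLeftToRight(List<Bitmap> targets) {
        int totalWidth = 0, height = 0;
        for (Bitmap b : targets) {
            totalWidth += b.getWidth();
            height = Math.max(height, b.getHeight());
        }
        Bitmap out = Bitmap.createBitmap(totalWidth, height,
                Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(out);
        float x = 0f;
        for (Bitmap b : targets) {
            canvas.drawBitmap(b, x, 0f, null); // place each image to the right
            x += b.getWidth();
        }
        return out;
    }
}
```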
  • the synthesis rule may also specify preset information included in the screenshot image, such as text or other images, and then the first electronic device synthesizes the target image and the preset information into the screenshot image.
  • the first electronic device can superimpose multiple screen projection images of the same size into one screenshot image.
  • the synthesis rules may include the size of the screenshot image, the size and position of the target image in the screenshot image, etc.; the synthesis rules may be preset by the first electronic device, or may be carried in the synthesis request, which is not limited here.
  • the first electronic device sends the screenshot image to the second electronic device.
  • the first electronic device may send the screenshot image to the second electronic device based on the above communication connection.
  • the second electronic device displays the screenshot image.
• specifically, the second electronic device may receive a preview request from the first electronic device, where the preview request includes the screenshot image; furthermore, when receiving the preview request, the second electronic device generates a preview window and displays the screenshot image through the preview window.
  • the second electronic device may also receive a user operation on the screen capture image; and in response to the user operation, process the screen capture image.
  • the second electronic device may display relevant controls for the screenshot image, and further process the screenshot image in response to the user's user operation on the relevant control.
• Figure 13 exemplarily shows a preview window and related controls, in which the undo control can be used to exit the screenshot or to re-determine the target images; the edit control is used to edit text on the screenshot image; the copy control is used to copy the screenshot image to the clipboard; the download control is used to save the screenshot image to the first electronic device; and the save control is used to save the screenshot image to the second electronic device.
  • the second electronic device may send a screenshot image to the first electronic device in response to a user operation on the save control; after receiving the screenshot image, the first electronic device saves the screenshot image. It should be noted that the first electronic device may also cache the screenshot image after generating the screenshot image, and save the screenshot image when the second electronic device sends a save instruction to the first electronic device.
  • the second electronic device can call the function of the first electronic device to implement the operation on the screenshot image.
  • the preview window generated by the second electronic device is used to display the operation interface of the photo editing software in the first electronic device, and the screenshot image is displayed in the display frame of the processed image in the interface.
• the second electronic device can receive user operations on the operation interface, and the user operates the operation interface to process the screenshot image through the existing functions of the image editing software.
• for another example, the second electronic device can display a new sub-window in response to a user operation and display the interface of the photo editing software of the first electronic device in that sub-window; furthermore, in response to the user placing the screenshot image on the interface of the photo editing software for processing, the second electronic device has the screenshot image processed by the first electronic device.
• the second electronic device may synthesize the target images into a screenshot image. That is to say, the above steps S707 to S709 may be replaced by step S711: the second electronic device synthesizes the target images to obtain the screenshot image.
• for the specific implementation of synthesizing the target images into a screenshot image, please refer to the description in step S708 and the related content above, which will not be described again here.
• FIGS. 14A-14D exemplarily illustrate user interfaces during the above screenshot process.
  • FIG. 14A exemplarily shows the user interface 610 when multiple screen projection windows are displayed on the second electronic device.
  • the user interface 610 may include: a main window 611, a first sub-window 612, a second sub-window 613, a screenshot control 614A on the main window 611, a minimize control 614B, a maximize control 614C and an exit control 614D.
  • the screenshot control 614A is used to receive the first user operation; the minimize control 614B is used to minimize the display of the main window 611; the maximize control 614C is used to maximize the display of the main window 611; and the exit control 614D is used to cancel the display of the screen projection window. For the specific contents of the main window and the sub-windows, refer to the relevant descriptions above; they are not repeated here.
  • FIG. 14A only illustrates a user interface of the second electronic device and should not be construed as limiting the embodiments of the present application.
  • when the second electronic device detects the first user operation on the screenshot control 614A, in response to the first user operation, the second electronic device may display the user interface 621 shown in FIG. 14B.
  • the user interface 621 includes: a main window 611, a first sub-window 612, a second sub-window 613, a determination control 622, a mask layer 623A located on the main window 611, a mask layer 623B located on the first sub-window 612, and a mask layer 623C located on the second sub-window 613.
  • the shaded areas in Figure 14B represent the mask layers; a mask layer can cause the screen projection window to appear gray.
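  • One plausible way to render such a mask layer, sketched below with Pillow under the assumption of a 50% gray blend, is to composite a semi-transparent gray layer over the window image:

        from PIL import Image

        def apply_mask_layer(window_image):
            # Blend a semi-transparent gray layer over the window image so that
            # an unselected screen projection window appears gray (cf. 623A-623C).
            gray = Image.new("RGB", window_image.size, (128, 128, 128))
            return Image.blend(window_image.convert("RGB"), gray, alpha=0.5)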
  • the user operation for turning on the screenshot function can also be implemented in other forms, for example, by voice wake-up causing the second electronic device to display the user interface 621 shown in Figure 14B; this is not limited in the embodiments of the present application.
  • a screen projection window with a mask layer indicates that the screenshot state of that window is the unselected state, and a screen projection window without a mask layer indicates that the screenshot state of that window is the selected state (that is, the interface of the screen projection window is displayed in its own colors).
  • the user can change the screenshot state of the target screen projection window to the selected state through the second user operation.
  • the second user operation includes a user operation acting on the main window 611 and a user operation acting on the second sub-window 613; furthermore, when the second electronic device detects the second user operation, in response to the second user operation, the second electronic device may display the user interface 631 shown in FIG. 14C.
  • the user interface 631 includes: a main window 611, a first sub-window 612, a second sub-window 613, a determination control 622, and a mask layer 623B located on the first sub-window 612. It can be understood that the main window 611 and the second sub-window 613 have no mask layer, are in the selected state, and are the target screen projection windows; the first sub-window 612 has the mask layer 623B and is in the unselected state.
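  • A minimal sketch of the selection bookkeeping implied by Figures 14B and 14C is given below; the window identifiers and method names are hypothetical, and the selected flag simply stands in for the absence or presence of a mask layer such as 623A-623C.

        from dataclasses import dataclass, field

        @dataclass
        class ProjectionWindow:
            window_id: str
            selected: bool = False  # False: mask layer shown (unselected state)

        @dataclass
        class ScreenshotSession:
            windows: dict = field(default_factory=dict)

            def begin(self, window_ids):
                # First user operation: enter the screenshot state with every
                # screen projection window masked (unselected).
                self.windows = {wid: ProjectionWindow(wid) for wid in window_ids}

            def toggle(self, window_id):
                # Second user operation: acting on a window flips its state.
                w = self.windows[window_id]
                w.selected = not w.selected

            def target_windows(self):
                # The unmasked windows are the target screen projection windows.
                return [wid for wid, w in self.windows.items() if w.selected]

        # Usage mirroring Figures 14B-14C:
        # s = ScreenshotSession(); s.begin(["main_611", "sub_612", "sub_613"])
        # s.toggle("main_611"); s.toggle("sub_613")
        # s.target_windows() -> ["main_611", "sub_613"]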
  • when the second electronic device detects the third user operation acting on the determination control 622, in response to the third user operation, the screen projection images displayed in the target screen projection windows when the first user operation was received can be determined as the target images, and the target images are synthesized into a screenshot image; furthermore, the second electronic device can display the user interface 641 shown in Figure 14D.
  • User interface 641 may include: preview window 642, edit control 643A, download control 643B, save control 643C, copy control 643D, and undo control 643E.
  • the preview window 642 is used to display the screenshot image; the editing control 643A is used to edit the screenshot image; the download control 643B is used to download the screenshot image to the first electronic device; the save control 643C is used to save the screenshot image to the second electronic device; the copy control 643D is used to copy the screenshot image to the clipboard; and the undo control 643E is used to exit the screenshot function.
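  • To make the division of labor among the controls 643A-643E concrete, a hypothetical dispatch is sketched below; the handler bodies are placeholders standing in for the device behavior described above, and none of the names are taken from the embodiment.

        def on_control(control: str, screenshot: bytes) -> None:
            # Hypothetical dispatch for the controls of user interface 641.
            handlers = {
                "edit": lambda img: print("643A: open the editing interface"),
                "download": lambda img: print("643B: send the image to the first electronic device"),
                "save": lambda img: open("screenshot.png", "wb").write(img),  # 643C
                "copy": lambda img: print("643D: place the image on the clipboard"),
                "undo": lambda img: print("643E: exit the screenshot function"),
            }
            handlers[control](screenshot)

        # e.g. on_control("save", screenshot_bytes) saves the screenshot image
        # locally on the second electronic device.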
  • Embodiments of the present application also provide an electronic device.
  • the electronic device includes one or more processors and one or more memories, wherein the one or more memories are coupled to the one or more processors and are used for storing computer program code, and the computer program code includes computer instructions; when the one or more processors execute the computer instructions, the electronic device is caused to perform the method described in the above embodiments.
  • Embodiments of the present application also provide a computer program product containing instructions; when the computer program product runs on an electronic device, the electronic device is caused to execute the method described in the above embodiments.
  • Embodiments of the present application also provide a computer-readable storage medium, which includes instructions; when the instructions are run on an electronic device, the electronic device is caused to execute the method described in the above embodiments.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
  • the available media may be magnetic media (e.g., a floppy disk, hard disk, or magnetic tape), optical media (e.g., a DVD), or semiconductor media (e.g., a solid state disk), etc.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to a screen capture method, an electronic device, and a system. The method comprises the following steps: a screen-projected device establishes a communication connection with a screen-casting device; the screen-projected device receives screen projection data from the screen-casting device; the screen-projected device displays screen projection images by means of N screen projection windows on the basis of the screen projection data, N being a positive integer not less than 2; and the screen-projected device acquires a screenshot image in response to a user operation, the screenshot image comprising the screen projection images displayed by means of at least two of the N screen projection windows. By implementing the embodiments of the present invention, content displayed in a plurality of screen projection windows can be conveniently acquired during a screen projection process, thereby improving the user experience.
PCT/CN2023/078287 2022-03-10 2023-02-25 Screen capture method, electronic device, and system WO2023169237A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210240559.6A CN116777740A (zh) 2022-03-10 2022-03-10 Screenshot method, electronic device and system
CN202210240559.6 2022-03-10

Publications (1)

Publication Number Publication Date
WO2023169237A1 true WO2023169237A1 (fr) 2023-09-14

Family

ID=87937176

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/078287 WO2023169237A1 (fr) 2022-03-10 2023-02-25 Screen capture method, electronic device, and system

Country Status (2)

Country Link
CN (1) CN116777740A (fr)
WO (1) WO2023169237A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190147026A1 (en) * 2017-05-16 2019-05-16 Apple Inc. Device, Method, and Graphical User Interface for Editing Screenshot Images
CN110908750A (zh) * 2019-10-28 2020-03-24 维沃移动通信有限公司 Screenshot method and electronic device
CN113552986A (zh) * 2020-04-07 2021-10-26 华为技术有限公司 Multi-window screenshot method and apparatus, and terminal device
CN113655976A (zh) * 2021-08-18 2021-11-16 深圳市闪联信息技术有限公司 Multi-source screen projection method and system for an electronic whiteboard
CN113741840A (zh) * 2020-09-10 2021-12-03 华为技术有限公司 Application interface display method in a multi-window screen projection scenario, and electronic device
WO2022001619A1 (fr) * 2020-06-29 2022-01-06 华为技术有限公司 Screenshot method and electronic device

Also Published As

Publication number Publication date
CN116777740A (zh) 2023-09-19

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23765815

Country of ref document: EP

Kind code of ref document: A1