CN116774870A - Screen capturing method and device - Google Patents

Screen capturing method and device

Info

Publication number
CN116774870A
CN116774870A (application number CN202310511819.3A)
Authority
CN
China
Prior art keywords
screen
screen capturing
electronic device
coordinates
capturing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310511819.3A
Other languages
Chinese (zh)
Inventor
秦国昊
杨晓易
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority claimed from CN202310511819.3A
Publication of CN116774870A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — Interaction techniques based on GUIs based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 — Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 — Interaction techniques based on GUIs using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/54 — Interprogram communication

Abstract

An embodiment of the present application provides a screen capturing method applied to a second electronic device, including: establishing a first communication connection with a first electronic device; displaying a collaborative interface of the first electronic device; in response to a first operation by a user, displaying a screen capture area selection interface based on the collaborative interface; and in response to a second operation by the user, transmitting coordinate data representing the partial capture area to the first electronic device. In this way, a partial screen capture of the first electronic device's interface can be performed on the second electronic device, which simplifies the partial-capture flow, matches the logic of multi-screen collaboration, improves the convenience of the screen capturing operation, and provides a better user experience.

Description

Screen capturing method and device
This application is a divisional application of the Chinese patent application No. 202210251369.4, entitled "Screen capturing method and device", filed with the China patent office on March 15, 2022.
Technical Field
The present application relates to the field of communications technologies, and in particular, to a screen capturing method and apparatus.
Background
Screen capture is a common function of electronic devices, and generally refers to saving content being displayed on a screen of an electronic device as an image. For example, a mobile phone screen capture refers to saving content being displayed on a mobile phone screen as an image.
The current screen capturing function needs to be improved.
Disclosure of Invention
The application provides a screen capturing method and device, and aims to solve the problem of how to improve the screen capturing function.
In order to achieve the above object, the present application provides the following technical solutions:
a first aspect of the present application provides a screen capturing method, applied to a first electronic device, the method comprising: establishing a first communication connection with a second electronic device, displaying a collaborative interface and a screen capture control of the second electronic device based on the first communication connection, displaying a screen capture area selection interface based on the collaborative interface in response to a first operation of the screen capture control, and transmitting first screen capture data to the second electronic device through the first communication connection in response to a second operation of the screen capture area selection interface, wherein the first screen capture data comprises a first screen capture type parameter and coordinate data, the first screen capture type parameter represents a local screen capture, and the coordinate data represents an area of the local screen capture. Therefore, the interface of the second electronic device can be subjected to local screen capturing operation on the first electronic device, so that the flow of local screen capturing can be simplified, the interface can be matched with the logic of multi-screen cooperation, the convenience of screen capturing operation is improved, and better user experience is obtained.
In some implementations, before transmitting the first screen capture data to the second electronic device over the first communication connection, the method further comprises: acquiring the coordinates of the partial capture area selected by the second operation, and obtaining the coordinate data as the ratio of those coordinates to the resolution of the first electronic device. Because the resolutions of the first and second electronic devices may differ, normalizing the coordinates against the resolution of the first electronic device makes the area captured by the second electronic device closer to what the user actually intends, i.e., it improves the accuracy of the screen capture result.
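The resolution-based normalization described above can be sketched as follows. This is a minimal illustration assuming rectangular regions in pixel coordinates; the function and parameter names are invented for the example and are not from the patent.

```python
def normalize_region(region, resolution):
    """Convert a pixel-coordinate capture region into resolution-independent
    ratios, so the receiving device can map it onto its own screen.

    region:     (left, top, right, bottom) in pixels on the sending device
    resolution: (width, height) of the sending device's display
    """
    left, top, right, bottom = region
    width, height = resolution
    return (left / width, top / height, right / width, bottom / height)

# A region selected on a 1000x2000 display becomes ratios:
print(normalize_region((100, 400, 300, 700), (1000, 2000)))
# (0.1, 0.2, 0.3, 0.35)
```

The ratios, rather than raw pixel coordinates, are what would be placed in the coordinate data, so the same message remains meaningful regardless of the peer device's resolution.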
In some implementations, acquiring the coordinates of the partial capture area selected by the second operation includes: in response to an area selection operation on the area selection interface, acquiring the coordinates of a first capture area; determining that the first capture area contains an incomplete first object; displaying a second capture area obtained by correcting the first capture area, the second capture area containing the complete first object; and taking the coordinates of the second capture area as the coordinates of the partial capture area. This implements an "automatic fitting" function and further improves the convenience of the screen capturing operation.
In some implementations, taking the coordinates of the second capture area as the coordinates of the partial capture area includes: in response to a user operation selecting the second capture area, taking the coordinates of the second capture area as the coordinates of the partial capture area. Under the "automatic fitting" function, the area containing the complete first object is typically the capture area the user expects; therefore, when the user selects the "automatically fitted" area, its coordinates are used as the coordinates of the partial capture area, which further improves the user experience.
In some implementations, the screen capture control includes a region screen capture control.
In some implementations, the screen capture controls further include a full-screen capture control. The method further comprises: in response to an operation on the full-screen capture control, transmitting second screen capture data to the second electronic device through the first communication connection, the second screen capture data including a second capture-type parameter representing a full-screen capture. Integrating full-screen capture and area capture under the same capture logic provides a new full-screen capture mode and makes the capture operation easier for the user to understand in multi-screen collaboration mode.
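To make the two kinds of screen capture data concrete, here is a hypothetical sketch of the message exchanged over the first communication connection. The field names and type-parameter values are illustrative only; the patent does not specify a wire format.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical capture-type parameter values (not from the patent):
FULL_SCREEN = 0   # second capture-type parameter: full-screen capture
PARTIAL = 1       # first capture-type parameter: partial capture

@dataclass
class ScreenCaptureData:
    capture_type: int
    # Resolution-independent (left, top, right, bottom) ratios;
    # only present for a partial capture.
    coords: Optional[Tuple[float, float, float, float]] = None

full = ScreenCaptureData(FULL_SCREEN)                  # no coordinates needed
part = ScreenCaptureData(PARTIAL, (0.1, 0.2, 0.3, 0.35))
```

A full-screen request carries only the type parameter, while a partial request additionally carries the coordinate data, which matches the first and second screen capture data described above.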
In some implementations, displaying the collaborative interface of the second electronic device and the screen capture control includes: displaying a collaborative interface of the second electronic device, wherein the collaborative interface comprises a display area and a control area, the display area displays the same interface as the second electronic device, and the control area includes the screen capture control. Because the control area does not exist on the second electronic device itself, this improves the feasibility and rationality of the screen capturing operation.
In some implementations, after displaying the collaborative interface of the second electronic device and the screen capture control, the method further includes: in response to an operation on the screen capture control, displaying an area capture control. That is, the area capture control is a next-level control of the screen capture control, so other next-level controls, such as a full-screen capture control, can also be arranged under the screen capture control, thereby integrating multiple capture functions into a single screen capture control.
A second aspect of the present application provides a screen capturing method applied to a second electronic device, the method comprising: establishing a first communication connection with a first electronic device; transmitting display data to the first electronic device based on the first communication connection, the display data being used by the first electronic device to display a collaborative interface of the second electronic device; receiving screen capture data transmitted by the first electronic device through the first communication connection; and, in response to the screen capture data representing a partial capture of the collaborative interface, performing a partial screen capture on the second electronic device based on the coordinate data included in the screen capture data. In this way, a capture of the second electronic device can be triggered by an operation on the first electronic device, which simplifies the partial-capture flow, matches the logic of multi-screen collaboration, improves the convenience of the screen capturing operation, and provides a better user experience.
In some implementations, performing the partial screen capture on the second electronic device based on the coordinate data included in the screen capture data includes: converting the coordinate data into capture coordinates based on the resolution of the second electronic device, and performing the partial capture on the second electronic device using those capture coordinates. Because the resolutions of the first and second electronic devices may differ, mapping the coordinates onto the resolution of the second electronic device makes the captured area closer to what the user actually intends, i.e., it improves the accuracy of the screen capture result.
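The inverse mapping on the receiving side can be sketched similarly. This is again a minimal illustration with invented names, assuming the coordinate data arrives as width/height ratios rather than pixels.

```python
def to_capture_coords(ratios, resolution):
    """Map resolution-independent region ratios back to pixel coordinates
    on the capturing (second) device.

    ratios:     (left, top, right, bottom) as fractions of width/height
    resolution: (width, height) of the capturing device's display
    """
    left, top, right, bottom = ratios
    width, height = resolution
    return (round(left * width), round(top * height),
            round(right * width), round(bottom * height))

# The same logical region lands on a 1440x2560 screen as:
print(to_capture_coords((0.1, 0.2, 0.3, 0.35), (1440, 2560)))
# (144, 512, 432, 896)
```

Rounding to whole pixels is one plausible choice here; a real implementation would also need to clamp the result to the screen bounds.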
In some implementations, before performing the partial capture using the capture coordinates, the method further includes: determining that the capture area represented by the capture coordinates contains an incomplete first object, and correcting the capture coordinates so that the capture area represented by the corrected coordinates contains the complete first object. This implements the "automatic fitting" function and further improves the convenience of the screen capturing operation.
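One way such an "automatic fitting" correction could work is sketched below, assuming the device can obtain bounding boxes of on-screen objects (icons, cards, images). The patent does not specify the algorithm, so the expand-to-cover logic here is purely illustrative.

```python
def auto_fit(region, objects):
    """Expand a capture region so that any object it partially overlaps
    is included in full ('automatic fitting').

    region:  (left, top, right, bottom) of the user-selected area
    objects: bounding boxes of on-screen objects, same coordinate format
    """
    def overlaps(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    left, top, right, bottom = region
    for obj in objects:
        if overlaps(region, obj):  # object touched by the selection
            left, top = min(left, obj[0]), min(top, obj[1])
            right, bottom = max(right, obj[2]), max(bottom, obj[3])
    return (left, top, right, bottom)

# A selection that clips an icon at (80, 80)-(160, 160) grows to include it:
print(auto_fit((100, 100, 400, 300), [(80, 80, 160, 160)]))
# (80, 80, 400, 300)
```

Objects fully inside the selection pass the overlap test too, but expanding by them is a no-op, so only clipped objects actually change the region.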
In some implementations, the method further comprises: in response to the screen capture data representing a full-screen capture, performing a full-screen capture on the second electronic device. Integrating full-screen capture and area capture under the same capture logic provides a new full-screen capture mode and makes the capture operation easier for the user to understand in multi-screen collaboration mode.
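The "same capture logic" for both cases can be sketched as a single dispatch on the capture-type parameter; the dictionary keys and values below are illustrative, not a format defined by the patent.

```python
def handle_capture(data, resolution):
    """On the second device, turn received screen capture data into the
    pixel region to capture: full-screen requests cover the whole display,
    partial requests map ratio coordinates onto this device's resolution."""
    width, height = resolution
    if data["type"] == "full":
        return (0, 0, width, height)
    left, top, right, bottom = data["coords"]
    return (round(left * width), round(top * height),
            round(right * width), round(bottom * height))

print(handle_capture({"type": "full"}, (1440, 2560)))
# (0, 0, 1440, 2560)
```

Keeping both request kinds in one handler is what lets full-screen and area capture share a single code path, as the paragraph above describes.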
In some implementations, the method further comprises: saving the captured image, so that the user can view it on the second electronic device.
In some implementations, the second electronic device includes a Media Store module, and saving the captured image comprises: calling the Media Store module to store the captured image. This ensures compatibility with the second electronic device's own screenshot save location and facilitates subsequent viewing and operations by the user.
A third aspect of the present application provides an electronic device comprising one or more processors and one or more memories. The memories store one or more programs that, when executed by the processors, cause the electronic device to perform the screen capturing method provided in the first or second aspect of the application.
A fourth aspect of the application provides a computer readable storage medium having a computer program stored therein, which when executed by a processor causes the processor to perform the screen capture method provided in the first or second aspect of the application.
A fifth aspect of the application provides a computer program product comprising: computer program code which, when run on an electronic device, causes the electronic device to perform the screen capture method provided in the first or second aspect of the application.
Drawings
FIG. 1a is an exemplary diagram of a multi-screen collaboration scenario;
FIG. 1b is an example of a full screen capture operation in a multi-screen collaborative scene;
FIG. 1c is a diagram of an example of the result of a full screen capture in a multi-screen collaborative scene;
FIG. 2 is a flowchart of a screen capturing method according to an embodiment of the present application;
FIGS. 3a-3e are exemplary diagrams related to screen capturing operations during execution of the screen capturing method disclosed in embodiments of the present application;
FIG. 3f is an exemplary diagram illustrating an effect of implementing partial screen capturing by the screen capturing method disclosed in the embodiment of the present application;
FIG. 3g is a diagram illustrating an example of a screen capture image viewing operation according to the screen capture method disclosed in the embodiment of the present application;
FIG. 4 is a diagram illustrating a structure of an electronic device according to an embodiment of the present application;
FIG. 5 is a diagram illustrating an exemplary software framework of an electronic device implementing the screen capture method disclosed in an embodiment of the present application;
FIGS. 6a and 6b are flowcharts of still another screen capturing method disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. The terminology used in the following examples is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of the application and the appended claims, the singular forms "a," "an," and "the" are intended to include plural forms such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In the embodiments of the present application, "a plurality of" means two or more. It should also be noted that, in the description of the embodiments of the present application, terms such as "first" and "second" are used only to distinguish between descriptions and are not to be understood as indicating or implying relative importance or order.
Fig. 1a is an example of a multi-screen collaboration scenario in which a mobile phone 1 and a tablet computer 2 are in multi-screen collaboration mode: a collaboration window a of the mobile phone 1 is displayed on the tablet computer 2, and the content of the collaboration window a is the same as the content currently displayed on the mobile phone 1. As shown in fig. 1a, the mobile phone 1 currently displays a desktop, and the same desktop is also displayed in the collaboration window a of the tablet computer 2.
The user can operate either in the interface of the mobile phone 1 or in the collaboration window a of the tablet computer 2; whichever device is operated, the collaboration window a and the mobile phone 1 display the same content and effect during the operation.
For example, in the collaborative interface a of the tablet computer 2, a touch operation by the user, such as sliding to the left, causes the current desktop shown in the collaborative interface a to slide left and switch to the next desktop; correspondingly, the desktop displayed on the mobile phone 1 also slides left and switches to the next desktop.
For another example, when the user clicks on an image file in a folder, such as the gallery, in the collaborative interface a of the tablet computer 2, the clicked image file is opened and displayed simultaneously (from the perspective of the user experience) in both the collaborative interface a and the interface of the mobile phone 1.
It will be appreciated that, in addition to the scenario in which the collaboration interface a shown in fig. 1a is the same as the interface currently displayed by the mobile phone, it is also possible in multi-screen collaboration mode to open multiple applications in the collaboration interface a of the tablet computer 2, i.e., the tablet displays the interfaces of multiple applications (which may be understood as multiple collaboration interfaces); in that case, the mobile phone 1 displays the application interface currently operated (e.g., selected) on the tablet computer 2. It is also possible for applications on the mobile phone to be operated on the tablet computer 2 through the collaboration interface while the mobile phone 1 has its screen off. In summary, this embodiment does not limit the interfaces displayed between collaborating devices in the multi-screen collaboration scenario.
Screen capturing is a common operation on electronic devices, and in the multi-screen collaboration mode shown in fig. 1a there is also a need to capture the content displayed on the mobile phone 1. To meet this need, screen captures are typically performed on the mobile phone 1 by means of gestures, physical keys, or virtual keys.
Taking fig. 1b as an example, the user double-taps the screen of the mobile phone 1 with a knuckle to capture the currently displayed interface. Fig. 1c shows the result of the operation in fig. 1b: because the mobile phone 1 and the tablet computer 2 are in multi-screen collaboration mode, the same capture result as on the mobile phone 1 is displayed in the collaboration interface a of the tablet computer 2. That is, after the capture on the mobile phone 1, the capture result shown at B1 is displayed, and the same capture result, shown at B2, is also displayed synchronously in the collaboration interface a.
The inventors found in the course of research that in the multi-screen collaborative mode, screen capturing has the following problems:
1. Only a full-screen capture of the mobile phone interface can be performed; as shown in fig. 1c, the capture result is an image of all the content currently displayed on the screen of the mobile phone 1, and a capture of a local area cannot be performed.
2. Even if the mobile phone's collaborating device, such as the tablet computer 2, has its own screen capture function and can obtain a capture image (of a region such as the collaboration interface), the user must perform a synchronization operation, for example sending the capture image to the mobile phone, before the mobile phone can obtain it. There is therefore room to simplify the flow and improve convenience in collaboration mode.
3. The collaborative interface displayed by a collaborating device such as the tablet computer 2 is typically not a full-screen display. Therefore, even if a partial capture is performed on the collaborative interface using the capture function of the tablet computer 2, the operation is awkward and it may not be easy to select the intended capture area in one attempt.
4. In multi-screen collaboration mode, there is a compatibility problem between captures of the mobile phone's screen performed from the collaborating device and the mobile phone's own screen capture function.
The screen capturing method disclosed in the embodiments of the present application aims to solve the above problems in multi-screen collaboration mode: it enables capture of a local area of the collaborative interface and, in addition, provides a new way to perform a full-screen capture of the collaborative interface.
The screen capturing method disclosed in the embodiments of the present application applies to the following scenario: a first electronic device (e.g., the mobile phone 1 shown in fig. 1a) and a second electronic device (e.g., the tablet computer 2 shown in fig. 1a) are in a sharing mode, where the sharing mode includes, but is not limited to: a multi-screen collaboration mode, a screen-casting mode, a screen-extension mode, and a remote-assistance mode. For convenience of explanation, these sharing modes are collectively referred to below as the multi-screen collaboration mode. The collaboration interface described in the embodiments of the present application may also be referred to as a shared interface, a screen-cast interface, and the like.
The collaborative interface of the first electronic device is displayed on the second electronic device. The collaborative interface may be understood as an interface on the second electronic device that synchronously displays the same content as the first electronic device, such as the collaborative interface a shown in fig. 1a-1c.
As mentioned above, apart from the case in which the collaboration interface a shown in fig. 1a is the same as the interface currently displayed by the mobile phone, the interfaces displayed between collaborating devices may differ, or one device's screen may be off. In summary, this embodiment does not limit the interfaces between collaborating devices in the multi-screen collaboration scenario.
It will be appreciated that the ways of establishing a sharing mode such as multi-screen collaboration include, but are not limited to: the first device approaching the second device and triggering establishment of the multi-screen collaboration mode via NFC; triggering establishment of the multi-screen collaboration mode with the first device via a multi-screen collaboration virtual key on the second device; or establishing the multi-screen collaboration mode through a wired connection between the first electronic device and the second electronic device.
In a sharing mode such as the multi-screen collaboration mode, the communication between the first device and the second device includes, but is not limited to, at least one of the following: 5th-generation mobile communication point-to-point (5G Point-to-Point), second-generation mobile communication (2G) point-to-point, 5G wireless local area network (5G Wireless Local Area Network, WLAN), 2G WLAN, Ethernet (ETH), Bluetooth Low Energy (BLE), wireless fidelity (Wi-Fi), and a wired connection based on Universal Serial Bus (USB).
Fig. 2 is a flowchart of a screen capturing method according to an embodiment of the present application, again taking the mobile phone 1 shown in fig. 1a-1c as the first electronic device and the tablet computer 2 shown in fig. 1a-1c as the second electronic device.
The following steps are included in fig. 2:
s1, establishing multi-screen cooperative connection between a mobile phone 1 and a tablet personal computer 2.
The triggering of the multi-screen collaborative connection is as described above and is not repeated here. Establishing a multi-screen collaborative connection may be understood as establishing a link in multi-screen collaboration mode; the connection types (i.e., link types) include Bluetooth and the others described above, which are likewise not repeated here.
It can be appreciated that after the multi-screen cooperative connection is established between the mobile phone 1 and the tablet computer 2, an example of the display of the mobile phone 1 and the tablet computer 2 can be seen in fig. 1 a.
S2. The tablet computer 2 displays a screen capture control on the collaborative interface.
The collaboration interface is an interface that, between a first electronic device (such as the mobile phone 1 shown in fig. 1a-1c) and a second electronic device (such as the tablet computer 2 shown in fig. 1a-1c) that have established the multi-screen collaboration mode, synchronously displays on one electronic device (such as the tablet computer 2) the same content as the other electronic device (such as the mobile phone 1). An example of a collaboration interface is the collaboration interface a in fig. 1a-1c.
The screen capture control is a control provided in this embodiment for performing a full-screen or partial-area capture of the collaborative interface in multi-screen collaboration mode. Types of screen capture controls include, but are not limited to, virtual keys, physical keys, and gestures. There may be multiple screen capture controls, for example one for partial-area capture and one for full-screen capture. The multiple screen capture controls may be of different types, in which case the number of displayed controls may be one or more. The screen capture controls may be displayed side by side or as a hierarchy of menus.
In the case that the screen capturing control is a virtual key, the screen capturing control can be displayed on the collaborative interface or can be displayed outside the collaborative interface, and the display style is not limited. The screen capture control is illustrated below in connection with fig. 1 a-1 c:
Further examining the collaborative interface a illustrated in fig. 1a-1c, as shown in fig. 3a, the collaborative interface a includes two parts: one part, A1, is the same as the interface currently displayed by the mobile phone 1 (it may be referred to as the display area); the other part, A2, consists of virtual key areas at the top and bottom (it may be referred to as the control area), in which the title "multi-screen collaboration" and virtual keys are displayed. The virtual keys are used to control the A1 area.
In this embodiment, besides the original content in the collaborative interface, a screen capturing control is also displayed in the collaborative interface. Taking fig. 3a as an example, a screen capture control 201 is also displayed in the collaborative interface a.
It can be appreciated that fig. 3a takes as an example the screen capture control 201 displayed in the virtual key area at the top of the collaborative interface a; alternatively, the screen capture control 201 may be displayed in other areas of the collaborative interface a, such as the bottom virtual key area, or outside the collaborative interface. This embodiment does not limit the display position of the screen capture control.
As described above, the purpose of this embodiment is to achieve screen capturing of a local area of the collaborative interface while also supporting full-screen capturing of the collaborative interface. For this purpose, in some implementations, next-level controls are set for the screen capture control 201. Taking fig. 3b as an example, after the user clicks the screen capture control 201, two next-level controls are displayed: the full screen capture 2011 and the region screen capture 2012. The full screen capture 2011 is used to capture the entire collaborative interface a, and the region screen capture 2012 is used to capture a local area of the collaborative interface a.
It is to be understood that the patterns of the screen capture control and the patterns of the next level of control of the screen capture control described herein are by way of example only and not by way of limitation.
In other implementations, unlike the example shown in fig. 3b, the screen capture control 201 is not provided with next-level controls; that is, the screen capture control 201 itself is used to capture a local region of the collaborative interface a. In this case, full-screen capturing of the collaborative interface a may be implemented in the manner shown in fig. 1b, and the like, which is not described herein.
When the user clicks on the area screen capture 2012 or the full screen capture 2011, the following steps are triggered:
S3, the tablet computer 2 receives an instruction triggered by the user clicking the screen capturing control 201.
In some implementations, the instruction includes, but is not limited to, information about the screen capture control clicked by the user, such as the identification of the region screen capture 2012 or the full screen capture 2011. Namely: if the user clicks the full screen capture 2011, the instruction includes the identification of the full screen capture 2011; if the user clicks the region screen capture 2012, the instruction includes the identification of the region screen capture 2012.
S4, the tablet computer 2 judges whether the instruction indicates partial screen capturing; if yes, S5-S8 are executed, and if not, S9-S10 are executed.
In some implementations, based on the identification in the instruction, it is determined whether the instruction indicates a partial screen capture.
S5, the tablet computer 2 sets the screen capturing type parameter to 1.
The screen capturing type parameter is used to indicate full screen capturing or partial screen capturing; in this embodiment, partial screen capturing is denoted by 1. It will be understood that the specific value of the screen capturing type parameter may be set as required, and the value in this step is merely an example and not a limitation.
S6, the tablet personal computer 2 obtains coordinates of the vertex of the screen capturing area based on the area selection operation of the user.
It will be appreciated that after the user clicks the region screen capture 2012, the tablet computer 2 acquires the screen capturing region based on the user's operation. In some implementations, a mask layer is displayed in the collaborative interface A to guide the user to select the screen capturing region within it. Referring to fig. 3C, a gray mask layer C is displayed on the A1 region of the collaborative interface, and the user may select a screen capturing region on the mask layer C by touch or the like. Further, after the mask layer C is displayed, a prompt message "please select area" may be displayed to further guide the user to select the screen capturing region, so as to further optimize the user experience. The style of the mask layer in fig. 3C, such as its color, is not limited; for example, the mask layer may have a frame, with the area surrounded by the frame being transparent.
It should be noted that, since the objective of this embodiment is to screen-capture the A1 region of the collaborative interface, the user may select a screen capturing region only in A1, and not in A2. As shown in fig. 3C, the mask layer C covers A1 but does not cover A2.
It will be appreciated that the mask layer C may also cover A2, i.e., the user may be allowed to capture the A2 area. In this case, since the A2 area has no corresponding area on the mobile phone 1, when the screen capturing region selected by the user includes all or part of A2, the tablet computer 2 may perform the screen capture itself and transmit the screen capturing image to the mobile phone 1.
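As an illustrative sketch of restricting the selection to the display area, the tablet might clamp the user-selected rectangle to the bounds of A1. The class and method names below are assumptions for illustration only, not part of the embodiment; rectangles are given as {left, top, right, bottom}:

```java
// Illustrative sketch: clamp a user-selected rectangle to the display
// area A1 so that no part of the control area A2 is included.
// All names are assumptions; rectangles are {left, top, right, bottom}.
public class RegionClamp {
    public static int[] clampToDisplay(int[] selection, int[] a1) {
        return new int[] {
            Math.max(selection[0], a1[0]),  // left edge no further left than A1
            Math.max(selection[1], a1[1]),  // top edge no higher than A1
            Math.min(selection[2], a1[2]),  // right edge no further right than A1
            Math.min(selection[3], a1[3])   // bottom edge no lower than A1
        };
    }
}
```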
It will be appreciated that after the user selects the screen capturing region, the tablet computer 2 obtains the coordinates of the vertices of that region. To simplify subsequent computation, in some implementations the screen capturing region may be defined as a rectangle. Taking fig. 3D as an example, the user may select only a rectangular area in the mask layer C shown in fig. 3C. Assume the rectangular area selected by the user is the area D shown in fig. 3D. In one example, in fig. 3D, the upper left corner of A1 in the collaborative interface a is taken as the origin; in this case, the vertices include two vertices of the rectangular area D, for example the upper left vertex (x1, y1) and the lower right vertex (x2, y2) of the rectangular area D as shown in fig. 3D.
In other implementations, the screen capture area is not limited to a rectangle, but may be any shape, in which case the vertices may be determined by the shape of the user-selected area. In this embodiment, the shape, the selection method, the vertex position of the screen capturing area where coordinates are acquired, and the like of the screen capturing area are not limited.
It will be appreciated that the user may not select the screen capturing region precisely. For example, assume the user expects to select the three icons in the rectangular area D shown in fig. 3D, but for some reason the screen capturing region determined by the tablet computer 2 after the user's operation is the area D1 shown in fig. 3e, i.e., the icon 202 is not fully selected. In this case, if the user wants to capture the desired local area D, the user has to select again on the mask layer C until the three icons in area D are selected.
In order to further enhance the user experience, in this embodiment, a specific implementation manner of S6 is (not shown in fig. 2):
61. After the screen capturing region selected by the user is obtained, it is determined whether the region includes an incomplete icon; if yes, steps 62-64 are performed, and if not, the vertices of the user-selected region, such as (x1, y1) and (x2, y2) shown in fig. 3d, are obtained.
As described above, taking the screen capturing region selected by the user as the area D1 in fig. 3e as an example, it is determined that area D1 includes part of the icon 202 but not all of it; that is, area D1 includes an incomplete icon.
In some implementations, whether an incomplete icon is included in the screenshot region is determined based on the coordinates of the icon and the coordinates of the vertices of the screenshot region (e.g., region D1) selected by the user.
It will be appreciated that the icons are just one example, and that "icons" may be replaced with the time components "10:09" shown in FIG. 3d, etc., i.e., "icons", "components", "content", etc. may be summarized as: any objects displayed in the collaboration interface.
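The judgment in step 61 can be sketched as a rectangle-overlap test: an object is "incomplete" when the screen capturing region overlaps its bounding box without fully containing it. The following is illustrative only; all class and method names are assumptions, and rectangles are {x1, y1, x2, y2} with the upper left vertex first:

```java
// Illustrative sketch of the incomplete-object judgment in step 61.
// Rectangles are {x1, y1, x2, y2}: upper left then lower right vertex.
public class CaptureCheck {
    // True when "outer" fully contains "inner".
    public static boolean contains(int[] outer, int[] inner) {
        return outer[0] <= inner[0] && outer[1] <= inner[1]
            && outer[2] >= inner[2] && outer[3] >= inner[3];
    }

    // True when the two rectangles overlap in both dimensions.
    public static boolean intersects(int[] a, int[] b) {
        return a[0] < b[2] && b[0] < a[2] && a[1] < b[3] && b[1] < a[3];
    }

    // An object is "incomplete" if the capture region cuts through its
    // bounding box: they overlap, but the object is not fully inside.
    public static boolean isIncomplete(int[] capture, int[] object) {
        return intersects(capture, object) && !contains(capture, object);
    }
}
```

The same test applies unchanged to icons, components, or any other displayed object with a known bounding box.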
62. The screen capture area is modified such that the modified screen capture area includes the complete icon therein.
Take the example of the corrected screen capture area as area D shown in fig. 3D.
In some implementations, the correction is based on coordinates of vertices of the screen capture area (e.g., area D1) selected by the user and coordinates of the icon such that the icon is included in the corrected screen capture area.
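One possible sketch of the correction in step 62, assuming each displayed object has a known bounding box: grow the selected rectangle to the union with every object it partially covers. The names are illustrative, not part of the embodiment:

```java
// Illustrative sketch of step 62 ("automatic fitting"): expand the
// user-selected capture rectangle so every partially covered object
// becomes fully included. Rectangles are {x1, y1, x2, y2}.
public class CaptureCorrection {
    public static int[] correct(int[] capture, int[][] objects) {
        int[] r = capture.clone();
        for (int[] obj : objects) {
            boolean overlaps = r[0] < obj[2] && obj[0] < r[2]
                            && r[1] < obj[3] && obj[1] < r[3];
            if (overlaps) {  // grow r to the union of r and this object
                r[0] = Math.min(r[0], obj[0]);
                r[1] = Math.min(r[1], obj[1]);
                r[2] = Math.max(r[2], obj[2]);
                r[3] = Math.max(r[3], obj[3]);
            }
        }
        return r;
    }
}
```

A single pass is shown; since growing the rectangle could bring further objects into partial overlap, an implementation might repeat the pass until the region stabilizes.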
63. Based on the user's operation, one region is selected as the target region from the screen capturing region selected by the user (e.g., region D1) and the corrected screen capturing region (e.g., region D).
In some implementations, after the screen capturing region is corrected, query information is displayed asking the user whether to use the corrected region. If the user chooses the corrected region, the corrected region (e.g., area D) is taken as the target region; otherwise, the region originally selected by the user (e.g., area D1) is taken as the target region. In either case, the target region follows the user's operation.
In other implementations, instead of displaying the query information, the screen capturing area (e.g., the area D1) selected by the user operation and the corrected screen capturing area (e.g., the area D) may be displayed together on the mask layer, and the user selects between the two areas by touch, for example, selects the area D1 between the two areas, so that the area D1 selected by the user serves as the target area.
64. The coordinates of the vertices of the target region are taken as the coordinates of the vertices of the acquired screen capturing region.
The above manner of correcting the screen capturing region may be simply referred to as "automatic fitting".
It can be appreciated that another specific implementation of S6 is: the coordinates of the vertices of the screen capturing area (e.g., area D1) selected by the user operation are directly acquired without performing "automatic fitting".
S7, the tablet personal computer 2 converts the coordinates of the vertex into relative coordinates based on the resolution of the tablet personal computer 2.
Because the resolution of the tablet pc 2 may be different from the resolution of the mobile phone 1, in order to achieve the accuracy of the screen capturing result after the screen capturing on the mobile phone 1, in this step, the coordinates need to be converted.
In some implementations, assume the resolution of the tablet computer 2 is width × height. Again taking the vertex coordinates (x1, y1) and (x2, y2) shown in fig. 3d as an example, the converted relative coordinates are:
(x1/width, y1/height) and (x2/width, y2/height).
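The conversion of S7 can be sketched as follows (class and method names are illustrative assumptions): each x coordinate is divided by the tablet's width and each y coordinate by its height, yielding resolution-independent values in the range 0 to 1.

```java
// Illustrative sketch of S7: normalize vertex coordinates by the
// tablet's resolution so the phone can map them to its own resolution.
// Input is a flat array {x1, y1, x2, y2, ...} of pixel coordinates.
public class RelativeCoords {
    public static double[] toRelative(int[] vertices, int width, int height) {
        double[] rel = new double[vertices.length];
        for (int i = 0; i < vertices.length; i += 2) {
            rel[i] = (double) vertices[i] / width;          // x / width
            rel[i + 1] = (double) vertices[i + 1] / height; // y / height
        }
        return rel;
    }
}
```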
It should be understood that the execution sequence between S5 and S6-S7 shown in fig. 2 is merely an example, and S5 and S7 may be executed in parallel, or S6-S7 may be executed first and S5 may be executed second, which is not limited herein.
S8, the tablet computer 2 packages the screen capturing type parameter and the relative coordinates into a Byte object.
It is understood that the specific format and encapsulation of Byte objects can be found in the data encapsulation protocol.
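Since the specific format is left to the data encapsulation protocol, the following is only one assumed layout for illustration: a one-byte screen capturing type flag followed by the relative coordinates as eight-byte doubles. The parsing side (used later in S12) is included for symmetry; all names are assumptions, not the protocol itself.

```java
import java.nio.ByteBuffer;

// Illustrative sketch of S8/S12 under an ASSUMED wire layout:
// [1 byte: capture type][8 bytes per double: relative coordinates].
public class CapturePacket {
    public static byte[] pack(byte type, double[] rel) {
        ByteBuffer buf = ByteBuffer.allocate(1 + 8 * rel.length);
        buf.put(type);
        for (double v : rel) buf.putDouble(v);
        return buf.array();
    }

    // Parsing (S12): recover the type flag...
    public static byte type(byte[] data) {
        return data[0];
    }

    // ...and, for partial capture, the relative coordinates.
    public static double[] coords(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data, 1, data.length - 1);
        double[] rel = new double[(data.length - 1) / 8];
        for (int i = 0; i < rel.length; i++) rel[i] = buf.getDouble();
        return rel;
    }
}
```

For a full screen capture (S10), the same packer can be called with an empty coordinate array, producing a one-byte packet carrying only the type flag.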
S9, the tablet computer 2 sets the screen capturing type parameter to -1.
In this embodiment, full screen capturing is denoted by -1. It will be understood that the specific value of the screen capturing type parameter may be set as required, and the value in this step is merely an example and not a limitation.
S10, the tablet personal computer 2 packages the screen capturing type parameter into a Byte object.
The specific data format of the Byte object may refer to the data protocol and will not be described in detail herein. It will be appreciated that the data may also be packaged in other formats, which is not limited herein.
S11, the tablet personal computer 2 transmits the Byte object to the mobile phone 1.
It can be understood that the data is transmitted between the tablet pc 2 and the mobile phone 1 through the link established in S1.
S12, the mobile phone 1 analyzes the Byte object to obtain an analysis result.
It will be appreciated that the parsing result may be the screen capturing type parameter -1, or the screen capturing type parameter 1 together with the relative coordinates. If the screen capturing type parameter in the parsing result is 1, S13 is executed; if it is -1, S14 is executed.
S13, the mobile phone 1 converts the relative coordinates into screen capturing coordinates based on the resolution of the mobile phone 1.
As described above, the relative coordinates transmitted by the tablet computer 2 are (x1/width, y1/height) and (x2/width, y2/height). Assume the resolution of the mobile phone 1 is sourcewidth × sourceheight; the screen capturing coordinates obtained by conversion in this step are (x1/width × sourcewidth, y1/height × sourceheight) and (x2/width × sourcewidth, y2/height × sourceheight).
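The conversion of S13 can be sketched as the inverse of S7: each relative coordinate is multiplied by the corresponding dimension of the mobile phone's resolution. The names sourceWidth and sourceHeight follow the description above; rounding to the nearest pixel is an assumption of this sketch.

```java
// Illustrative sketch of S13: scale relative coordinates by the phone's
// resolution to obtain pixel coordinates for the screen capture.
// Input is a flat array {x1/width, y1/height, x2/width, y2/height}.
public class CaptureCoords {
    public static int[] toScreenCoords(double[] rel, int sourceWidth, int sourceHeight) {
        int[] px = new int[rel.length];
        for (int i = 0; i < rel.length; i += 2) {
            px[i] = (int) Math.round(rel[i] * sourceWidth);          // x
            px[i + 1] = (int) Math.round(rel[i + 1] * sourceHeight); // y
        }
        return px;
    }
}
```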
It will be appreciated that after S13, the mobile phone 1 may also perform the following "automatic fitting" procedure (not shown in fig. 2):
131. Based on the screen capturing coordinates and the coordinates of each object currently displayed by the mobile phone 1, it is determined whether the screen capturing region represented by the screen capturing coordinates includes an incomplete object; if so, steps 132-134 are executed, and if not, S15 is executed.
An object refers to any object displayed in a collaboration interface, such as an "icon", "component", "content", or the like. An example of including incomplete objects in the screenshot area can be seen in area D1 of fig. 3 e.
132. The screen capture coordinates are modified such that the screen capture area represented by the modified screen capture coordinates includes the complete object.
The corrected screen capture area represented by the corrected screen capture coordinates can be seen in area D shown in fig. 3D.
In some implementations, the correction is performed based on the coordinates of the object and the screen capturing coordinates.
133. Based on the user's operation, one region is selected as the target region from the screen capturing region (e.g., region D1) represented by the converted screen capturing coordinates and the screen capturing region (e.g., region D) represented by the corrected screen capturing coordinates.
In some implementations, after the corrected screen capturing coordinates are obtained, the screen capturing region represented by the corrected coordinates (e.g., area D) and the screen capturing region represented by the converted (i.e., pre-correction) coordinates (e.g., area D1) are displayed together, and the user selects between the two regions by touch; for example, if the user selects area D1, area D1 serves as the target region.
It can be understood that, because the mobile phone 1 and the tablet computer 2 are in multi-screen cooperative connection, the display of the mobile phone 1 and the tablet computer 2 are completely synchronous and identical, so that the user can select on the interface of the mobile phone 1 or on the cooperative interface of the tablet computer 2.
134. The coordinates of the vertices of the target region are taken as the screen capturing coordinates.
It is understood that 131-134 are optional steps.
S14, the mobile phone 1 takes the coordinates of the vertices of its display interface as the screen capturing coordinates.
The display interface of the mobile phone 1 is an interface displayed in full screen in the mobile phone 1. In some implementations, the vertices of the screen of the mobile phone 1 are an upper left vertex and a lower right vertex to reduce the subsequent calculation amount.
After S13 or S14, S15 is performed.
S15, the mobile phone 1 uses the screen capturing coordinates to capture a screen, and a screen capturing image is obtained.
It can be appreciated that the mobile phone 1 can transmit the screen capturing coordinates to an interface of a screen capturing Application (APP) in the mobile phone 1 to invoke the screen capturing application to capture a screen based on the screen capturing coordinates, so as to ensure compatibility with an original screen capturing function of the mobile phone.
To keep consistent with the user experience of multi-screen collaboration, in some implementations, after the mobile phone 1 performs the screen capturing operation to obtain a screen capturing image, the screen capturing effect is displayed synchronously on the mobile phone 1 and the collaborative interface a. Taking fig. 3f as an example, after the user selects area D in the mask layer shown in fig. 3D and the mobile phone 1 performs S15, the screen capturing results E1 and E2 are displayed synchronously on the mobile phone 1 and the collaborative interface a.
As can be seen by comparing fig. 3f with fig. 1c, the present embodiment achieves local screen capturing of a mobile phone interface in a multi-screen collaboration mode, and the local screen capturing experience and the user experience of the multi-screen collaboration mode can maintain high consistency.
S16, the mobile phone 1 stores the screen capturing image.
It will be appreciated that the screen capturing result for the full screen or a local area of the mobile phone interface has now been stored in the mobile phone 1. In this case, as shown in fig. 3g, the user may view the screen capturing result on the mobile phone, for example from the gallery F1 of the mobile phone 1, and may also view it in the collaborative interface a, for example by starting the gallery F2 from the collaborative interface a and viewing the screen capturing result in the gallery F2.
It will be appreciated that in accordance with the logic of the multi-screen collaboration mode, because the screen capture step is performed by the handset 1, in some implementations the screen capture image is not saved in the tablet 2. In other implementations, however, the screen capture image may also be stored in the tablet 2. The present embodiment is not limited.
As can be seen from the flow shown in fig. 2, in the multi-screen collaboration mode, a local area of the mobile phone can be captured through operations on the device side, such as the tablet computer. Furthermore, full-screen capturing of the mobile phone can also be realized in a new manner.
It should be emphasized that, besides realizing the screen capturing of the local area of the mobile phone in the multi-screen collaboration mode, more importantly, the screen capturing method in the embodiment optimizes the experience of the user in the multi-screen collaboration scene for the following reasons:
The advantage of the multi-screen collaboration scenario is that the first device can be operated on the second device, so that operations on the first device and the second device are integrated on the second device, providing a better operating experience for the user.
Taking fig. 1a as an example, the user can operate the mobile phone on the tablet computer 2 through the collaborative interface a while also operating the tablet computer 2 itself. Previously, if the user wanted to capture content displayed on the mobile phone 1, the screen capturing operation had to be transferred from the interface of the tablet computer 2 to the mobile phone 1, as shown in fig. 1b. With this embodiment, however, the screen capturing operation can be performed directly on the tablet computer 2, capturing the full screen or a partial area of A1 in the collaborative interface, as exemplified by figs. 3b-3f. That is, the full-screen or partial screen capturing of the mobile phone interface is also transferred to the collaborative device of the mobile phone 1, so the user operating the mobile phone from the device side does not need to return to the mobile phone because of a screen capturing requirement, and the screen capturing result of the mobile phone interface can still be stored on the mobile phone through a screen capturing operation on the device side, thereby optimizing the user experience.
The screen capturing method shown in fig. 2 will be described in more detail below with respect to the hardware and software structures of the first electronic device (e.g. the mobile phone 1) and the second electronic device (e.g. the tablet computer 2) described in the above embodiments.
The first electronic device and the second electronic device may be tablet computers, PCs, ultra-mobile personal computer (UMPC), vehicle-mounted devices, netbooks, personal digital assistants (personal digital assistant, PDA) and other electronic devices with various near field communication functions, and the embodiment of the present application does not limit the specific types of the electronic devices.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Taking the example of the electronic device being a cell phone 1, the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identity module (subscriber identification module, SIM) card interface 195, and the like.
The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The wireless communication function of the electronic device may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied on an electronic device. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. for application on an electronic device. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, the antenna 1 and the mobile communication module 150 of the electronic device are coupled, and the antenna 2 and the wireless communication module 160 are coupled, so that the electronic device can communicate with the network and other devices through wireless communication technology. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
It should be understood that, apart from the various components or modules illustrated in fig. 4, the structure of the electronic device is not specifically limited in the embodiment of the present application. In other embodiments of the application, the electronic device may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The operating system runs on the electronic device shown in fig. 4, and the operating system may employ a layered architecture, an event driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
Taking a layered architecture as an example, the layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime (Android runtime) and system libraries, and the kernel layer.
The application layer may include a series of application packages. In the embodiment of the present application, the application package related to the mobile phone 1 and the tablet computer 2 is shown in fig. 5, and specific functions will be described in the flow shown in fig. 6a and 6 b.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. Taking the handset 1 as an example, the application framework layer includes a Media Store (Media Store) module.
The Android runtime includes a core library and a virtual machine, and is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
It can be understood that the modules in the above layers in the Android system can refer to the description of the Android system, and are not described herein. Although the embodiment of the application is described by taking an Android system as an example, the basic principle of the embodiment of the application is also applicable to electronic equipment based on an iOS or Windows and other operating systems.
In the embodiment of the application, taking the mobile phone 1 and the tablet personal computer 2 for multi-screen cooperative connection as an example, the mobile phone 1 and the tablet personal computer 2 both operate an android operating system with a layered architecture. It will be appreciated that the operating systems of the electronic devices performing near field communication may be the same or different, for example, the mobile phone 1 establishes a cooperative connection with a notebook computer, the mobile phone 1 runs an android operating system, and the notebook computer runs a Windows operating system.
Fig. 5 is an example of the connection of software modules of the mobile phone 1 and the tablet computer 2 in the multi-screen collaboration mode. Based on fig. 5, the process by which the mobile phone 1 and the tablet computer 2 capture the interface of the mobile phone 1 is shown in figs. 6a and 6b; fig. 6a includes the following steps:
S601, a multi-screen cooperative connection is established between the multi-screen cooperative connection module 14 of the mobile phone 1 and the multi-screen cooperative connection module 24 of the tablet computer 2.
In some implementations, after the multi-screen cooperative connection module 14 detects that the mobile phone 1 is connected to other electronic devices, such as the tablet computer 2, by bluetooth or NFC, the multi-screen cooperative connection module 14 finally establishes a multi-screen cooperative connection through data interaction with the multi-screen cooperative connection module 24. After the multi-screen cooperative connection is established between the mobile phone 1 and the tablet computer 2, a display example of the mobile phone 1 and the tablet computer 2 can be shown in fig. 1 a.
The types of multi-screen cooperative connections are as described above and include, but are not limited to, BLE, 5th generation mobile communication Point-to-Point (5G Point to Point), 5th generation mobile communication wireless local area network (5G Wireless Local Area Network, WLAN), etc.
The triggering manner of S601 and the type of the established connection link can be referred to S1, and will not be described herein.
S602, the display module 21 of the tablet computer 2 displays a screen capturing control on the collaborative interface.
Types of screen capture controls include, but are not limited to, virtual buttons, physical buttons, gestures, or the like. In this embodiment, the style, number, and display position of the screen capturing control are not limited.
The specific implementation manner of the collaboration interface and the screen capturing control may refer to S2 and fig. 3b, which are not described herein.
It will be appreciated that after displaying the collaboration interface a as illustrated in fig. 3b on the tablet computer 2, the user clicks the screen capture control 201 in the collaboration interface a, triggering the following steps:
S603, the type parameter acquisition module 221 of the tablet computer 2 receives an instruction triggered by clicking the screen capturing control.
It will be appreciated that the instruction may include information such as, but not limited to, the type of the screen capturing control that the user clicks on; the type may be an identifier of the control, etc., and is not limited herein.
S604, the type parameter acquisition module 221 of the tablet computer 2 determines whether the instruction indicates a partial screen capture; if yes, S605-S608 are executed, and if no, S609 is executed.
In some implementations, the basis for the determination is an identifier carried in the instruction.
The implementation of S604 may refer to S4, and will not be described herein.
S605, the type parameter acquiring module 221 sets the screenshot type parameter to 1.
The screen capturing type parameter is used to indicate a full screen capture or a partial screen capture. In this embodiment, a partial screen capture is denoted by 1, but it will be understood that the specific value of the screen capturing type parameter may be set as required; the value in this step is merely an example and not a limitation.
S606, the type parameter obtaining module 221 transmits a relative coordinate obtaining instruction to the relative coordinate calculating module 222 of the tablet computer 2. The relative coordinate acquisition instruction functions to trigger S607.
S607, the relative coordinate calculating module 222 obtains the coordinates of the vertices of the screen capturing area determined by the user's screen capturing operation.
In some implementations, the screen capturing area may be the area in the collaborative interface corresponding to the same area in the mobile phone interface, or may be an area obtained by automatic snapping (the "auto-attach" procedure) based on the user's area selection operation. For a specific implementation of S607, reference may be made to S6, which is not described herein.
S608, the relative coordinate calculation module 222 converts the coordinates into relative coordinates based on the resolution of the tablet computer 2.
The purpose of the coordinate conversion is to mask the resolution difference between the tablet computer 2 and the mobile phone 1, so as to improve the accuracy of the screen capturing area on the mobile phone 1. It can be appreciated that the coordinates are converted based on the resolution of the tablet computer 2; for a specific conversion manner, reference may be made to, but is not limited to, S7, which is not repeated here.
After S608, S610 is performed.
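The conversion in S608 can be sketched as dividing the vertex coordinates of the selected area by the tablet's own resolution, yielding resolution-independent relative coordinates in [0, 1]. The class and method names below are hypothetical illustrations, not part of the disclosed modules:

```java
// Hypothetical sketch of the tablet-side conversion in S608: normalize a
// pixel coordinate by the tablet's resolution so the result no longer
// depends on the tablet's screen size.
public class RelativeCoords {
    // Returns { x / width, y / height }, each in the range [0, 1].
    public static double[] toRelative(int x, int y, int tabletWidth, int tabletHeight) {
        return new double[] { (double) x / tabletWidth, (double) y / tabletHeight };
    }

    public static void main(String[] args) {
        // e.g. vertex (800, 600) on a 1600x1200 tablet
        double[] rel = toRelative(800, 600, 1600, 1200);
        System.out.println(rel[0] + ", " + rel[1]);
    }
}
```

The receiving device can later multiply these relative values by its own resolution, which is what masks the resolution difference between the two devices.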
S609, the type parameter acquisition module 221 sets the screen capturing type parameter to-1.
In the present embodiment, a full screen capture is indicated by-1, but it will be understood that the specific values of the capture type parameters may be set as required, and the values of this step are merely exemplary and not limiting.
S610, the screen capturing parameter packaging module 223 of the tablet computer 2 receives the screen capturing type parameter -1, or the screen capturing type parameter 1 and the relative coordinates.
It will be appreciated that if S605-S608 are performed, then the screen capturing type parameter 1 and the relative coordinates are received in this step, and if S609 is performed, then the screen capturing type parameter -1 is received in this step.
S611, the screen capturing parameter packaging module 223 packages the received parameters into a Byte object.
It will be understood that if the screenshot type parameter 1 and the relative coordinates are received in S610, the screenshot type parameter 1 and the relative coordinates are encapsulated as a Byte object in this step, and if the screenshot type parameter-1 is received in S610, the screenshot type parameter-1 is encapsulated as a Byte object in this step.
The Byte object is just one example of a parameter encapsulation, and other data encapsulation formats may be used to encapsulate the received parameters.
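One way to picture the encapsulation in S611 is packing the type parameter and the optional relative coordinates into a flat byte array. The layout below (one int followed by zero or more doubles) and the class name are assumptions for illustration; the patent does not specify the Byte object's internal format:

```java
import java.nio.ByteBuffer;

// Hypothetical packing of the screen capturing parameters into bytes:
// 4 bytes for the type parameter (1 = partial, -1 = full), then 8 bytes
// per relative coordinate for a partial capture.
public class ScreenshotParams {
    public static byte[] pack(int type, double[] relCoords) {
        int size = Integer.BYTES + (relCoords == null ? 0 : relCoords.length * Double.BYTES);
        ByteBuffer buf = ByteBuffer.allocate(size);
        buf.putInt(type);
        if (relCoords != null) {
            for (double c : relCoords) buf.putDouble(c);
        }
        return buf.array();
    }
}
```

A full screen capture would then be a 4-byte payload, while a partial capture carrying four vertex coordinates would be 36 bytes.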
S612, the Socket Server Connect module 231 of the tablet computer 2 responds to the instruction and establishes a Socket connection with the Socket Server Connect module 131 of the mobile phone 1.
It will be appreciated that in one implementation, the instruction described in this step is the instruction triggered by clicking the screen capturing control in S603; that instruction has already been transmitted to the Socket Server Connect module 231 in S603, so the execution order of S612 and S604-S611 is not limited.
In another implementation, the instruction described in this step is an instruction triggered by any of steps S604-S611, which is not limited herein.
It will be appreciated that after the Socket connection is established between the Socket Server Connect module 131 and the Socket Server Connect module 231, data may be transferred between the Socket Server Send module 232 and the Socket Server Send module 132.
The Socket connection established by the Socket Server Connect module 131 and the Socket Server Connect module 231 may be, but is not limited to, 5th generation mobile communication Point-to-Point (5G Point to Point), 5th generation mobile communication wireless local area network (5G Wireless Local Area Network, WLAN), ethernet (ETH), bluetooth low energy (Bluetooth Low Energy, BLE), wireless fidelity (Wireless Fidelity, WiFi), or a wired connection based on a universal serial bus (Universal Serial Bus, USB), as described above.
It will be appreciated that the type of Socket connection established may be the same or different from the type of cooperative connection.
S613, the Socket Server Send module 232 of the tablet computer 2 sends the Byte object to the Socket Server Send module 132 of the mobile phone 1.
In some implementations, the Socket Server Send module 232 sends the Byte object to the Socket Server Send module 132 in response to receiving the Byte object encapsulated by the screen capturing parameter packaging module 223. The type of Socket link used between the Socket Server Send module 232 and the Socket Server Send module 132 may be as described in the previous embodiments. It is understood that the Socket Server Send module 232 and the Socket Server Send module 132 transfer the Byte object based on the Socket connection established between them in S612.
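The transfer in S612-S613 can be sketched with plain Java sockets. The length-prefixed framing and the loopback round-trip helper below are illustrative assumptions, not the patent's actual protocol:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical sketch of sending a packed byte payload over a Socket
// connection: the sender writes a 4-byte length prefix followed by the
// payload; the receiver reads the prefix, then the payload.
public class SocketTransferDemo {
    public static byte[] roundTrip(byte[] payload) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // port 0: any free port
            Thread sender = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
                     DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                    out.writeInt(payload.length);
                    out.write(payload);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            sender.start();
            try (Socket s = server.accept();
                 DataInputStream in = new DataInputStream(s.getInputStream())) {
                byte[] received = new byte[in.readInt()];
                in.readFully(received);
                sender.join();
                return received;
            }
        }
    }
}
```

Here both endpoints run in one process over loopback purely for demonstration; in the described system the two endpoints would be the tablet and the mobile phone.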
So far, the screen capturing parameters of the tablet computer 2 are transmitted to the mobile phone 1.
Fig. 6b includes the following steps:
S614, the Socket Server Send module 132 of the mobile phone 1 transmits the Byte object to the screen capturing type parsing module 121 of the mobile phone 1.
S615, the screen capturing type parsing module 121 parses the Byte object to obtain a parsing result.
It will be appreciated that the parsing result includes the screen capturing type parameter -1, or the screen capturing type parameter 1 and the relative coordinates.
S616, the screen capturing type parsing module 121 transmits the parsing result to the coordinate acquisition module 122 of the mobile phone 1.
S617, the coordinate acquisition module 122 determines whether the screen capturing type parameter in the parsing result is 1.
A screen capturing type parameter of 1 represents a partial screen capture, so if yes, S618 is performed; a screen capturing type parameter of -1 represents a full screen capture, so if no, S619 is performed.
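Assuming the byte layout sketched earlier (one int type parameter, optionally followed by coordinate doubles), the parsing and branching in S615-S617 could look like the following. All names here are hypothetical, and `samplePartial` exists only to build demo input:

```java
import java.nio.ByteBuffer;

// Hypothetical parsing of the Byte object on the phone side: the first
// int is the screen capturing type parameter; if it is 1 (partial capture),
// relative coordinate doubles follow. -1 means full screen, no coordinates.
public class ScreenshotParser {
    public static int parseType(byte[] data) {
        return ByteBuffer.wrap(data).getInt();
    }

    public static double[] parseCoords(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        if (buf.getInt() != 1) return null; // full screen: no coordinates carried
        double[] coords = new double[(data.length - Integer.BYTES) / Double.BYTES];
        for (int i = 0; i < coords.length; i++) coords[i] = buf.getDouble();
        return coords;
    }

    // Demo-only helper that builds a partial-capture payload.
    public static byte[] samplePartial(double... coords) {
        ByteBuffer buf = ByteBuffer.allocate(Integer.BYTES + coords.length * Double.BYTES);
        buf.putInt(1);
        for (double c : coords) buf.putDouble(c);
        return buf.array();
    }
}
```

The type value then selects the branch: 1 leads to the coordinate conversion of S618, -1 to the full-screen path of S619.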
S618, the coordinate acquiring module 122 converts the relative coordinates into screen capturing coordinates based on the resolution of the mobile phone 1.
The purpose of the coordinate conversion is to mask the resolution difference between the tablet computer 2 and the mobile phone 1, so as to improve the accuracy of the screen capturing area on the mobile phone 1. It is understood that the coordinates are converted based on the resolution of the mobile phone 1; the implementation of S618 may refer to S13, which is not described herein.
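Mirroring the tablet-side normalization, the phone-side conversion in S618 can be sketched as multiplying each relative coordinate by the phone's resolution. The class name and rounding choice are illustrative assumptions:

```java
// Hypothetical sketch of S618: recover pixel coordinates on the phone by
// scaling the relative coordinates (in [0, 1]) by the phone's resolution.
public class ScreenCoords {
    public static int[] toScreen(double relX, double relY, int phoneWidth, int phoneHeight) {
        return new int[] {
            (int) Math.round(relX * phoneWidth),
            (int) Math.round(relY * phoneHeight)
        };
    }
}
```

Because only ratios travel between the devices, the same selection maps to the matching region regardless of how the two resolutions differ.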
In one implementation, S620 is performed after S618. It will be appreciated that in another implementation, after S618 and before S620, steps 131-134, i.e., an "auto-attach" procedure, are performed. Steps 131-134 may be performed by, but are not limited to, the coordinate acquisition module 122.
S619, the coordinate obtaining module 122 uses the coordinates of the vertex of the mobile phone display interface as the screen capturing coordinates.
It can be understood that the mobile phone display interface refers to the full screen interface currently displayed by the mobile phone 1. Because a full screen capture is to be performed, the vertices of the interface are taken as the screen capturing coordinates.
S620, the coordinate acquiring module 122 transmits the screen capturing coordinates to the screen capturing module 112 of the mobile phone 1.
The screen capturing module 112 may be the mobile phone 1's native screen capturing module. Transmitting the screen capturing coordinates to the screen capturing module 112 enables the screen capture triggered from the cooperative device of the mobile phone 1 in the multi-screen cooperative mode to be compatible with the mobile phone 1's own screen capturing function.
S621, the screen capturing module 112 captures a screen by using the screen capturing coordinates to obtain a screen capturing image.
In some implementations, the screen capturing module 112 described in this embodiment is SurfaceControl, and the screen capturing image is obtained by SurfaceControl.
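SurfaceControl is an Android framework class and cannot run outside the device, but the effect of S621, cutting the region described by the capture coordinates out of a full-screen image, can be illustrated with a plain-Java analogue using `BufferedImage`. This stands in for, and is not, the actual SurfaceControl call:

```java
import java.awt.image.BufferedImage;

// Plain-Java analogue of capturing a region: given the vertex coordinates
// of the capture area, crop that rectangle out of a full-screen image.
public class RegionCapture {
    public static BufferedImage crop(BufferedImage full, int left, int top, int right, int bottom) {
        // getSubimage takes (x, y, width, height)
        return full.getSubimage(left, top, right - left, bottom - top);
    }
}
```

A full screen capture is then simply the degenerate case where the coordinates span the whole interface.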
S622, the screen capturing module 112 transmits the screen capturing image to the storage module 111 of the mobile phone 1.
The storage module 111 is the module of the mobile phone 1 for storing screen capturing images obtained by the mobile phone's own screen capturing function, which further unifies the logic of the screen capture triggered in the multi-screen cooperative mode with the mobile phone's original screen capturing logic.
S623, the storage module 111 transmits a save instruction to the Media Store (Media Store) module 15 of the mobile phone 1.
It will be appreciated that the save instruction includes the screen capturing image and instructs the Media Store module 15 to store it. The Media Store module 15 is the module that saves screen capturing results under the mobile phone's original screen capturing function; using it ensures compatibility with the screen capture saving address of the mobile phone 1, which is convenient for subsequent viewing and operation by the user.
S624, the Media Store module 15 saves the screen shot image.
In this step, in order to be compatible with the mobile phone's usual storage mode and reduce the possibility that the user cannot view the screen capturing image in the gallery because the gallery is not refreshed in time, the Media Store module 15 in the mobile phone 1 is called to store the screen capturing image.
In some implementations, the Media Store module 15 saves the screen capturing image under the Screenshots directory of the mobile phone 1 to ensure compatibility with the screen capture saving address of the mobile phone 1, so as to facilitate subsequent viewing and operation by the user.
It can be understood that after storing the screen capturing image, the Media Store module 15 may trigger display of a pop-up notification whose content prompts that the screen capturing image has been saved successfully, so as to further enhance the user experience.
The screen capturing method shown in fig. 6a and 6b can achieve a partial screen capture of the display interface of an electronic device. Further, in the multi-screen cooperative scene, a partial or full screen capture of the first electronic device can be achieved through the collaborative interface of the first electronic device displayed by the second electronic device. This not only improves the screen capturing function but also combines the advantages of multi-screen collaboration, significantly improving the user experience.
The embodiment of the application also discloses a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and when the computer program is executed by a processor, the processor is caused to execute the screen capturing method described in the embodiment.
Embodiments of the present application also disclose a computer program product, comprising computer program code which, when run on an electronic device, causes the electronic device to perform the screen capturing method described in the embodiments of the application.

Claims (11)

1. A screen capturing method, applied to a first electronic device, the method comprising:
establishing a first communication connection with a second electronic device;
transmitting display data to the second electronic device, wherein the display data is used for displaying a collaborative interface of the first electronic device by the second electronic device;
receiving coordinate data transmitted by the second electronic device;
and carrying out local screen capturing on the first electronic device based on the coordinate data.
2. The method of claim 1, wherein the locally screen capturing the first electronic device based on the coordinate data comprises:
converting the coordinate data into screen capturing coordinates based on a resolution of the first electronic device;
and carrying out local screen capturing on the first electronic device by using the screen capturing coordinates.
3. The method of claim 2, further comprising, prior to said locally capturing the first electronic device using the capture coordinates:
determining that the screen capture region of the screen capture coordinate representation includes an incomplete first object;
and correcting the screen capturing coordinates, wherein the screen capturing area represented by the corrected screen capturing coordinates comprises the complete first object.
4. A method according to any one of claims 1-3, wherein the method further comprises: and saving the image obtained by screen capturing.
5. The method of claim 4, wherein the first electronic device comprises a Media Store module;
the step of saving the captured image comprises the following steps: and calling the Media Store module to Store the image obtained by the screen capturing.
6. A screen capturing method, applied to a second electronic device, the method comprising:
establishing a first communication connection with a first electronic device;
displaying a collaborative interface of the first electronic device;
responding to a first operation of a user, and displaying a screen capturing area selection interface based on the collaborative interface;
in response to a second operation by the user, coordinate data representing the region of the partial screen capture is transmitted to the first electronic device.
7. The method of claim 6, wherein prior to transmitting the coordinate data to the first electronic device, the method further comprises:
acquiring coordinates of the region of the partial screen capture selected by the second operation;
and obtaining the coordinate data based on the ratio of the coordinates of the local screen capturing area to the resolution of the second electronic device.
8. The method of claim 7, wherein the obtaining coordinates of the region of the partial screen capture of the second operation selection comprises:
responding to the region selection operation of the region selection interface, and acquiring the coordinates of a first screen capturing region;
determining that the first screen capturing area comprises an incomplete first object, and displaying a second screen capturing area obtained by correcting the first screen capturing area, wherein the second screen capturing area comprises the complete first object;
and taking the coordinates of the second screen capturing area as the coordinates of the area of the partial screen capturing.
9. The method of claim 8, wherein taking the coordinates of the second screenshot region as the coordinates of the region of the partial screenshot comprises:
and responding to the operation of selecting the second screen capturing area by a user, and taking the coordinates of the second screen capturing area as the coordinates of the area of the partial screen capturing.
10. An electronic device, comprising:
one or more processors;
one or more memories;
the memory stores one or more programs that, when executed by the processor, cause the electronic device to perform the screen capture method of any of claims 1-5 or 6-9.
11. A computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, which when executed by a processor causes the processor to perform the screen capture method of any of claims 1-5 or any of claims 6-9.
CN202310511819.3A 2022-03-15 2022-03-15 Screen capturing method and device Pending CN116774870A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310511819.3A CN116774870A (en) 2022-03-15 2022-03-15 Screen capturing method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310511819.3A CN116774870A (en) 2022-03-15 2022-03-15 Screen capturing method and device
CN202210251369.4A CN115562525B (en) 2022-03-15 2022-03-15 Screen capturing method and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202210251369.4A Division CN115562525B (en) 2022-03-15 2022-03-15 Screen capturing method and device

Publications (1)

Publication Number Publication Date
CN116774870A true CN116774870A (en) 2023-09-19

Family

ID=84736997

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310511819.3A Pending CN116774870A (en) 2022-03-15 2022-03-15 Screen capturing method and device
CN202210251369.4A Active CN115562525B (en) 2022-03-15 2022-03-15 Screen capturing method and device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210251369.4A Active CN115562525B (en) 2022-03-15 2022-03-15 Screen capturing method and device

Country Status (1)

Country Link
CN (2) CN116774870A (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708540A (en) * 2012-04-21 2012-10-03 上海量明科技发展有限公司 Method and client side for zooming screen capturing areas
KR101607072B1 (en) * 2014-02-24 2016-03-29 알서포트 주식회사 Mobile phone remote supporting method using screenshot
CN106873928A (en) * 2016-10-31 2017-06-20 深圳市金立通信设备有限公司 Long-range control method and terminal
CN115357178B (en) * 2019-08-29 2023-08-08 荣耀终端有限公司 Control method applied to screen-throwing scene and related equipment
CN111124220B (en) * 2019-11-20 2022-02-25 维沃移动通信有限公司 Screenshot method and electronic equipment
CN113723397B (en) * 2020-05-26 2023-07-25 华为技术有限公司 Screen capturing method and electronic equipment
CN113961157B (en) * 2020-07-21 2023-04-07 华为技术有限公司 Display interaction system, display method and equipment
CN113126862B (en) * 2021-03-15 2022-06-10 维沃移动通信有限公司 Screen capture method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN115562525B (en) 2023-06-13
CN115562525A (en) 2023-01-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination