WO2021043171A1 - Screen capture method and related device - Google Patents

Screen capture method and related device (一种屏幕截取方法及相关设备)

Info

Publication number
WO2021043171A1
Authority
WO
WIPO (PCT)
Prior art keywords
screenshot
screen
target
touch
split
Prior art date
Application number
PCT/CN2020/113053
Other languages
English (en)
French (fr)
Inventor
徐杰
华文
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP20861361.2A (published as EP4024188A4)
Priority to US17/640,486 (published as US11922005B2)
Priority to JP2022514716A (published as JP7385008B2)
Priority to CN202080059571.6A (published as CN114270302A)
Publication of WO2021043171A1
Priority to JP2023191427A (published as JP2024020334A)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486Drag-and-drop
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04104Multi-touch detection in digitiser, i.e. details about the simultaneous detection of a plurality of touching locations, e.g. multiple fingers or pen and finger
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04803Split screen, i.e. subdividing the display area or the window area into separate subareas
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Definitions

  • This application relates to the technical field of electronic equipment, and in particular to a screen capture method and related equipment.
  • In view of this, this application provides a screen capture method and a related device that overcome, or at least partially solve, the above-mentioned problems.
  • In a first aspect, an embodiment of the present application provides a screen capture method, which may include: receiving a first touch operation, where the first touch operation is a movement operation in which multiple touch points move within a first screen by a distance greater than or equal to a first preset distance threshold, and the first screen includes N split screens, where N is a positive integer greater than 1; and, if the starting positions of the multiple touch points are all within a target split screen of the first screen, capturing the current display content of the target split screen as a first screenshot.
  • In the embodiment of the present application, the electronic device can identify the first touch operation among the received touch operations and determine whether the starting positions of the multiple touch points of the first touch operation are all within the target split screen of the first screen. If so, the current display content of the target split screen is captured as the first screenshot. In other words, once it is determined that the starting positions of the multiple touch points all fall within the same split screen, a screenshot of a single split screen among multiple windows can be obtained directly, so that the target split-screen area is captured quickly and flexibly. The user obtains the screenshot in a short time and does not need to open picture-editing software to manually remove the content displayed in the other split screens, which simplifies the screenshot operation.
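  • As an illustration only (not the claimed implementation), the following Java sketch shows one way the decision just described could be checked: every touch point must have moved at least the first preset distance threshold, and all starting positions must fall inside the same split-screen rectangle. The class and method names (TouchPoint, Rect, SplitScreenLocator, findTargetSplitScreen) are assumptions introduced for the sketch.

      // Minimal, self-contained sketch; a real device would use the platform's
      // touch-event and window-manager APIs instead of these small helpers.
      final class TouchPoint {
          final float startX, startY, endX, endY;
          TouchPoint(float sx, float sy, float ex, float ey) {
              startX = sx; startY = sy; endX = ex; endY = ey;
          }
          double distanceMoved() {
              return Math.hypot(endX - startX, endY - startY);
          }
      }

      final class Rect {
          final float left, top, right, bottom;
          Rect(float l, float t, float r, float b) { left = l; top = t; right = r; bottom = b; }
          boolean contains(float x, float y) {
              return x >= left && x < right && y >= top && y < bottom;
          }
      }

      final class SplitScreenLocator {
          /**
           * Returns the index of the target split screen when the first touch operation
           * qualifies (all points moved at least the first preset distance threshold and
           * all starting positions lie in the same split screen), or -1 otherwise, for
           * example when the starting positions span at least two split screens.
           */
          static int findTargetSplitScreen(java.util.List<TouchPoint> points,
                                           java.util.List<Rect> splitScreens,
                                           float firstDistanceThreshold) {
              if (points.size() < 2) return -1;                  // multiple touch points required
              for (TouchPoint p : points) {
                  if (p.distanceMoved() < firstDistanceThreshold) return -1;
              }
              int target = -1;
              for (TouchPoint p : points) {
                  int index = -1;
                  for (int i = 0; i < splitScreens.size(); i++) {
                      if (splitScreens.get(i).contains(p.startX, p.startY)) { index = i; break; }
                  }
                  if (index == -1) return -1;                    // starting position outside every split screen
                  if (target == -1) target = index;
                  else if (target != index) return -1;           // starting positions span several split screens
              }
              return target;
          }
      }

  • Under these assumptions, a non-negative return value identifies the target split screen whose current display content is captured as the first screenshot, while -1 corresponds to the whole-screen case described later.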
  • In a possible implementation, the first screenshot is a thumbnail, and the method further includes: if the starting positions of the multiple touch points are all within the target split screen of the first screen, capturing the current display content of each of the N split screens in the first screen as a corresponding screenshot, and generating a second screenshot. The second screenshot includes the screenshots corresponding to the N split screens, the screenshots are arranged in the second screenshot according to the distribution of the corresponding split screens in the first screen, and each of the screenshots corresponding to the N split screens is a thumbnail that can separately receive touch operations.
  • In the embodiment of the present application, the electronic device first determines whether the starting positions of the multiple touch points of the first touch operation are all within the target split screen of the first screen. If so, it not only captures the current display content of the target split screen as the first screenshot, but also captures the current display content of each of the N split screens in the first screen as a corresponding screenshot and generates a second screenshot.
  • The screenshots corresponding to the N split screens in the second screenshot can each receive touch operations separately; that is, the screenshots contained in the second screenshot are independent of each other with no connection between them, and they are arranged in the second screenshot in the same way as the corresponding split screens are distributed in the first screen.
  • For example, the first screen has a first split screen and a second split screen. After receiving the user's first touch operation on the first split screen, the electronic device captures the first split screen to obtain the corresponding first screenshot, and then captures the first split screen and the second split screen separately to generate the corresponding second screenshot. Both the first screenshot and the second screenshot are thumbnails floating above the first screen.
  • Capturing all the split screens at the same time prevents the electronic device from misjudging the user's intention, or from capturing the wrong target split screen because of an accidental operation, which improves the accuracy of the screenshot and the user experience.
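  • Purely as an illustration of the arrangement described above, the sketch below composes a second screenshot by drawing each split screen's captured image at the same relative position it occupies in the first screen, using standard Java imaging classes. The composeSecondScreenshot name and the idea of passing in the split-screen bounds explicitly are assumptions made for the sketch, not details disclosed by the application.

      import java.awt.Graphics2D;
      import java.awt.Rectangle;
      import java.awt.image.BufferedImage;
      import java.util.List;

      final class SecondScreenshotComposer {
          /**
           * Draws the N per-split-screen screenshots into one image while keeping the
           * distribution they have in the first screen, so that each thumbnail region
           * can later be addressed (tapped or dragged) individually.
           *
           * @param splitScreenBounds position of each split screen within the first screen
           * @param splitScreenShots  screenshot captured from each split screen, in the same order
           */
          static BufferedImage composeSecondScreenshot(List<Rectangle> splitScreenBounds,
                                                       List<BufferedImage> splitScreenShots,
                                                       int screenWidth, int screenHeight) {
              BufferedImage composite =
                      new BufferedImage(screenWidth, screenHeight, BufferedImage.TYPE_INT_ARGB);
              Graphics2D g = composite.createGraphics();
              try {
                  for (int i = 0; i < splitScreenBounds.size(); i++) {
                      Rectangle r = splitScreenBounds.get(i);
                      // Scale each split-screen screenshot into its own region so the
                      // layout of the composite matches the first screen.
                      g.drawImage(splitScreenShots.get(i), r.x, r.y, r.width, r.height, null);
                  }
              } finally {
                  g.dispose();
              }
              return composite;
          }
      }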
  • In a possible implementation, the first screenshot is a thumbnail, and the method further includes: receiving a knuckle screenshot operation or a key-combination screenshot operation; capturing, according to the knuckle screenshot operation instruction or the key-combination screenshot operation instruction, the current display content of all the split screens in the first screen as corresponding screenshots; and generating a second screenshot. The second screenshot includes the screenshots corresponding to the N split screens, the screenshots are arranged in the second screenshot according to the distribution of the corresponding split screens in the first screen, and each screenshot corresponding to the N split screens is a thumbnail that can separately receive touch operations.
  • The second screenshot generated in the embodiment of the present application includes the screenshot thumbnails corresponding to the split screens, and the thumbnails corresponding to the multiple split screens are independent of each other. Because each split screen corresponds to a thumbnail that can be touched individually, the user can share or edit a single split-screen screenshot, or edit and crop the overall screenshot, without entering picture-editing software, which greatly improves the screenshot experience and simplifies the screenshot workflow.
  • In a possible implementation, the method further includes: receiving a second touch operation, where the second touch operation is a tap operation on a target screenshot thumbnail in the first screenshot or the second screenshot, and the target screenshot thumbnail is at least one of the screenshot thumbnails corresponding to the N split screens in the first screenshot or the second screenshot; and, according to the second touch operation, saving the target screenshot thumbnail to the gallery and deleting the first screenshot.
  • In the embodiment of the present application, the second touch operation may be used to select the target screenshot thumbnail in the first screenshot or the second screenshot, that is, to determine which of the multiple screenshots the user wants to keep. The target screenshot thumbnail may be the screenshots corresponding to multiple split screens.
  • The second touch operation prevents the electronic device from acting on a misjudged first operation. Moreover, when a knuckle screenshot operation instruction or a key-combination screenshot operation instruction is received and the user only wants to keep a single split screen, no secondary editing in picture-editing software is needed, which simplifies the screenshot operation.
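  • The following sketch is a hedged illustration of how the second touch operation could be handled: the tap position is hit-tested against the thumbnail regions, the tapped thumbnail is kept, and the remaining floating screenshots are discarded. The saveToGallery and dismissRemainingThumbnails callbacks are placeholders for platform behavior, not interfaces defined by the application.

      import java.awt.Point;
      import java.awt.Rectangle;
      import java.awt.image.BufferedImage;
      import java.util.List;
      import java.util.function.BiConsumer;

      final class ThumbnailTapHandler {
          /**
           * Hit-tests a tap against the per-split-screen thumbnail regions and keeps
           * only the tapped thumbnail. Returns true if a thumbnail was selected.
           */
          static boolean onSecondTouch(Point tap,
                                       List<Rectangle> thumbnailRegions,
                                       List<BufferedImage> thumbnails,
                                       BiConsumer<Integer, BufferedImage> saveToGallery,
                                       Runnable dismissRemainingThumbnails) {
              for (int i = 0; i < thumbnailRegions.size(); i++) {
                  if (thumbnailRegions.get(i).contains(tap)) {
                      // Keep the screenshot the user tapped on...
                      saveToGallery.accept(i, thumbnails.get(i));
                      // ...and drop the other floating screenshots (for example, the first screenshot).
                      dismissRemainingThumbnails.run();
                      return true;
                  }
              }
              return false; // the tap did not land on any thumbnail
          }
      }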
  • In a possible implementation, the method further includes: receiving a drag operation on the target screenshot thumbnail, where the drag operation is an operation of moving the target screenshot thumbnail through at least one touch point; and, according to the drag operation, sharing or inserting the target screenshot thumbnail into the split-screen area in which the drag operation ends.
  • In the embodiment of the present application, if the user wants to share the screenshot of the target split screen with an application contained in any one of the N split screens of the first screen, this can be done directly by dragging the target screenshot thumbnail: the thumbnail is shared or inserted into the split-screen area where the drag operation ends. In the prior art, sharing a screenshot also requires an operation of selecting the target application; dragging the thumbnail directly onto the target application therefore simplifies the sharing process of the target screenshot thumbnail and improves the user experience.
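  • As a sketch of the drag behavior only, the code below maps the end position of a drag to the split screen it landed in and hands the dragged screenshot to that split screen. The ScreenshotSink interface stands in for whatever sharing or insertion mechanism the application in that split screen exposes; it is an assumption made for the sketch.

      import java.awt.Point;
      import java.awt.Rectangle;
      import java.awt.image.BufferedImage;
      import java.util.List;

      /** Placeholder for "share or insert into the application shown in a split screen". */
      interface ScreenshotSink {
          void receive(BufferedImage screenshot);
      }

      final class DragShareRouter {
          /**
           * Routes a dragged screenshot thumbnail to the split screen that contains
           * the position where the drag operation ended.
           */
          static boolean onDragEnd(Point dragEnd,
                                   BufferedImage draggedScreenshot,
                                   List<Rectangle> splitScreenBounds,
                                   List<ScreenshotSink> splitScreenSinks) {
              for (int i = 0; i < splitScreenBounds.size(); i++) {
                  if (splitScreenBounds.get(i).contains(dragEnd)) {
                      // Share or insert the screenshot into the application displayed in
                      // the split screen where the drag operation ended.
                      splitScreenSinks.get(i).receive(draggedScreenshot);
                      return true;
                  }
              }
              return false; // the drag ended outside every split screen: nothing to share
          }
      }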
  • In a possible implementation, the method further includes: if the second touch operation is not received within a first time period after any one of the first touch operation, the knuckle screenshot operation instruction, or the key-combination screenshot operation instruction is received, splicing all the screenshots in the second screenshot into one picture and saving the first screenshot to the gallery. Therefore, in this embodiment of the application, after a screenshot instruction from the user is received, if no further operation on the resulting screenshot arrives within a period of time, the obtained screenshot is saved directly, so that the user does not need to take the screenshot again and waste resources when using it later.
  • The screenshot instruction may include any one of the first touch operation, the knuckle screenshot operation instruction, or the key-combination screenshot operation instruction.
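  • To make the timing rule concrete, here is a small sketch that starts a timer when a screenshot instruction is received and, if no second touch operation arrives within the first time period, runs a caller-supplied splice-and-save action. The use of a ScheduledExecutorService and the spliceAndSave callback are illustrative choices, not details mandated by the method.

      import java.util.concurrent.Executors;
      import java.util.concurrent.ScheduledExecutorService;
      import java.util.concurrent.ScheduledFuture;
      import java.util.concurrent.TimeUnit;

      final class ScreenshotTimeout {
          private final ScheduledExecutorService timer =
                  Executors.newSingleThreadScheduledExecutor();
          private ScheduledFuture<?> pending;

          /** Call when a first touch / knuckle / key-combination screenshot instruction is received. */
          synchronized void onScreenshotInstruction(long firstTimePeriodMillis, Runnable spliceAndSave) {
              cancelPending();
              // If no second touch operation arrives before the first time period elapses,
              // splice the screenshots in the second screenshot and save the result.
              pending = timer.schedule(spliceAndSave, firstTimePeriodMillis, TimeUnit.MILLISECONDS);
          }

          /** Call when the second touch operation is received in time. */
          synchronized void onSecondTouchOperation() {
              cancelPending();
          }

          private void cancelPending() {
              if (pending != null) {
                  pending.cancel(false);
                  pending = null;
              }
          }
      }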
  • In a possible implementation, the method further includes: if the starting positions of the multiple touch points are not all within the target split screen of the first screen, capturing the current display content of the first screen as a third screenshot and saving it to the gallery.
  • In the embodiment of the present application, the electronic device first determines whether the starting positions of the multiple touch points of the first touch operation are all within the target split screen of the first screen. If not, that is, if the starting positions of the multiple touch points fall within the areas of at least two split screens, the current display content of all the split screens in the first screen is captured as the third screenshot.
  • Therefore, when the user needs a screenshot of the content displayed on the entire screen, the user only needs to place the starting positions of the multiple touch points in at least two split-screen areas to capture the first screen, which avoids cumbersome screen capture operations and long capture times and improves the user's screen capture experience.
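  • Tying the two branches together, the sketch below (again only an illustration, with placeholder capture callbacks) captures the target split screen when one has been identified and otherwise captures the whole first screen as the third screenshot; a target index of -1 corresponds to starting positions that span at least two split screens.

      import java.awt.Rectangle;
      import java.awt.image.BufferedImage;
      import java.util.List;
      import java.util.function.Function;
      import java.util.function.Supplier;

      final class CaptureDispatcher {
          /**
           * @param targetIndex       index of the target split screen, or -1 when the
           *                          starting positions span at least two split screens
           * @param captureRegion     placeholder callback that captures one screen region
           * @param captureFullScreen placeholder callback that captures the whole first screen
           */
          static BufferedImage dispatch(int targetIndex,
                                        List<Rectangle> splitScreenBounds,
                                        Function<Rectangle, BufferedImage> captureRegion,
                                        Supplier<BufferedImage> captureFullScreen) {
              if (targetIndex >= 0) {
                  // First screenshot: only the target split screen's current content.
                  return captureRegion.apply(splitScreenBounds.get(targetIndex));
              }
              // Third screenshot: the starting positions were not all in one split screen,
              // so capture the current display content of the entire first screen.
              return captureFullScreen.get();
          }
      }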
  • In a possible implementation, the method further includes: if the multiple touch points move within the target split screen by a distance greater than or equal to a second preset distance threshold, where the second preset distance threshold is greater than the first preset distance threshold and the ratio between the second preset distance threshold and the height of the target split screen is greater than a preset ratio threshold, performing a long screenshot operation on the split-screen area of the target split screen; or, if the first touch operation for the target split screen is received again within a second time period after the first touch operation is received, performing the long screenshot operation on the split-screen area of the target split screen; or, if the first touch operation includes four touch points, performing the long screenshot operation on the split-screen area of the target split screen.
  • In the embodiment of the present application, the electronic device may perform a long screenshot operation on the target split screen of the first screen, and the target split screen may be any one of the N split screens. It can be understood that different operation modes trigger the long screenshot command, and the results after the long screenshot operation is executed may also differ. Supporting multiple touch modes for the long screenshot avoids cumbersome screen capture operations and long capture times, and improves the user's screen capture experience.
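  • The three alternative triggers just listed can be summarized in a short check such as the one below; the parameter names, the repeat-gesture bookkeeping, and the method name shouldTriggerLongScreenshot are assumptions made for this illustration rather than the claimed implementation.

      final class LongScreenshotTrigger {
          /**
           * Returns true if any of the three alternative conditions is met:
           *  1) the movement distance within the target split screen reaches the second
           *     preset distance threshold, which is larger than the first threshold and
           *     whose ratio to the target split screen's height exceeds the preset ratio;
           *  2) the first touch operation on the same target split screen is received
           *     again within the second time period;
           *  3) the first touch operation uses four touch points.
           */
          static boolean shouldTriggerLongScreenshot(double moveDistanceInTarget,
                                                     double secondDistanceThreshold,
                                                     double firstDistanceThreshold,
                                                     double targetSplitScreenHeight,
                                                     double presetRatioThreshold,
                                                     long millisSinceLastFirstTouchOnTarget,
                                                     long secondTimePeriodMillis,
                                                     int touchPointCount) {
              boolean distanceCondition =
                      secondDistanceThreshold > firstDistanceThreshold
                      && (secondDistanceThreshold / targetSplitScreenHeight) > presetRatioThreshold
                      && moveDistanceInTarget >= secondDistanceThreshold;
              boolean repeatCondition =
                      millisSinceLastFirstTouchOnTarget >= 0
                      && millisSinceLastFirstTouchOnTarget <= secondTimePeriodMillis;
              boolean fourFingerCondition = touchPointCount == 4;
              return distanceCondition || repeatCondition || fourFingerCondition;
          }
      }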
  • In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors, a memory, a display screen, and one or more keys.
  • The memory, the display screen, and the one or more keys are coupled to the one or more processors. The memory is used to store computer program code, and the computer program code includes computer instructions. The one or more processors execute the computer instructions to perform the following operations:
  • receiving a first touch operation, where the first touch operation is a movement operation in which multiple touch points move within a first screen by a distance greater than or equal to a first preset distance threshold, and the first screen includes N split screens, where N is a positive integer greater than 1;
  • if the starting positions of the multiple touch points are all within a target split screen of the first screen, capturing the current display content of the target split screen as a first screenshot.
  • In a possible implementation, the first screenshot is a thumbnail, and the one or more processors are further configured to: if the starting positions of the multiple touch points are all within the target split screen of the first screen, capture the current display content of each of the N split screens in the first screen as a corresponding screenshot, and generate a second screenshot. The second screenshot includes the screenshots corresponding to the N split screens, the screenshots are arranged in the second screenshot according to the distribution of the corresponding split screens in the first screen, and each of the screenshots corresponding to the N split screens is a thumbnail that can separately receive touch operations.
  • In a possible implementation, the first screenshot is a thumbnail, and the one or more processors are further configured to: receive a knuckle screenshot operation or a key-combination screenshot operation; capture, according to the knuckle screenshot operation instruction or the key-combination screenshot operation instruction, the current display content of all the split screens in the first screen as corresponding screenshots; and generate a second screenshot. The second screenshot includes the screenshots corresponding to the N split screens, the screenshots are arranged in the second screenshot according to the distribution of the corresponding split screens in the first screen, and each screenshot corresponding to the N split screens is a thumbnail that can separately receive touch operations.
  • In a possible implementation, the one or more processors are further configured to: receive a second touch operation, where the second touch operation is a tap operation on a target screenshot thumbnail in the first screenshot or the second screenshot, and the target screenshot thumbnail is at least one of the screenshot thumbnails corresponding to the N split screens in the first screenshot or the second screenshot; and, according to the second touch operation, save the target screenshot thumbnail to the gallery and delete the first screenshot.
  • In a possible implementation, the one or more processors are further configured to: receive a drag operation on the target screenshot thumbnail, where the drag operation is an operation of moving the target screenshot thumbnail through at least one touch point; and, according to the drag operation, share or insert the target screenshot thumbnail into the split-screen area in which the drag operation ends.
  • In a possible implementation, the one or more processors are further configured to: if the second touch operation is not received within a first time period after any one of the first touch operation, the knuckle screenshot operation instruction, or the key-combination screenshot operation instruction is received, splice all the screenshots in the second screenshot into one picture and save the first screenshot to the gallery.
  • In a possible implementation, the one or more processors are further configured to: if the starting positions of the multiple touch points are not all within the target split screen of the first screen, capture the current display content of the first screen as a third screenshot and save it to the gallery.
  • In a possible implementation, the one or more processors are further configured to: if the multiple touch points move within the target split screen by a distance greater than or equal to a second preset distance threshold, where the second preset distance threshold is greater than the first preset distance threshold and the ratio between the second preset distance threshold and the height of the target split screen is greater than a preset ratio threshold, perform a long screenshot operation on the split-screen area of the target split screen; or, if the first touch operation for the target split screen is received again within a second time period after the first touch operation is received, perform the long screenshot operation on the split-screen area of the target split screen; or, if the first touch operation includes four touch points, perform the long screenshot operation on the split-screen area of the target split screen.
  • In a third aspect, an embodiment of the present application provides a computer storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to execute the screen capture method provided by the first aspect or any implementation of the first aspect of the embodiments of the present application.
  • In a fourth aspect, an embodiment of the present application provides a computer program product, which, when run on an electronic device, causes the electronic device to execute the screen capture method provided by the first aspect or any implementation of the first aspect of the embodiments of the present application.
  • It can be understood that the electronic device provided in the second aspect, the computer storage medium provided in the third aspect, and the computer program product provided in the fourth aspect are all used to execute the screen capture method provided in the first aspect. Therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the screen capture method provided in the first aspect; details are not repeated here.
  • FIG. 1A is a schematic structural diagram of an electronic device 100 provided by an embodiment of the present application.
  • FIG. 1B is a software structure block diagram of an electronic device 100 provided by an embodiment of the present application.
  • Fig. 2A is a schematic diagram of a multi-window user interface provided by an embodiment of the present application.
  • FIG. 2B is a schematic diagram of a user operation when recognizing a first touch operation according to an embodiment of the present application.
  • FIG. 2C is a schematic diagram of a comparison relationship between a determined moving distance and a first preset distance threshold according to an embodiment of the present application.
  • FIG. 2D is a user interface provided by an embodiment of the present application for capturing the displayed content of the target split screen as a picture through a first touch operation.
  • FIG. 2E is another user interface provided by an embodiment of the present application for capturing the display content of the target split screen as a picture through the first touch operation.
  • FIG. 2F to FIG. 2H are a set of user interfaces for confirming, saving, and sharing screenshot thumbnails provided by an embodiment of the present application.
  • FIG. 2I to FIG. 2K are another set of user interfaces for confirming and saving screenshot thumbnails provided by an embodiment of the present application.
  • FIG. 3A is a schematic diagram of another multi-window user interface provided by an embodiment of the present application.
  • FIG. 3B is another schematic diagram of a user operation when recognizing a first touch operation according to an embodiment of the present application.
  • FIG. 3C and FIG. 3D are a set of user interfaces provided by an embodiment of the present application in which the displayed content on the first screen is captured as a picture through a first touch operation and shared.
  • FIG. 3E is another set of user interfaces provided by an embodiment of the present application for capturing the displayed content on the first screen as a picture through a first touch operation.
  • FIG. 4A is a schematic diagram of another multi-window user interface provided by an embodiment of the present application.
  • FIG. 4B and FIG. 4C are schematic diagrams of a group of long-screenshot user interfaces provided by an embodiment of the present application.
  • FIG. 4D and FIG. 4E are schematic diagrams of another group of long-screenshot user interfaces provided by an embodiment of the present application.
  • FIG. 4F and FIG. 4G are schematic diagrams of a multi-window user interface provided by an embodiment of the present application.
  • FIG. 5A is a schematic flowchart of a screen capture method provided by an embodiment of the present application.
  • FIG. 5B is a schematic flowchart of another screen capture method provided by an embodiment of the present application.
  • FIG. 5C is a schematic diagram of an operation based on the aforementioned long screenshot of the electronic device 100 provided by an embodiment of the present application.
  • FIG. 5D is a schematic diagram of a group of user interfaces, provided by an embodiment of the present application, for capturing a single split screen and sharing the screenshot within the current split screen.
  • FIG. 5E is a schematic diagram of a group of user interfaces, provided by an embodiment of the present application, for capturing the entire screen and sharing the screenshot within the current split screen.
  • FIG. 5F is a schematic diagram of another user interface, provided by an embodiment of the present application, for capturing a single split screen and sharing the screenshot within the current split screen.
  • FIG. 5G is a schematic diagram of another group of user interfaces, provided by an embodiment of the present application, for capturing a single split screen and sharing the screenshot within the current split screen.
  • FIG. 5H is a schematic diagram of another group of user interfaces, provided by an embodiment of the present application, for capturing the entire screen and sharing the screenshot within the current split screen.
  • FIG. 5I is a schematic diagram of a group of user interfaces in a practical application, provided by an embodiment of the present application, for capturing a single split screen and sharing the screenshot with an application contained in the current interface.
  • The term "component" used in this specification is used to denote a computer-related entity: hardware, firmware, a combination of hardware and software, software, or software in execution.
  • For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable file, a thread of execution, a program, and/or a computer.
  • Both an application running on a computing device and the computing device itself can be components.
  • One or more components may reside within a process and/or a thread of execution, and a component may be located on one computer and/or distributed between two or more computers.
  • In addition, these components can execute from various computer-readable media having various data structures stored thereon.
  • The components may communicate by way of local and/or remote processes, for example, based on a signal having one or more data packets (such as data from two components interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet that interacts with other systems by way of the signal).
  • The term "user interface" (UI) in the description, claims, and drawings of this application refers to a medium interface for interaction and information exchange between an application or operating system and a user; it implements the conversion between the internal form of information and a form acceptable to the user.
  • The user interface of an application is source code written in a specific computer language such as Java or the extensible markup language (XML). The interface source code is parsed and rendered on the terminal device and finally presented as content that the user can recognize.
  • A control, also called a widget, is the basic element of the user interface. Typical controls include a toolbar, a menu bar, a text box, a button, a scroll bar (scrollbar), a picture, and text.
  • The attributes and content of the controls in the interface are defined by tags or nodes. For example, XML specifies the controls contained in the interface through nodes such as <Textview>, <ImgView>, and <VideoView>. A node corresponds to a control or an attribute in the interface; after being parsed and rendered, the node is presented as content visible to the user.
  • In addition, the interfaces of many applications, such as hybrid applications, usually include web pages. A web page, also called a page, can be understood as a special control embedded in the application interface. A web page is source code written in a specific computer language, such as the hypertext markup language (HTML), cascading style sheets (CSS), or JavaScript (JS). The web page source code can be loaded and displayed as user-recognizable content by a browser or by a web page display component with functions similar to a browser. The specific content contained in a web page is also defined by tags or nodes in its source code. For example, HTML uses <p>, <img>, <video>, and <canvas> to define the elements and attributes of the web page.
  • The commonly used form of user interface is the graphical user interface (GUI), which refers to a user interface, displayed in a graphical manner, that is related to computer operations. It may be an icon, a window, a control, or another interface element displayed on the display screen of an electronic device, where a control may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
  • The window is the most important part of the user interface. It is the rectangular area on the screen corresponding to an application, includes the frame and the client area, and is the visual interface between the user and the application that created the window. Whenever the user starts an application, the application creates and displays a window; when the user manipulates objects in the window, the program responds accordingly. The user terminates a program by closing its window and selects an application by selecting its window.
  • A touch sensor is a device that captures and records physical touch on a device and/or object. It enables a device or object to detect touch, usually from a human user or operator. A touch sensor may also be called a touch detector.
  • FIG. 1A is a schematic structural diagram of an electronic device 100 provided by an embodiment of the present application.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light Sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, a neural-network processing unit (NPU), and the like.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, and/or the like.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, and the like.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite-based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor, which is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and is projected to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include one or N cameras 193, and N is a positive integer greater than one.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • The NPU is a neural-network (NN) computing processor. Through the NPU, applications such as intelligent cognition of the electronic device 100 can be implemented, for example, image recognition, face recognition, speech recognition, and text understanding.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B also called “earpiece” is used to convert audio electrical signals into sound signals.
  • the electronic device 100 answers a call or voice message, it can receive the voice by bringing the receiver 170B close to the human ear.
  • The microphone 170C, also called a "mic", is used to convert a sound signal into an electrical signal. The user can make a sound with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • Touch operations that act on the same touch position but have different touch intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
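  • As a small illustrative sketch of this behavior only (the short-message example follows the paragraph above, while the dispatch helper and its names are assumptions), the intensity detected via the pressure sensor 180A can be compared with the first pressure threshold to choose the operation instruction:

      final class PressureTouchDispatcher {
          /** Placeholder actions corresponding to the short-message example above. */
          interface Actions {
              void viewShortMessage();
              void createShortMessage();
          }

          /**
           * Chooses the operation instruction for a touch on the short message
           * application icon based on the touch intensity detected through the
           * pressure sensor 180A.
           */
          static void onShortMessageIconTouch(float touchIntensity,
                                              float firstPressureThreshold,
                                              Actions actions) {
              if (touchIntensity < firstPressureThreshold) {
                  actions.viewShortMessage();      // lighter press: view the short message
              } else {
                  actions.createShortMessage();    // firmer press: create a new short message
              }
          }
      }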
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • the air pressure sensor 180C is used to measure air pressure.
  • the magnetic sensor 180D includes a Hall sensor.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and apply to applications such as horizontal and vertical screen switching, pedometers and so on.
  • Distance sensor 180F used to measure distance.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy.
  • Touch sensor 180K also called “touch panel”.
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the display screen 194 can provide visual output related to the touch operation.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can parse a voice signal based on the vibration signal of the vibrating bone mass obtained by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze heart rate information based on the blood pressure pulse signal obtained by the bone conduction sensor 180M, so as to realize the heart rate detection function.
  • the button 190 includes a power-on button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations that act on different applications can correspond to different vibration feedback effects.
  • for touch operations acting on different areas of the display screen 194, the motor 191 can also produce different vibration feedback effects.
  • different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present application takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 by way of example. Please refer to FIG. 1B.
  • FIG. 1B is a software structure block diagram of an electronic device 100 according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with one another through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • the notification manager can also present notifications that appear in the status bar at the top of the system in the form of a chart or scrolling text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, a text message is prompted in the status bar, a prompt sound is played, the electronic device vibrates, or the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the software system shown in FIG. 1B involves applications that use sharing capabilities (such as the gallery and the file manager), an instant sharing module that provides sharing capabilities, and a print service and print spooler that provide printing capabilities.
  • the application framework layer provides the printing framework, the WLAN service, and the Bluetooth service, and the kernel and bottom layers provide the WLAN and Bluetooth capabilities and basic communication protocols.
  • when a touch operation is received, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes the touch operation into the original input event (including touch coordinates, time stamp of the touch operation, etc.).
  • the original input events are stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking, as an example, that the touch operation is a tap operation and that the control corresponding to the tap operation is the control of the camera application icon, the camera application calls the interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 193.
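  • As an illustrative sketch only of the kernel-to-framework event path described above (not the actual Android or patent implementation; all type and function names below are hypothetical), the data involved could be modeled roughly as follows:

```kotlin
// Hypothetical model of the input path described above: a raw input event carries
// touch coordinates and a timestamp; the framework layer looks up which on-screen
// control the coordinates fall into.

data class RawInputEvent(val x: Float, val y: Float, val timestampMs: Long)

data class Control(val name: String, val left: Float, val top: Float,
                   val right: Float, val bottom: Float) {
    fun contains(e: RawInputEvent) = e.x in left..right && e.y in top..bottom
}

// Framework-layer step: identify the control corresponding to the input event.
fun dispatch(event: RawInputEvent, controls: List<Control>): Control? =
    controls.firstOrNull { it.contains(event) }

fun main() {
    val cameraIcon = Control("camera_icon", 100f, 200f, 200f, 300f)
    val event = RawInputEvent(150f, 250f, timestampMs = 123456L)
    println(dispatch(event, listOf(cameraIcon))?.name) // camera_icon -> start the camera app
}
```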
  • the first screen mentioned in the embodiment of the present application can be understood as a user interface in the electronic device of the embodiment of the present application.
  • Scenario One: the application scenario of capturing a single split screen (the starting positions of the multiple touch points when moving are all within the target sub-screen of the first screen).
  • FIG. 2A is a schematic diagram of a multi-window user interface provided by an embodiment of the present application.
  • the user interface includes a WeChat window 201, a Didi taxi window 202, and a news push window 203. The user interface is not limited to this and may also include more or fewer windows.
  • other applications may be instant messaging software such as QQ or MSN; entertainment video software such as iQiyi, Tencent Video, or Youku Video; or tool applications that come with the electronic device system, such as the calculator, calendar, and settings.
  • the user interface 20 may include: a window display area 201, a window display area 202, a window display area 203, a window size control 205, and a status bar 206.
  • the electronic device used by the local user is the electronic device 100. Specifically:
  • the window display areas 201-203 are used to respectively display the contents of different applications in the electronic device, and can accept and respond to the user's touch operation in the window display areas 201-203.
  • the window size control 206 is used to adjust the proportion of the window in the user interface.
  • the electronic device 100 can detect a touch operation (for example, a drag operation on the window size control 206) acting on the window size control 206, and in response to the operation, the electronic device 100 can move the position of the window size control 206 so that the sizes of the current windows 202 and 203 can be adjusted.
  • the status bar 207 may include: an operator indicator (for example, the operator's name "China Mobile"), one or more signal strength indicators of wireless fidelity (Wi-Fi) signals, one or more signal strength indicators of mobile communication signals (also called cellular signals), a time indicator, a battery status indicator, and so on.
  • the user has divided the first screen of the electronic device into three sub-screen areas.
  • the window display area 201 is used to display the WeChat main interface
  • the window display area 202 is used to display the Didi taxi interface
  • the window display area 203 is used to display the news push interface. If the user only needs to share Didi Taxi with WeChat contacts, without including the WeChat and news push interfaces, then in the prior art it is necessary to take a screenshot of all displayed content on the first screen and then put it into an image editor for cropping to obtain a picture that keeps only the Didi Taxi interface. This results in cumbersome screenshot operations, longer picture editing time, and a poor screenshot experience.
  • the electronic device 100 can receive a touch operation and determine the first touch operation; then, according to the determined starting positions of the multiple touch points included in the first touch operation, determine the target split screen; finally, capture and share the target split screen.
  • FIG. 2B is a schematic diagram of a user operation when recognizing a first touch operation according to an embodiment of the present application.
  • the electronic device 100 can detect the user's touch operation through the touch sensor 180K (for example, the touch sensor 180K recognizes a movement operation made by the user with multiple touch points on the window display area 202), and in response to this touch operation, the electronic device 100 can recognize the movement track of the movement operation.
  • different touch operations differ in dwell time, movement trajectory, number of touch points (for example, three touch points or four touch points), the positional relationship between the start and end positions of each touch point and the split screens, and so on; they may therefore produce different optical and/or acoustic effects and generate corresponding signals (including the pressure value generated by the movement operation on the electronic device, and so on).
  • the signals generated by different touch operations can be captured by sensors of the electronic device 100 (for example, touch sensors, acceleration sensors, impact sensors, vibration sensors, acoustic sensors, displacement sensors, speed sensors, etc.). Please refer to FIG. 2C.
  • FIG. 2C is a schematic diagram of a comparison relationship between a determined movement distance and a first preset distance threshold according to an embodiment of the present application.
  • as shown in FIG. 2C, the touch sensor 180K detects the movement distance of the user's touch operation to determine whether the movement distance is greater than the first preset distance threshold. If the movement distance is greater than the first preset distance threshold, the electronic device 100 may determine that the touch operation is the first touch operation.
  • the electronic device 100 can distinguish the specific touch operation used by the user from the captured signals, and then determine whether the user touch operation detected by the touch sensor 180K (that is, a movement operation in which the multiple touch points move a distance greater than or equal to the first preset distance threshold) is the first touch operation; if it is determined that the user's touch operation detected by the touch sensor 180K is the first touch operation, it is then determined whether the starting positions of the multiple touch points included in the first touch operation are all within the window display area 202, and if so, the current display content in the window display area 202 is captured as the first screenshot.
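  • As a rough, non-authoritative sketch of this check (the data types, the threshold value, and the helper names below are assumptions for illustration, not the patent's implementation), the first-touch-operation test could look like this:

```kotlin
// Sketch: decide whether a multi-touch gesture qualifies as the "first touch
// operation", i.e. at least two touch points each moved at least the first
// preset distance threshold. Types and values are hypothetical.
import kotlin.math.hypot

data class Point(val x: Float, val y: Float)
data class TouchTrack(val start: Point, val end: Point) {
    val distance: Float get() = hypot(end.x - start.x, end.y - start.y)
}

fun isFirstTouchOperation(tracks: List<TouchTrack>, firstDistanceThreshold: Float): Boolean =
    tracks.size >= 2 &&                                  // at least two touch points
    tracks.all { it.distance >= firstDistanceThreshold } // every point moved far enough

fun main() {
    val threeFingerSwipe = listOf(
        TouchTrack(Point(100f, 200f), Point(100f, 600f)),
        TouchTrack(Point(200f, 200f), Point(200f, 610f)),
        TouchTrack(Point(300f, 200f), Point(300f, 590f)),
    )
    println(isFirstTouchOperation(threeFingerSwipe, firstDistanceThreshold = 300f)) // true
}
```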
  • the electronic device 100 can indicate, through vibration and/or a user interface identifier (for example, the movement track in the touch operation is highlighted, the boundary is thickened, or a ghost image appears), that the touch sensor 180K has detected that the user's touch operation is the first touch operation.
  • the second preset distance threshold shown in FIG. 2C is used to determine whether to perform a long screenshot operation on the target split screen. The second preset distance threshold is greater than the first preset distance threshold, and the ratio between the second preset distance threshold and the target split-screen height is greater than the preset proportion threshold, where the target split-screen height can be understood as the side length of the target split screen that is in the same direction as the movement of the multiple touch points when they move within the target split screen.
  • the electronic device 100 may also recognize the user's operation on the first screen through an infrared sensor, which is not limited in the embodiment of the present application.
  • FIG. 2D is a user interface provided by an embodiment of the present application for capturing the display content of the target split screen as a picture through a first touch operation.
  • the electronic device 100 in the embodiment of the present application further includes a picture switching control 204, which is used to display a screenshot of the user interface, or of a certain part of the user interface, when a screenshot is taken, and can also be used to share pictures by clicking.
  • the electronic device 100 can detect a touch operation (such as a drag operation) acting on the picture switching control, and in response to the operation, the electronic device 100 can share the aforementioned picture or insert it into the window where the end position of the drag operation is located.
  • the electronic device 100 can recognize whether the user operation includes a movement operation in which multiple touch points move a distance greater than or equal to the first preset distance threshold, and determine the starting positions of the multiple touch points in the user touch operation detected by the touch sensor 180K, so as to determine whether the user operation is to capture the current display content of the target split screen or to capture the current display content of the entire user interface. When the starting positions of the multiple touch points detected by the touch sensor 180K are all within the window display area 202, the current display content in the window display area 202 corresponding to the user operation detected by the touch sensor 180K is captured as the first screenshot.
  • FIG. 2E is another user interface provided by an embodiment of the present application that captures the display content of the target split screen as a picture through a first touch operation.
  • in other embodiments, the electronic device 100 can likewise recognize whether the user operation includes a movement operation in which multiple touch points move a distance greater than or equal to the first preset distance threshold, and determine the starting positions of the multiple touch points in the user touch operation detected by the touch sensor 180K, so as to determine whether the user operation is to capture the current display content of the target split screen or of the entire user interface. When the starting positions of the multiple touch points detected by the touch sensor 180K are all within the window display area 202, not only is the current display content in the window display area 202 captured as the first screenshot, but the current display contents of the N split screens in the first screen are also respectively captured as corresponding screenshots, and a second screenshot is generated.
  • the second screenshot includes the screenshots corresponding to the N split screens, and the screenshots are arranged in the second screenshot in the manner in which the corresponding split screens are distributed in the first screen; the screenshots corresponding to the N split screens are all thumbnails that can each receive touch operations. It is understandable that when the starting positions of the detected multiple touch points are all within the window display area 202, it is necessary not only to capture the current display content of Didi Taxi in the window display area 202, but also to separately capture the current display content of WeChat in the window display area 201, the current display content of Didi Taxi in the window display area 202, and the current display content of the news push in the window display area 203. In the second screenshot, the thumbnail corresponding to the current display content of WeChat in the window display area 201, the thumbnail corresponding to the current display content of Didi Taxi in the window display area 202, and the thumbnail corresponding to the current display content of the news push in the window display area 203 are independent of one another, and each of them can receive the user's touch operation separately.
  • taking screenshots of all the split screens can prevent the electronic device from misjudging the user's operation intention, or from selecting the wrong target split screen because of the user's misoperation, which improves the accuracy of screenshots and the user's screenshot experience.
  • FIG. 2F to FIG. 2H are a set of user interfaces for confirming, saving, and sharing screenshot thumbnails provided by an embodiment of the present application.
  • the electronic device 100 receives a user operation acting on the touch sensor 180K, and the electronic device 100 can identify whether the user operation is a second touch operation.
  • the second touch operation is a click operation on a target screenshot thumbnail in the first screenshot or the second screenshot, where the target screenshot thumbnail is at least one of the screenshot thumbnails corresponding to the N split screens in the first screenshot or the second screenshot; for example, the target screenshot thumbnail is the thumbnail corresponding to the current display content of Didi Taxi in the window display area 202.
  • the electronic device 100 saves the target screenshot thumbnail to a gallery according to the second touch operation, and deletes all screenshots in the first screenshot or the second screenshot except the target screenshot thumbnail.
  • a drag operation on the thumbnail corresponding to the currently displayed content of Didi Taxi in the window display area 202 is received, where the drag operation moves the thumbnail through a touch point; according to the drag operation, the thumbnail corresponding to the currently displayed content of Didi Taxi in the window display area 202 is shared with or inserted into the WeChat application in the window display area 201.
  • FIG. 2I to FIG. 2K are another set of user interfaces for confirming and saving screenshot thumbnails provided by an embodiment of the present application.
  • the electronic device 100 receives a user operation acting on the touch sensor 180K, and the electronic device 100 can identify whether the user operation is a second touch operation
  • the second touch operation is a click operation on the target screenshot thumbnail in the first screenshot or the second screenshot, wherein there are multiple target screenshot thumbnails.
  • the electronic device 100 can, according to the second touch operation, control the multiple screenshots to be spliced into one screenshot. Because the shape formed by splicing the multiple screenshots together may be irregular, the electronic device can automatically fill in the missing parts so that the geometry of the spliced screenshot is regular, as shown in the figure.
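  • As an illustrative sketch only (the rectangle type, the canvas model, and the padding strategy below are assumptions, not the patent's algorithm), splicing irregularly placed screenshot thumbnails into one regular rectangular image could be approached like this:

```kotlin
// Sketch: compute the bounding box of the selected screenshot tiles and pad the
// uncovered area with a filler color so the stitched result is a regular rectangle.
// All types are hypothetical placeholders for real bitmap/canvas classes.

data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)
data class Tile(val bounds: Rect, val label: String)

data class StitchedImage(val canvas: Rect, val tiles: List<Tile>, val fillerColor: Int)

fun stitch(tiles: List<Tile>, fillerColor: Int = 0xFFFFFFFF.toInt()): StitchedImage {
    // Bounding box of all selected tiles: this is the final, regular rectangle.
    val canvas = Rect(
        left = tiles.minOf { it.bounds.left },
        top = tiles.minOf { it.bounds.top },
        right = tiles.maxOf { it.bounds.right },
        bottom = tiles.maxOf { it.bounds.bottom },
    )
    // Any area inside `canvas` not covered by a tile is filled with `fillerColor`.
    return StitchedImage(canvas, tiles, fillerColor)
}

fun main() {
    val appA = Tile(Rect(0, 0, 540, 800), "application A")
    val appC = Tile(Rect(540, 800, 1080, 1600), "application C")
    println(stitch(listOf(appA, appC)).canvas) // Rect(left=0, top=0, right=1080, bottom=1600)
}
```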
  • the electronic device may also determine the first touch operation from the multiple touch operations received through the touch sensor 180K, and determine whether the starting positions of the multiple touch points in the first touch operation when they move are all within the target split screen of the first screen. If so, the current display content in the target split screen is captured as the first screenshot. Therefore, after judging that the starting positions of the multiple touch points are all within the same split screen, a screenshot of an individual split screen among multiple windows can be obtained directly. This achieves the purpose of quickly and flexibly obtaining a screenshot of the target split-screen area, enabling the user to obtain the screenshot in a short time, without first capturing the entire display screen and then putting the full-screen screenshot into picture editing software to manually remove the information displayed by the other split screens, which simplifies the user's screenshot operation.
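  • A minimal sketch of this decision logic, assuming hypothetical rectangle and screenshot-target types (not the patent's actual implementation), might look like the following:

```kotlin
// Sketch: if the start positions of all touch points fall inside one split screen,
// capture only that split screen; otherwise capture the whole first screen.
// Rect/Point/SplitScreen are illustrative placeholders.

data class Point(val x: Float, val y: Float)
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    operator fun contains(p: Point) = p.x in left..right && p.y in top..bottom
}
data class SplitScreen(val id: Int, val bounds: Rect)

sealed class ScreenshotTarget {
    data class SingleSplitScreen(val splitScreen: SplitScreen) : ScreenshotTarget()
    object WholeScreen : ScreenshotTarget()
}

fun resolveScreenshotTarget(startPoints: List<Point>, splitScreens: List<SplitScreen>): ScreenshotTarget {
    // Target split screen: the one (if any) that contains every start position.
    val target = splitScreens.firstOrNull { s -> startPoints.all { it in s.bounds } }
    return if (target != null) ScreenshotTarget.SingleSplitScreen(target)
           else ScreenshotTarget.WholeScreen
}

fun main() {
    val screens = listOf(
        SplitScreen(201, Rect(0f, 0f, 540f, 1600f)),    // e.g. WeChat window
        SplitScreen(202, Rect(540f, 0f, 1080f, 800f)),  // e.g. Didi Taxi window
    )
    val starts = listOf(Point(600f, 100f), Point(700f, 120f), Point(800f, 140f))
    println(resolveScreenshotTarget(starts, screens)) // resolves to split screen 202
}
```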
  • Scenario 2: the application scenario of capturing the entire screen (the starting positions of the multiple touch points when they move are not all within the target sub-screen of the first screen).
  • FIG. 3A is a schematic diagram of another multi-window user interface provided by an embodiment of the present application.
  • the user interface includes a QQ window 301, a Didi taxi window 302, and a news push window 303. The user interface is not limited to this and may also include more or fewer windows.
  • other applications may be instant messaging software such as WeChat or MSN; entertainment video software such as iQiyi, Tencent Video, or Youku Video; or tool applications that come with the electronic device system, such as the calculator, calendar, and settings.
  • the user interface 30 may include: a window display area 301, a window display area 302, a window display area 303, a window size control 305, and a status bar 306.
  • the electronic device used by the local user is the electronic device 100. Specifically:
  • the window display areas 301-303 are used to respectively display the contents of different applications (or the electronic device 100) in the electronic device, and can accept and respond to the user's touch operation in the window display areas 301-303.
  • the window size control 305 is used to adjust the proportion of the window in the user interface.
  • the electronic device 100 can detect a touch operation (such as a drag operation on the window size control 305) acting on the window size control 305, and in response to the operation, the electronic device 100 can adjust the sizes of the current windows 302 and 303.
  • the status bar 306 may include: an operator indicator (for example, the operator's name "China Mobile"), one or more signal strength indicators of wireless fidelity (Wi-Fi) signals, one or more signal strength indicators of mobile communication signals (also called cellular signals), a time indicator, and a battery status indicator.
  • the user has divided the first screen of the electronic device into three sub-screen areas.
  • the window display area 301 is used to display the QQ main interface
  • the window display area 302 is used to display the Didi taxi interface
  • the window display area 303 is used to display the news push interface. If the user needs to share the entire screen with QQ contacts, including both the QQ and news push interfaces, then in addition to the prior-art combined-button screenshot or knuckle screenshot, the screenshot can also be performed through the first touch operation.
  • the starting positions of the multiple touch points of the first touch operation when they move are in multiple split-screen windows, for example, in the QQ window and the Didi taxi window at the same time.
  • the electronic device 100 may receive a touch operation and determine the first touch operation; then, according to the determined starting positions of the multiple touch points included in the first touch operation, determine to capture the user interface 30 to obtain a third screenshot; finally, save and share the third screenshot.
  • FIG. 3B is another schematic diagram of a user operation when recognizing a first touch operation according to an embodiment of the present application.
  • the electronic device 100 shown in FIG. 3B further includes a picture switching control 304, which is used to display a screenshot of the electronic device 100 when a screenshot is taken, and can also be used to share a picture by clicking.
  • the electronic device 100 can detect a touch operation (such as a drag operation) acting on the picture switching control, and in response to the operation, the electronic device 100 can share the aforementioned picture or insert it into the window where the end position of the drag operation is located.
  • the electronic device 100 can detect the user's touch operation through the touch sensor 180K (for example, the touch sensor 180K recognizes a movement operation made by the user with multiple touch points on a window display area), and in response to this touch operation, the electronic device 100 can recognize the movement track of the movement operation.
  • different touch operations differ in dwell time, movement trajectory, number of touch points (for example, three touch points, four touch points, or more), the positional relationship between the start and end positions of each touch point and the split screens, and so on; they may therefore produce different optical and/or acoustic effects and generate corresponding signals (including the pressure value generated by the movement operation on the electronic device, and so on).
  • the signals generated by different touch operations can be captured by sensors of the electronic device 100 (for example, touch sensors, acceleration sensors, impact sensors, vibration sensors, acoustic sensors, displacement sensors, speed sensors, etc.).
  • the touch sensor 180K detects the movement distance of the user's touch operation and determines whether the movement distance is greater than the first preset distance threshold. If the movement distance is greater than the first preset distance threshold, the electronic device 100 can determine that the touch operation is the first touch operation. Therefore, the electronic device 100 can distinguish the specific touch operation used by the user from the captured signals, and then determine whether the user touch operation detected by the touch sensor 180K (that is, a movement operation in which the multiple touch points move a distance greater than or equal to the first preset distance threshold) is the first touch operation; if it is determined that the user's touch operation detected by the touch sensor 180K is the first touch operation, it is then determined whether the starting positions of the multiple touch points included in the first touch operation are all within the target split screen of the first screen.
  • the electronic device 100 can indicate, through vibration and/or a user interface identifier (for example, the movement track in the touch operation is highlighted, the boundary is thickened, or a ghost image appears), that the touch sensor 180K has detected that the user's touch operation is the first touch operation.
  • the electronic device 100 may also recognize the user's operation on the first screen through an infrared sensor, which is not limited in the embodiment of the present application.
  • FIG. 3C and FIG. 3D are a set of user interfaces provided by an embodiment of the present application for capturing and sharing the displayed content on the first screen as a picture through a first touch operation.
  • the electronic device 100 can respond to a user operation acting on the touch sensor 180K as in the embodiment corresponding to FIG. 2B. The electronic device 100 can recognize whether the user operation includes a movement operation in which multiple touch points move, within the user interface, a distance greater than or equal to the first preset distance threshold, and determine the starting positions of the multiple touch points in the user's touch operation detected by the touch sensor 180K, so as to determine whether the user operation is to capture the current display content in the target split screen or to capture the current display content of the entire user interface. When the starting positions of the multiple touch points detected by the touch sensor 180K when they move are not all within the same window display area, that is, when the starting positions of the motion tracks corresponding to the multiple touch points are in multiple split screens, the current display content in the user interface corresponding to the user operation detected by the touch sensor 180K is captured as the third screenshot.
  • Fig. 3D is a user interface for confirming, saving and sharing screenshot thumbnails provided by an embodiment of the present application.
  • FIG. 3E is another set of user interfaces provided by an embodiment of the present application that captures the displayed content on the first screen as a picture through a first touch operation.
  • when the touch sensor 180K detects the first touch operation, that is, when the starting points of the three-finger slide cross multiple windows, the electronic device 100 will take a screenshot of the content of the multiple windows. For example, if the starting positions shown in picture (1) in FIG. 3E cross application A and application B, the electronic device 100 will take a screenshot of the current display content of all the split screens in the first screen, that is, take a screenshot of both application A and application B.
  • the third screenshot obtained after capturing the entire screen is shown in picture (2) in FIG. 3E.
  • the electronic device first determines whether the starting positions of the multiple touch points when moving in the first touch operation are all within the target split screen of the first screen. If not, that is, if the starting positions of the multiple touch points in the first touch operation are within the areas of at least two split screens, the current display content of all the split screens in the first screen is captured as the third screenshot. Therefore, if the user needs to capture the display content of the entire screen, the user only needs to make the starting positions of the multiple touch points fall within at least two split-screen areas to take a screenshot of the first screen, which avoids the problems of cumbersome screenshot operations and long screenshot time and improves the user's screen capture experience.
  • Scenario 3: the application scenario of taking a long screenshot.
  • FIG. 4A is a schematic diagram of another multi-window user interface provided by an embodiment of the present application.
  • the user interface includes a WeChat window 401, a Didi taxi window 402, and a news push window 403. The user interface is not limited to this and may also include more or fewer windows.
  • other applications may be instant messaging software such as QQ or MSN; entertainment video software such as iQiyi, Tencent Video, or Youku Video; or tool applications that come with the electronic device system, such as the calculator, calendar, and settings.
  • the user interface 40 may include: a window display area 401, a window display area 402, a window display area 403, a window size control 404, and a status bar 405.
  • the electronic device used by the local user is the electronic device 100. Specifically:
  • the window display areas 401-403 are used to respectively display the contents of different applications (or the electronic device 100) in the electronic device, and can accept and respond to the user's touch operation in the window display areas 401-403.
  • the status bar 405 may include: an operator indicator (for example, the operator's name "China Mobile"), one or more signal strength indicators of wireless fidelity (Wi-Fi) signals, one or more signal strength indicators of mobile communication signals (also called cellular signals), a time indicator, and a battery status indicator.
  • the user has divided the first screen of the electronic device into three split-screen areas.
  • the window display area 401 is used to display the WeChat main interface
  • the window display area 402 is used to display the Didi taxi interface
  • the window display area 403 is used to display the news push interface. If the user needs to share a long screenshot of the window display area 402, which displays the Didi Taxi interface, with a WeChat contact, without including the WeChat and news push interfaces, this can be implemented in the following three operation ways.
  • the electronic device 100 can first determine which user operation triggers the long screenshot operation. If the multiple touch points, when moving within the target split screen, move a distance greater than or equal to a second preset distance threshold, where the second preset distance threshold is greater than the first preset distance threshold and the ratio between the second preset distance threshold and the target split-screen height is greater than the preset ratio threshold, then a long screenshot operation is performed on the split-screen area of the target split screen; or, if within the second time period after the first touch operation is received, a first touch operation on the target split screen is received again, a long screenshot operation is performed on the split-screen area of the target split screen; or, if the first touch operation includes four touch points, a long screenshot operation is performed on the split-screen area of the target split screen.
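  • The three trigger conditions above could be sketched roughly as follows; this is a non-authoritative illustration, and the gesture record type, thresholds, and time-window flag are assumptions rather than the patent's implementation:

```kotlin
// Sketch: decide whether a long-screenshot operation should be triggered for the
// target split screen, based on the three conditions described above.
// All fields and values are illustrative assumptions.

data class GestureInfo(
    val touchPointCount: Int,
    val moveDistanceInTarget: Float,        // distance moved inside the target split screen
    val repeatedWithinSecondPeriod: Boolean // first touch operation received again in time
)

fun shouldTakeLongScreenshot(
    g: GestureInfo,
    firstDistanceThreshold: Float,
    secondDistanceThreshold: Float,
    targetSplitScreenHeight: Float,
    presetRatioThreshold: Float
): Boolean {
    // Condition 1: moved beyond the second threshold, which must itself be larger
    // than the first threshold and large relative to the target split-screen height.
    val byDistance = secondDistanceThreshold > firstDistanceThreshold &&
        secondDistanceThreshold / targetSplitScreenHeight > presetRatioThreshold &&
        g.moveDistanceInTarget >= secondDistanceThreshold
    // Condition 2: the first touch operation is repeated within the second time period.
    val byRepetition = g.repeatedWithinSecondPeriod
    // Condition 3: the first touch operation uses four touch points.
    val byFourFingers = g.touchPointCount == 4
    return byDistance || byRepetition || byFourFingers
}

fun main() {
    val gesture = GestureInfo(touchPointCount = 3, moveDistanceInTarget = 900f,
                              repeatedWithinSecondPeriod = false)
    println(shouldTakeLongScreenshot(gesture, 300f, 800f, 1600f, 0.4f)) // true (by distance)
}
```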
  • when the multiple touch points of the first touch operation move within the target split screen by a distance greater than or equal to the second preset distance threshold, the electronic device performs a long screenshot operation on the target split screen.
  • FIG. 4B and FIG. 4C are schematic diagrams of a set of long screenshot user interfaces provided by an embodiment of the present application.
  • the electronic device 100 further includes a window size control 404, and the window size control 404 is used to adjust the proportion of the window in the user interface.
  • the electronic device 100 can detect a touch operation (such as a drag operation on the window size control 404) acting on the window size control 404, and in response to the operation, the electronic device 100 can adjust the sizes of the current windows 401 and 402.
  • when the multiple touch points move a distance greater than or equal to the second preset distance threshold while moving within the target split screen, the electronic device performs a long screenshot operation on the target split screen, where the screenshot range of the long screenshot may include, in addition to the current display content of the target split screen, a next part of the target split screen whose length is proportional to the part of the movement distance that exceeds the second preset threshold when the multiple touch points move within the target split screen.
  • the second preset distance threshold is greater than the first preset distance threshold, and the ratio between the second preset distance threshold and the target split-screen height is greater than the preset ratio threshold, where the height of the target split screen can be understood as the side length that is consistent with the moving direction of the multiple touch points when they move within the target split screen.
  • FIG. 4D and FIG. 4E are schematic diagrams of another set of long-screenshot user interfaces provided by an embodiment of the present application.
  • when the electronic device 100 detects the first touch operation again through the touch sensor 180K within the second time period after the first touch operation was detected by the touch sensor 180K, the electronic device 100 performs a long screenshot operation on the target split screen, where the screenshot range of the long screenshot may include not only the current display content of the target split screen but also content determined according to the number of first touch operations consecutively received within the second time period. For example, every time the first touch operation is received within the second time period, the display content of the next page of the target split screen can be added to the current display content.
  • the electronic device performs a long screen capture operation on the target split screen.
  • FIG. 4F and FIG. 4G are schematic diagrams of a multi-window user interface provided by an embodiment of the present application.
  • when the electronic device 100 detects through the touch sensor 180K that the first touch operation includes four touch points, the electronic device performs a long screenshot operation on the target split screen. The screenshot range of the long screenshot may include, in addition to the current display content of the target split screen, a number of next pages of display content of the target split screen proportional to the time the four touch points stay at the end position of the movement operation in which the four touch points move within the first screen a distance greater than or equal to the first preset distance threshold; or, the screenshot range of the long screenshot may include, in addition to the current display content of the target split screen, a number of next pages of display content of the target split screen proportional to the pressure value of the four touch points at the end position of the movement operation. For example, when the user slides down with four fingers, the longer the stay at the end position, the more display content the final long screenshot contains; as another example, when the user slides down with four fingers, the greater the pressure value at the end position, the more display content the final long screenshot contains.
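  • A simplified, assumption-laden sketch of this proportional rule (the scaling constants and function names below are invented for illustration and are not part of the patent) could look like this:

```kotlin
// Sketch: map how long the four fingers dwell at the end position (or how hard
// they press) to the number of additional pages appended to the long screenshot.
// The scaling constants are arbitrary illustrative choices.

fun extraPagesByDwell(dwellMillis: Long, millisPerPage: Long = 500L): Int =
    (dwellMillis / millisPerPage).toInt()

fun extraPagesByPressure(pressure: Float, pressurePerPage: Float = 0.25f): Int =
    (pressure / pressurePerPage).toInt()

fun main() {
    println(extraPagesByDwell(dwellMillis = 1600))   // 3 extra pages
    println(extraPagesByPressure(pressure = 0.9f))   // 3 extra pages
}
```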
  • the electronic device of this application can be a smart terminal, a vehicle-mounted smart terminal, a smart TV, a wearable device, or the like, that is, a touch-sensitive smart terminal whose display area can be displayed in multiple windows, for example, a smart terminal with a display screen, a touchable double-sided screen terminal, a folding-screen terminal, a touch computer, a tablet, a touch TV, or a full-screen mobile phone.
  • the long screenshot instruction can be triggered by different operation modes, and the long screenshot operation on the target split screen can be executed in a variety of touch modes, which avoids the problems of cumbersome screenshot operations and long user screenshot time and improves the user's screen capture experience.
  • FIG. 5A is a schematic flowchart of a screen capture method provided by an embodiment of the present application
  • FIG. 5B is a schematic flowchart of another screen capture method provided by an embodiment of the present application.
  • the following description takes the electronic device as the execution body.
  • the method may include the following steps S501 to S503, as shown in FIG. 5A; and may also include steps S504 to S508, as shown in FIG. 5B.
  • Step S501 Determine the first touch operation.
  • the electronic device determines a first touch operation, where the first touch operation is a movement operation in which a plurality of touch points move within the first screen a distance greater than or equal to a first preset distance threshold, and the first screen includes N split screens, N being a positive integer greater than 1. It is understandable that the electronic device can receive multiple touch operations and determine the first touch operation from among them, that is, the movement operation in which the multiple touch points move within the first screen a distance greater than or equal to the first preset distance threshold, where the multiple touch points are at least two touch points, and the first preset distance threshold needs to reach a preset ratio of the shortest side length of the smallest split screen among the N split screens included in the first screen.
  • the first preset distance threshold needs to reach the preset ratio of the shortest side length of the smallest split screen among the N split screens included in the first screen, so as to prevent the screenshot operation, that is, the first touch operation, from failing to be triggered; for example, the first preset distance threshold is one third of the shortest side length of the smallest split screen.
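  • As a small illustrative sketch (the split-screen size type and helper are assumptions; the one-third ratio is taken from the example above), the first preset distance threshold could be derived like this:

```kotlin
// Sketch: derive the first preset distance threshold as a preset ratio (here 1/3,
// per the example above) of the shortest side of the smallest split screen.

data class SplitScreenSize(val width: Float, val height: Float) {
    val shortestSide: Float get() = minOf(width, height)
    val area: Float get() = width * height
}

fun firstDistanceThreshold(splitScreens: List<SplitScreenSize>, presetRatio: Float = 1f / 3f): Float {
    val smallest = splitScreens.minByOrNull { it.area } ?: error("no split screens")
    return smallest.shortestSide * presetRatio
}

fun main() {
    val screens = listOf(
        SplitScreenSize(540f, 1600f),
        SplitScreenSize(540f, 800f),   // smallest split screen, shortest side = 540
    )
    println(firstDistanceThreshold(screens)) // 180.0
}
```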
  • Step S502 Determine whether the starting positions of the multiple touch points when moving in the first touch operation are all within the target sub-screen of the first screen.
  • the electronic device determines whether the starting positions of the multiple touch points when moving in the received first touch operation are all within the target split screen of the first screen, where the target split screen is any one of the N split screens.
  • the multiple touch points in the first touch operation can be produced by the user touching with fingers, or the user can operate with the help of an external touch tool, where the touch tool includes but is not limited to a stylus, touch gloves, and the like.
  • the number of touch points can be two or more than two touch points.
  • the touch operations at the multiple touch points may be performed at the same time.
  • if the multiple touch points, when moving within the target split screen, move a distance greater than or equal to the second preset distance threshold, where the second preset distance threshold is greater than the first preset distance threshold and the ratio between the second preset distance threshold and the target split-screen height is greater than the preset ratio threshold, a long screenshot operation is performed on the split-screen area of the target split screen; or, if within the second time period after the first touch operation is received, a first touch operation on the target split screen is received again, the long screenshot operation is performed on the split-screen area of the target split screen; or, if the first touch operation includes four touch points, the long screenshot operation is performed on the split-screen area of the target split screen.
  • the embodiments of the present application also support the user performing a long screenshot operation on a single window. If the user uses the three-finger swipe gesture to take a screenshot, the embodiment of this application determines whether the user intends to take a long screenshot by judging the number of consecutive three-finger swipes within a certain period of time; or determines whether the user needs a long screenshot by judging the end position of the user's screenshot gesture (the three-finger slide), that is, the position where the three fingers leave the screen; or, when the user slides down with four fingers, the long screenshot operation can also be triggered.
  • when the multiple touch points move a distance greater than or equal to the second preset distance threshold while moving within the target split screen, the electronic device performs a long screenshot operation on the target split screen.
  • the screenshot range of the long screenshot may include, in addition to the current display content of the target split screen, a next part of the target split screen whose length is proportional to the part of the movement distance of the multiple touch points within the target split screen that exceeds the second preset threshold.
  • FIG. 5C is a schematic diagram of an operation based on a long screenshot of the aforementioned electronic device 100 provided by an embodiment of the present application.
  • the second preset distance threshold is greater than the first preset distance threshold, and the ratio between the second preset distance threshold and the target split-screen height is greater than the preset ratio threshold, where the height of the target split screen can be understood as the side length consistent with the moving direction of the multiple touch points when they move within the target split screen. For example, if the starting positions of the multiple touch points when moving within the target split screen are at the top of the target split screen, and the end positions are at the bottom of the target split screen, a long screenshot operation is performed on the target split screen.
  • if within the second time period after the first touch operation is received, a first touch operation on the target split screen is received again, the electronic device performs a long screenshot operation on the target split screen, where the screenshot range of the long screenshot may include not only the current display content of the target split screen but also content determined according to the number of first touch operations consecutively received within the second time period. For example, every time the first touch operation is received within the second time period, the display content of the next page of the target split screen can be added to the current display content. It is understandable that within the second time period, the more first touch operations are received, the more display content of the target split screen is included in the long screenshot, until all the display content of the target split screen has been captured.
  • if the first touch operation includes four touch points, the electronic device performs a long screenshot operation on the target split screen. The screenshot range of the long screenshot may include, in addition to the current display content of the target split screen, a number of next pages of display content of the target split screen proportional to the time the four touch points stay at the end position of the movement operation in which the four touch points move within the first screen a distance greater than or equal to the first preset distance threshold; or, the screenshot range of the long screenshot may include, in addition to the current display content of the target split screen, a number of next pages of display content of the target split screen proportional to the pressure value of the four touch points at the end position of the movement operation. For example, when the user slides down with four fingers, the longer the stay at the end position, the more display content the final long screenshot contains; as another example, when the user slides down with four fingers, the greater the pressure value at the end position, the more display content the final long screenshot contains.
  • Step S503 If the starting positions of the multiple touch points when they move are all within the target sub-screen of the first screen, capture the current display content in the target sub-screen as a first screenshot.
  • FIG. 5D is a schematic diagram of a set of user interfaces, provided in an embodiment of the application, in which a single split screen is captured and shared within the current split-screen layout. Picture (1) in FIG. 5D depicts the three-finger slide operation (that is, the first touch operation) in the target window (that is, application B); picture (2) depicts that, when the three fingers slide within the target window, only the current interface of the target window is captured to obtain the first screenshot, where the first screenshot can receive a touch operation (for example, a drag operation), and the first screenshot can also be a screenshot thumbnail; picture (3) depicts the first screenshot being shared to application C after receiving the drag operation.
  • picture (4) summarizes the method flow described in the first three pictures, that is, for example: when a user uses a folding-screen device to perform multi-window operations, if the user uses a three-finger swipe gesture to take a screenshot, this application determines the user's intention in advance by determining the starting position of the user's screenshot gesture (the three-finger slide). If the starting points of the three-finger slide are at the top of a single window, a screenshot of that window is taken.
  • FIG. 5E is a schematic diagram of a set of user interfaces, provided in an embodiment of the present application, in which the entire screen is captured and shared within the current split-screen layout. Picture (1) in FIG. 5E depicts the user performing a three-finger slide operation (that is, the first touch operation) across at least two windows (that is, application A and application B); picture (2) depicts that, after the three fingers slide across at least two windows, the display content of the current display interface of the first screen is captured to obtain the third screenshot; picture (3) depicts the screenshot thumbnail corresponding to application C being shared to application A after receiving the drag operation, where the end position of the drag operation is the split screen where application A is located.
  • Step S504 If the starting positions of the multiple touch points when moving are all within the target sub-screen of the first screen, capture the current display contents of the N sub-screens in the first screen as corresponding screenshots, and generate a second screenshot.
  • the method further includes: the second screenshot generated by the electronic device includes the screenshots corresponding to the N split screens, the screenshots are arranged in the second screenshot in the manner in which the corresponding split screens are distributed in the first screen, and the screenshots corresponding to the N split screens are all thumbnails that are permitted to receive touch operations respectively.
  • if the electronic device determines that the starting positions of the multiple touch points when moving in the first touch operation are all within the target split screen of the first screen, it will not only capture the current display content in the target split screen as the first screenshot, but also respectively capture the current display contents of the N split screens in the first screen as corresponding screenshots and generate a second screenshot; the second screenshot includes the screenshots corresponding to the N split screens, the screenshots are arranged in the second screenshot in the manner in which the corresponding split screens are distributed in the first screen, and the screenshots corresponding to the N split screens are all thumbnails that are permitted to receive touch operations respectively.
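  • A rough sketch of composing such a second screenshot (hypothetical types only; the layout simply reuses each split screen's on-screen bounds, and the hit-test helper is an assumption) might look like this:

```kotlin
// Sketch: build a "second screenshot" as a set of per-split-screen thumbnails,
// each placed exactly where its split screen sits in the first screen, and each
// able to receive touch operations independently. Types are illustrative.

data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    fun contains(x: Int, y: Int) = x in left..right && y in top..bottom
}
data class Thumbnail(val splitScreenId: Int, val bounds: Rect)
data class SecondScreenshot(val thumbnails: List<Thumbnail>)

// Arrange one thumbnail per split screen, mirroring the split-screen layout.
fun composeSecondScreenshot(splitScreenBounds: Map<Int, Rect>): SecondScreenshot =
    SecondScreenshot(splitScreenBounds.map { (id, rect) -> Thumbnail(id, rect) })

// Each thumbnail can be hit-tested separately, so it can receive its own touch operation.
fun thumbnailAt(screenshot: SecondScreenshot, x: Int, y: Int): Thumbnail? =
    screenshot.thumbnails.firstOrNull { it.bounds.contains(x, y) }

fun main() {
    val layout = mapOf(
        201 to Rect(0, 0, 540, 1600),      // e.g. WeChat window
        202 to Rect(540, 0, 1080, 800),    // e.g. Didi Taxi window
        203 to Rect(540, 800, 1080, 1600), // e.g. news push window
    )
    val second = composeSecondScreenshot(layout)
    println(thumbnailAt(second, 700, 400)?.splitScreenId) // 202
}
```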
  • Figure 5F is another set of schematic user-interface diagrams, provided in an embodiment of this application, of capturing a single split screen and sharing it within the current split-screen layout. Picture (1) in Figure 5F shows the three-finger slide operation (that is, the first touch operation) in the target window (that is, application B); picture (2) shows that when the three fingers slide downward within the target window, the display content of all split screens of the current display interface of the first screen is captured, producing the second screenshot.
  • The second screenshot includes the screenshot thumbnails corresponding to application A, application B, and application C, and these three thumbnails can each independently receive touch operations (such as drag operations). Picture (3) shows the screenshot thumbnail corresponding to application C receiving the drag operation; picture (4) shows that thumbnail being shared to application B after it receives the drag operation. It should be noted that the first screenshot and the second screenshot float above the first screen, and there is no intersection between the first screenshot and the second screenshot.
  • Alternatively, the electronic device receives a knuckle screenshot operation or a key-combination screenshot operation. According to the knuckle screenshot instruction or the key-combination screenshot instruction, the current display content of every split screen in the first screen is captured as a corresponding screenshot, and a second screenshot is generated. The second screenshot includes the screenshots corresponding to the N split screens, arranged within the second screenshot in the same way the corresponding split screens are distributed in the first screen, and each of the screenshots corresponding to the N split screens is a thumbnail permitted to receive touch operations independently.
  • Figure 5G is a further set of schematic user-interface diagrams, provided in an embodiment of this application, of capturing a single split screen and sharing it within the current split-screen layout. Picture (1) in Figure 5G shows the electronic device receiving the knuckle screenshot operation or the key-combination screenshot operation; picture (2) shows that, after receiving that operation, the electronic device captures the display content of all split screens of the current display interface of the first screen, producing the second screenshot, where the second screenshot includes the screenshot thumbnails corresponding to application A, application B, and application C, and these three thumbnails can each independently receive touch operations (for example, drag operations). Picture (3) shows the screenshot thumbnail corresponding to application C receiving the drag operation; picture (4) shows that thumbnail being shared to application B after it receives the drag operation.
  • Step S505: Receive a second touch operation, where the second touch operation is a tap on a target screenshot thumbnail in the first screenshot or the second screenshot, and the target screenshot thumbnail is at least one of the screenshot thumbnails corresponding to the N split screens in the first screenshot or the second screenshot.
  • The target screenshot thumbnails selected by the second touch operation may be several of the screenshot thumbnails corresponding to the N split screens in the first screenshot or the second screenshot. In that case the selected screenshots can be spliced into a single screenshot according to the second touch operation, and if the shape of the spliced screenshots is irregular, the electronic device can automatically supplement the missing parts so that the geometry of the spliced image is regular.
  • Step S506: According to the second touch operation, save the target screenshot thumbnail to the gallery, and delete all screenshots in the first screenshot or the second screenshot except the target screenshot thumbnail.
  • Figure 5H is another set of schematic user-interface diagrams, provided in an embodiment of the present application, of capturing the entire screen and sharing it within the current split-screen layout. Picture (1) in Figure 5H shows the electronic device receiving the knuckle screenshot operation or the key-combination screenshot operation; picture (2) shows that, because no second touch operation was received during the first time period after that operation, all the screenshots in the second screenshot are spliced into one screenshot, which includes the screenshot thumbnails corresponding to application A, application B, and application C.
  • Picture (3) shows the spliced screenshot thumbnail receiving a drag operation, and picture (4) shows the spliced screenshot thumbnail being shared to application C after it receives the drag operation.
  • Picture (5) summarizes the method flow of the first four pictures. For example, when a user performs multi-window operations on a folding-screen device and takes a screenshot with a knuckle gesture or by pressing a key combination, the current screenshot scheme captures every window on the current screen. This application presents the thumbnails of all captured windows, arranged as a mosaic, in a corner of the screen, and the user can later select among the screenshots on demand. If the user makes no selection within a certain period of time, the multi-window screenshot thumbnails are spliced into a single thumbnail, and all screenshots are spliced into one image and saved to the gallery.
  • Step S507: Receive a drag operation on the target screenshot thumbnail, where the drag operation is an operation of moving the target screenshot thumbnail through at least one touch point.
  • For example, when a user performs multi-window operations on a folding-screen device and wants to insert a screenshot of application B into application C, the user only needs to slide three fingers downward from the top of the window of application B to complete the screenshot of application B's interface. Dragging the screenshot thumbnail of application B and dropping it onto the window of application C inserts the screenshot into application C, completing the content sharing. Examples of this drag operation are the operations shown in picture (2) of Figure 5D, picture (2) of Figure 5E, picture (3) of Figure 5F, picture (3) of Figure 5G, and picture (3) of Figure 5H above.
  • Step S508: According to the drag operation, control the target screenshot thumbnail to be shared with, or inserted into, the split-screen area in which the end position of the drag operation is located. That is, the electronic device may, according to the drag operation, share or insert the target screenshot thumbnail into the split-screen area containing the end position of the drag movement (see the hit-test sketch after this list).
  • Examples are the situations shown in picture (3) of Figure 5D, picture (3) of Figure 5E, picture (4) of Figure 5F, picture (4) of Figure 5G, and picture (4) of Figure 5H above.
  • Figure 5I is a set of schematic user-interface diagrams, from a practical application provided in an embodiment of the present application, of capturing a single split screen and sharing it to an application included in the current interface. Picture (1) in Figure 5I shows the three-finger slide operation in the split screen where the gallery is located; picture (2) shows that after the three fingers slide within the gallery's split screen, the display content of the gallery is captured as the first screenshot, and after the first screenshot is dragged and shared to the split screen where the phone contact is located, the screenshot can be inserted into the phone contact.
  • In summary, the electronic device may determine the first touch operation from the multiple received touch operations and judge whether the starting positions of the multiple touch points in the first touch operation are all within the target split screen of the first screen. If they are, the current display content in the target split screen is captured as the first screenshot, and the current display content of the first screen is also captured to generate the second screenshot; if they are not, the current display content of the first screen is captured as the third screenshot.
  • In the several embodiments provided in this application, it should be understood that the disclosed device may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the above-mentioned units is only a logical function division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The displayed or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
  • The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
  • The above-mentioned integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • If the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application essentially, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that enable a computer device (which may be a personal computer, a server, or a network device, and specifically may be a processor in a computer device) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium may include media that can store program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, abbreviation: ROM), or a random access memory (Random Access Memory, abbreviation: RAM).
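The capture-selection logic summarized in the bullets above (steps S503/S504 and Figures 5D-5F) comes down to one geometric test: do all touch-down points of the multi-finger swipe fall inside a single split screen? The Kotlin fragment below is a minimal, hypothetical sketch of that test and the resulting capture choice. The `Rect`, `Point`, and `SplitScreen` types and the `captureRegion` hook are illustrative assumptions, not APIs defined by this application.

```kotlin
// Hypothetical sketch of the capture decision in steps S503/S504. The Rect, Point and
// SplitScreen types and the captureRegion() hook are illustrative assumptions only.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    fun contains(x: Int, y: Int) = x in left..right && y in top..bottom
}

data class Point(val x: Int, val y: Int)

data class SplitScreen(val id: Int, val bounds: Rect)

fun onFirstTouchOperation(
    startPoints: List<Point>,           // touch-down positions of the multi-finger swipe
    splitScreens: List<SplitScreen>,    // the N split screens making up the first screen
    captureRegion: (Rect) -> ByteArray  // assumed platform hook that grabs one screen region
): List<ByteArray> {
    // Look for a single split screen that contains every starting point (the target split screen).
    val target = splitScreens.firstOrNull { split ->
        startPoints.all { split.bounds.contains(it.x, it.y) }
    }
    return if (target != null) {
        // All fingers started inside one window: capture it as the first screenshot, and also
        // capture every split screen so the second (thumbnail) screenshot can be assembled.
        listOf(captureRegion(target.bounds)) + splitScreens.map { captureRegion(it.bounds) }
    } else {
        // The starting points span at least two windows: capture the whole first screen
        // as the third screenshot.
        val whole = Rect(
            splitScreens.minOf { it.bounds.left },
            splitScreens.minOf { it.bounds.top },
            splitScreens.maxOf { it.bounds.right },
            splitScreens.maxOf { it.bounds.bottom }
        )
        listOf(captureRegion(whole))
    }
}
```

On a real device the capture itself would go through the platform's screenshot service; the point here is only the start-position test that selects between the first, second, and third screenshots.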
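Steps S507/S508 then reduce to a hit test on the drag's end position. A hedged continuation of the sketch above, where `shareInto` stands in for whatever share or insert path the application in the receiving split screen exposes:

```kotlin
// Hypothetical continuation of the sketch above: dispatching the dragged thumbnail.
// shareInto() stands in for the share/insert path of the receiving application.
fun onThumbnailDragEnd(
    endPoint: Point,
    splitScreens: List<SplitScreen>,
    targetThumbnail: ByteArray,
    shareInto: (SplitScreen, ByteArray) -> Unit
) {
    // The split screen whose bounds contain the drag's end position receives the share or insert.
    val destination = splitScreens.firstOrNull { it.bounds.contains(endPoint.x, endPoint.y) }
    if (destination != null) {
        shareInto(destination, targetThumbnail)
    }
    // If the drag ends outside every split screen, the thumbnail is simply left where it is.
}
```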

Abstract

本申请实施例公开了一种屏幕截取方法及相关设备,其中一种屏幕截取方法,包括:确定第一触控操作,所述第一触控操作为多个触控点在第一屏幕内移动了大于或等于第一预设距离阈值的移动操作,所述第一屏幕包括N个分屏,其中,N为大于1的正整数;判断所述第一触控操作中所述多个触控点移动时的起始位置是否均在所述第一屏幕的目标分屏内,所述目标分屏为所述N个分屏中的任意一个;若所述多个触控点移动时的起始位置均在所述第一屏幕的目标分屏内,将所述目标分屏内的当前显示内容截取为第一截图。通过本申请实施例提供的方法,可以使用户在电子设备显示屏多窗口的情况下更便捷的截取单独的小窗口。

Description

一种屏幕截取方法及相关设备
本申请要求于2019年09月06日提交中国专利局、申请号为201910846710.9、申请名称为“一种屏幕截取方法及相关设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电子设备技术领域,尤其涉及一种屏幕截取方法及相关设备。
背景技术
随着网络条件日益成熟,用户设备的功能越来越丰富,电子设备中的截屏功能为在日常使用频率日益增高。在截屏时,人们可以使用指关节触控、组合按键截取等等操作实现对终端的整个屏幕的显示内容进行截图。但是,随着电子设备多窗口、多任务的使用情况在用户日常生活中越来越频繁的出现,对多窗口中的单个窗口的屏幕内容以及多窗口内所有窗口的屏幕内容的获取以及分享也渐渐成为了高频日常任务。然而,现有的截屏操作大多是针对终端设备的一整个显示屏进行截图操作,针对现有的一个显示屏的多个分屏窗口,甚至是双面屏终端,折叠屏终端的两个显示窗口中单独截取一个窗口时,需要先将整个显示屏幕截取后,再需要用户在图片编辑软件中手动操作去掉截屏图片中其他屏幕内显示的信息,最后获得单独一个屏幕的截图,这个过程繁琐复杂,降低了截屏效率,同时,由于需要借助图片编辑软件,也降低了截屏的灵活性。
那么如何使用户在多窗口下,更便捷的截取窗口,是亟待解决的问题。
发明内容
鉴于上述问题,提出了本申请以便提供一种克服上述问题或者至少部分地解决上述问题的一种屏幕截取方法及相关设备。
第一方面,本申请实施例提供了一种屏幕截取方法,可包括:
确定第一触控操作,第一触控操作为多个触控点在第一屏幕内移动了大于或等于第一预设距离阈值的移动操作,第一屏幕包括N个分屏,其中,N为大于1的正整数;
判断第一触控操作中多个触控点移动时的起始位置是否均在第一屏幕的目标分屏内,目标分屏为N个分屏中的任意一个;
若多个触控点移动时的起始位置均在第一屏幕的目标分屏内,将目标分屏内的当前显示内容截取为第一截图。
通过第一方面提供的方法,本申请实施例可以从接收的多个触控操作中,确定第一触控操作,并判断第一触控操作中多个触控点移动时的起始位置是否均在第一屏幕的目标分屏内。若是,则将目标分屏内的当前显示内容截取为第一截图,因此,这种判断出多个触控点的起始位置都在同一个屏幕内后,可以在多个窗口中直接获得单独一个屏幕的截图,可以实现灵活的快速获取目标分屏区域的截图的目的,使得用户能够在较短的时间内获得截图,同时不需要放入图片编辑软件中手动去除截屏图片中其他分屏内显示的信息,简化 了用户截图操作。
在一种可能实现的方式中,上述第一截图为缩略图;若上述多个触控点移动时的起始位置均在上述第一屏幕的目标分屏内,上述方法还包括:将上述第一屏幕内上述N个分屏的当前显示内容分别截取为对应的截图;生成第二截图,上述第二截图中包括上述N个分屏对应的截图,上述截图以对应的分屏在上述第一屏幕内的分布方式在上述第二截图内排列,且上述N个分屏对应的截图均为准许分别接收触控操作的缩略图。在本申请实施例中,电子设备首先判断第一触控操作中多个触控点移动时的起始位置是否均在第一屏幕的目标分屏内,若是,不仅仅将目标分屏内的当前显示内容截取为第一截图,还需要将第一屏幕内上述N个分屏的当前显示内容分别截取为对应的截图,并生成第二截图。需要说明的是,该第二截图中N个分屏对应的截图,都可以分别单独接收触控操作,即,第二截图中包含的N个分屏对应的截图是相互独立存在的,彼此之间没有连接关系,而且,N个分屏对应的截图的分布方式与对应分屏在第一屏幕内的分布方式相同。例如:第一屏幕内有第一分屏和第二分屏,接收到用户对第一分屏的第一触控操作后,将上述第一分屏截图获得对应的第一截图,再将上述第一分屏和第二分屏分别截图后,生成对应的第二截图,其中,该第一截图和该第二截图都为缩略图,悬浮在第一屏幕上。综上所述,对目标分屏截图后,再截图所有分屏可以防止电子设备对用户操作意图的误判断,或者用户误操作导致的错误判断目标分屏,提高了截图的准确率和截图的用户体验。
在一种可能实现的方式中,上述第一截图为缩略图;上述方法还包括:接收指关节截屏操作或者按键组合按压截屏操作;根据上述指关节截屏操作指令或者上述按键组合按压截屏操作指令,将上述第一屏幕内所有分屏的当前显示内容分别截取为对应的截图;生成第二截图,上述第二截图中包括上述N个分屏对应的截图,上述截图以对应的分屏在上述第一屏幕内的分布方式在上述第二截图内排列,且上述N个分屏对应的截图中的每一个截图均为准许分别接收触控操作的缩略图。在本申请实施例中,本申请实施例所生成的第二截图包括分屏对应的截图缩略图,该多个分屏对应的截图缩略图彼此之间相互独立,而且每个分屏对应的缩略图都可以单独接收触控操作,以便用户进行分享编辑,也可以使得用户不需要在进入图片编辑软件,对整张截图单独编辑、裁剪以获得单个分屏的截图,大大提升了用户截图体验,简化了截图操作过程。
在一种可能实现的方式中,上述方法还包括:接收第二触控操作,上述第二触控操作为对上述第一截图或上述第二截图内目标截屏缩略图的点击操作,上述目标截屏缩略图为上述第一截图或上述第二截图内上述N个分屏对应的截屏缩略图中的至少一个;根据上述第二触控操作,将上述目标截屏缩略图保存至图库,并删除上述第一截图或上述第二截图内除上述目标截屏缩略图外所有的截图。在本申请实施例中,第二触控操作可以用于选择上述第一截图或上述第二截图内的目标截屏缩略图,以确定用户在多个截图中所希望保留的目标截屏缩略图。同时,在对上述第一截图或上述第二截图内目标截屏缩略图进行点击操作时,可以将除目标截屏缩略图外的剩余所有的截图都删除,需要说明的是,目标截屏缩略图可以为多个分屏对应的截图。第二操作的实现可以预防电子设备对用户第一操作的误判;或者,在接收到指关节截屏操作指令或者按键组合按压截屏操作指令,且想截取单个分屏时,不需要在图片编辑软件中进行二次编辑,简化了用户截图操作。
在一种可能实现的方式中,上述方法还包括:接收对上述目标截屏缩略图的拖拽操作,上述拖拽操作为通过至少一个触控点将上述目标截屏缩略图进行移动的操作;根据上述拖拽操作,控制上述目标截屏缩略图分享或插入至上述拖拽操作移动的终止位置所在的分屏区域内。在本申请实施例中,若将上述目标分屏进行分享至所述第一屏幕内的N个分屏中的任意一个分屏所包括的应用内时,可以直接通过针对目标截屏缩略图的拖拽操作实现,根据该拖拽操作,可以控制上述目标截屏缩略图分享或插入至上述拖拽操作移动的终止位置所在的分屏区域内,简化了现有技术中,如果想要将目标截屏缩略图分享至第一屏幕内的包含的分屏时,除分享操作外,还需要选择目标应用的操作,相比于直接拖拽至目标应用的方式,简化了目标截屏缩略图的分享过程,提高了用户操作体验。
在一种可能实现的方式中,上述方法还包括若在接收到上述第一触控操作、上述指关节截屏操作指令或者上述按键组合按压截屏操作指令中的任何一个操作之后的第一时间段内,未接收到上述第二触控操作,则将上述第二截图内的所有截屏拼接成一张图,并与上述第一截图保存至上述图库,因此,在本申请实施例中,在接收到用户发出的截图指令后,一段时间内没有再次接收到针对根据该截图指令获得的截图的其他操作时,可以将获得的截图直接保存,方便后续用户再次使用时,不用再次截图浪费资源,其中,该截图指令可以包括上述第一触控操作、上述指关节截屏操作指令或者上述按键组合按压截屏操作指令中的任何一个操作。
在一种可能实现的方式中,上述方法还包括:若上述多个触控点移动时的起始位置不均在上述第一屏幕的目标分屏内,将上述第一屏幕内的当前显示内容截取为第三截图并保存至上述图库。在本申请实施例中,电子设备首先判断第一触控操作中多个触控点移动时的起始位置是否均在第一屏幕的目标分屏内,若不是,即,第一触控操作中多个触控点移动时的起始位置在至少两个分屏的区域内时,则将上述第一屏幕内所有分屏的当前显示内容截取为第三截图,因此,在本申请实施例中,若用户需要截取整张屏幕内的显示内容时,只需要将多个触控点移动时的起始位置处于至少两个分屏的区域内,就可以针对第一屏幕的截图,避免了繁琐的截屏操作、用户截屏时间较长的问题,提高了用户截屏体验。
在一种可能实现的方式中,上述方法还包括:若上述多个触控点在上述目标分屏内移动时移动了大于或等于第二预设距离阈值的距离长度,上述第二预设距离阈值大于上述第一预设距离阈值,且上述第二预设距离阈值与上述目标分屏高度之间的比例大于预设比例阈值,则对上述目标分屏的分屏区域执行长截屏操作;或者,若在接收到上述第一触控操作之后的第二时间段内,再次接收到对上述目标分屏的上述第一触控操作,则对上述目标分屏的分屏区域执行上述长截屏操作;或者,若上述第一触控操作中包含四个触控点,则对上述目标分屏的分屏区域执行上述长截屏操作。在本申请实施例中,电子设备可以对第一屏幕内的目标分屏进行长截屏操作,该目标分屏可以是上述N个分屏中的任意一个。可以理解的是,触发长截屏指令的操作方式不同,执行所述长截屏操作后的结果也不相同,多种触控方式进行长截屏操作,避免了繁琐的截屏操作、用户截屏时间较长的问题,提升了用户截屏体验。
第二方面,本申请实施例提供了一种电子设备,该电子设备包括:一个或多个处理器、 存储器、一个或多个按键;
存储器、显示屏、一个或多个按键与一个或多个处理器耦合,存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,一个或多个处理器执行计算机指令以执行:
确定第一触控操作,第一触控操作为多个触控点在第一屏幕内移动了大于或等于第一预设距离阈值的移动操作,第一屏幕包括N个分屏,其中,N为大于1的正整数;
判断第一触控操作中多个触控点移动时的起始位置是否均在第一屏幕的目标分屏内,目标分屏为N个分屏中的任意一个;
若多个触控点移动时的起始位置均在第一屏幕的目标分屏内,将目标分屏内的当前显示内容截取为第一截图。
在一种可能实现的方式中,上述第一截图为缩略图;该还用于:若上述多个触控点移动时的起始位置均在上述第一屏幕的目标分屏内,将上述第一屏幕内上述N个分屏的当前显示内容分别截取为对应的截图;生成第二截图,上述第二截图中包括上述N个分屏对应的截图,上述截图以对应的分屏在上述第一屏幕内的分布方式在上述第二截图内排列,且上述N个分屏对应的截图均为准许分别接收触控操作的缩略图。
在一种可能实现的方式中,上述第一截图为缩略图;该还用于:接收指关节截屏操作或者按键组合按压截屏操作;根据上述指关节截屏操作指令或者上述按键组合按压截屏操作指令,将上述第一屏幕内所有分屏的当前显示内容分别截取为对应的截图;生成第二截图,上述第二截图中包括上述N个分屏对应的截图,上述截图以对应的分屏在上述第一屏幕内的分布方式在上述第二截图内排列,且上述N个分屏对应的截图中的每一个截图均为准许分别接收触控操作的缩略图。
在一种可能实现的方式中,该还用于:接收第二触控操作,上述第二触控操作为对上述第一截图或上述第二截图内目标截屏缩略图的点击操作,上述目标截屏缩略图为上述第一截图或上述第二截图内上述N个分屏对应的截屏缩略图中的至少一个;根据上述第二触控操作,将上述目标截屏缩略图保存至图库,并删除上述第一截图或上述第二截图内除上述目标截屏缩略图外所有的截图。
在一种可能实现的方式中,该还用于:接收对上述目标截屏缩略图的拖拽操作,上述拖拽操作为通过至少一个触控点将上述目标截屏缩略图进行移动的操作;根据上述拖拽操作,控制上述目标截屏缩略图分享或插入至上述拖拽操作移动的终止位置所在的分屏区域内。
在一种可能实现的方式中,该还用于:若在接收到上述第一触控操作、上述指关节截屏操作指令或者上述按键组合按压截屏操作指令中的任何一个操作之后的第一时间段内,未接收到上述第二触控操作,则将上述第二截图内的所有截屏拼接成一张图,并与上述第一截图保存至上述图库。
在一种可能实现的方式中,该还用于:若上述多个触控点移动时的起始位置不均在上述第一屏幕的目标分屏内,将上述第一屏幕内的当前显示内容截取为第三截图并保存至上述图库。
在一种可能实现的方式中,该还用于:若上述多个触控点在上述目标分屏内移动时移动了大于或等于第二预设距离阈值的距离长度,上述第二预设距离阈值大于上述第一预设 距离阈值,且上述第二预设距离阈值与上述目标分屏高度之间的比例大于预设比例阈值,则对上述目标分屏的分屏区域执行长截屏操作;或者,若在接收到上述第一触控操作之后的第二时间段内,再次接收到对上述目标分屏的上述第一触控操作,则对上述目标分屏的分屏区域执行上述长截屏操作;或者,若上述第一触控操作中包含四个触控点,则对上述目标分屏的分屏区域执行上述长截屏操作。
第三方面,本申请实施例提供一种计算机存储介质,包括计算机指令,当该计算机指令在电子设备上运行时,使得该电子设备执行本申请实施例第一方面或第一方面的任意一种实现方式提供的屏幕截取方法。
第四方面,本申请实施例提供了一种计算机程序产品,当该计算机程序产品在电子设备上运行时,使得该电子设备执行本申请实施例第一方面或第一方面的任意一种实现方式提供的屏幕截取方法。
可以理解的是,上述提供的第二方面提供的电子设备、第三方面提供的计算机存储介质,以及第四方面提供的计算机程序产品均可用于执行第一方面所提供的屏幕截取方法,因此,其所能达到的有益效果可参考第一方面所提供的屏幕截取方法中的有益效果,此处不再赘述。
附图说明
为了更清楚地说明本申请实施例或背景技术中的技术方案,下面将对本申请实施例或背景技术中所需要使用的附图进行说明。
图1A是本申请实施例提供的一种电子设备100的结构示意图。
图1B是本申请实施例提供的一种电子设备100的软件结构框图。
图2A是本申请实施例提供的一种多窗口的用户界面示意图。
图2B是本申请实施例提供的一种识别第一触控操作时的用户操作示意图。
图2C是本申请实施例提供的一种确定移动距离与第一预设距离阈值之间的对比关系示意图。
图2D是本申请实施例提供的一种通过第一触控操作将目标分屏的显示内容截取为图片的用户界面。
图2E是本申请实施例提供的另一种通过第一触控操作将目标分屏的显示内容截取为图片的用户界面。
图2F至图2H是本申请实施例提供的一组确认并保存并分享截屏缩略图的用户界面。
图2I至图2K是本申请实施例提供的另一组确认并保存截屏缩略图的用户界面。
图3A是本申请实施例提供的另一种多窗口的用户界面示意图。
图3B是本申请实施例提供的另一种识别第一触控操作时的用户操作示意图。
图3C和图3D是本申请实施例提供的一组通过第一触控操作将第一屏幕内的显示内容截取为图片并分享的用户界面。
图3E是本申请实施例提供的另一组通过第一触控操作将第一屏幕内的显示内容截取为图片的用户界面。
图4A是本申请实施例提供的又一种多窗口的用户界面示意图。
图4B和图4C是本申请实施例提供的一组长截屏的用户界面示意图。
图4D和图4E是本申请实施例提供的另一组长截屏的用户界面示意图。
图4F和图4G是本申请实施例提供的一种多窗口的用户界面示意图。
图5A是本申请实施例提供的一种屏幕截取方法的流程示意图。
图5B是本申请实施例提供的另一种屏幕截取方法的流程示意图。
图5C是本申请实施例提供的一种基于前述电子设备100长截屏的操作示意图。
图5D是本申请实施例中提供的一组截图单个屏幕并分享至当前分屏中的用户界面示意图。
图5E是本申请实施例中提供的一组截图整个屏幕并分享至当前分屏中的用户界面示意图。
图5F是本申请实施例中提供的另一组截图单个屏幕并分享至当前分屏中的用户界面示意图。
图5G是本申请实施例中提供的又一组截图单个屏幕并分享至当前分屏中的用户界面示意图。
图5H是本申请实施例中提供的另一组截取整个屏幕并分享至当前分屏中的用户界面示意图。
图5I是本申请实施例中提供的一组实际应用中的截取单个屏幕并分享至当前界面包含的应用中的用户界面示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例进行描述。
本申请的说明书和权利要求书及所述附图中的术语“第一”、“第二”和“第三”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
在本说明书中使用的术语“部件”、“模块”、“系统”等用于表示计算机相关的实体、硬件、固件、硬件和软件的组合、软件、或执行中的软件。例如,部件可以是但不限于,在处理器上运行的进程、处理器、对象、可执行文件、执行线程、程序和/或计算机。通过图示,在计算设备上运行的应用和计算设备都可以是部件。一个或多个部件可驻留在进程和/或执行线程中,部件可位于一个计算机上和/或分布在2个或更多个计算机之间。此外,这些部件可从在上面存储有各种数据结构的各种计算机可读介质执行。部件可例如根据具有一个或多个数据分组(例如来自与本地系统、分布式系统和/或网络间的另一部件交互的二个部 件的数据,例如通过信号与其它系统交互的互联网)的信号通过本地和/或远程进程来通信。
本申请的说明书和权利要求书及附图中的术语“用户界面(user interface,UI)”,是应用程序或操作系统与用户之间进行交互和信息交换的介质接口,它实现信息的内部形式与用户可以接受形式之间的转换。应用程序的用户界面是通过java、可扩展标记语言(extensible markup language,XML)等特定计算机语言编写的源代码,界面源代码在终端设备上经过解析,渲染,最终呈现为用户可以识别的内容,比如图片、文字、按钮等控件。控件(control)也称为部件(widget),是用户界面的基本元素,典型的控件有工具栏(toolbar)、菜单栏(menu bar)、文本框(text box)、按钮(button)、滚动条(scrollbar)、图片和文本。界面中的控件的属性和内容是通过标签或者节点来定义的,比如XML通过<Textview>、<ImgView>、<VideoView>等节点来规定界面所包含的控件。一个节点对应界面中一个控件或属性,节点经过解析和渲染之后呈现为用户可视的内容。此外,很多应用程序,比如混合应用(hybrid application)的界面中通常还包含有网页。网页,也称为页面,可以理解为内嵌在应用程序界面中的一个特殊的控件,网页是通过特定计算机语言编写的源代码,例如超文本标记语言(hyper text markup language,GTML),层叠样式表(cascading style sheets,CSS),java脚本(JavaScript,JS)等,网页源代码可以由浏览器或与浏览器功能类似的网页显示组件加载和显示为用户可识别的内容。网页所包含的具体内容也是通过网页源代码中的标签或者节点来定义的,比如GTML通过<p>、<img>、<video>、<canvas>来定义网页的元素和属性。
用户界面常用的表现形式是图形用户界面(graphic user interface,GUI),是指采用图形方式显示的与计算机操作相关的用户界面。它可以是在电子设备的显示屏中显示的一个图标、窗口、控件等界面元素,其中控件可以包括图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、Widget等可视的界面元素。
首先,对本申请中的部分用语进行解释说明,以便于本领域技术人员理解。
(1)窗口,窗口是用户界面中最重要的部分,它是屏幕上与一个应用程序相对应的矩形区域,包括框架和客户区,是用户与产生该窗口的应用程序之间的可视界面。每当用户开始运行一个应用程序时,应用程序就创建并显示一个窗口;当用户操作窗口中的对象时,程序会作出相应反应。用户通过关闭一个窗口来终止一个程序的运行;通过选择相应的应用程序窗口来选择相应的应用程序。
(2)触摸传感器,是一种捕获和记录设备和/或物体上的物理触摸或拥抱的设备。它使设备或对象能够通常由人类用户或操作员检测触摸。触摸传感器也可以称为触摸检测器。
接下来,介绍本申请以下实施例中提供的示例性电子设备。
请参考附图1A,图1A是本申请实施例提供的一种电子设备100的结构示意图,其中,电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器 192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本申请实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存 储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based  augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应 用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触控操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触控操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触控操作强度的触控操作,可以对应不同的操作指令。例如:当有触控操作强度小于第一压力阈值的触控操作作用于短消息应用图标时,执行查看短消息的指令。当有触控操作强度大于或等于第一压力阈值的触控操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。
气压传感器180C用于测量气压。
磁传感器180D包括霍尔传感器。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当 电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。
环境光传感器180L用于感知环境光亮度。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触控操作。触摸传感器可以将检测到的触控操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触控操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触控操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触控操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中, 不能和电子设备100分离。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。请参考附图1B,图1B是本申请实施例提供的一种电子设备100的软件结构框图。
可以理解的是,本申请实施例示意的软件结构框图并不构成对电子设备100的软件结构框图具体限定。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图1B所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图1B所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程 管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,G.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
图1B所示的软件系统涉及到使用分享能力的应用呈现(如图库,文件管理器),提供分享能力的即时分享模块,提供打印能力的打印服务(print service)和打印后台服务(print spooler),以及应用框架层提供打印框架、WLAN服务、蓝牙服务,以及内核和底层提供WLAN蓝牙能力和基本通信协议。
下面结合捕获拍照场景,示例性说明电子设备100软件以及硬件的工作流程。
当触摸传感器180K接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸操作,该触摸操作所对应的控件为相机应用图标的控件为例,相机应用调用应用框架层的接口,启动相机应用,进而通过调用内核层启动摄像头驱动,通过3D摄像模组193捕获静态图像或视频。
下面介绍本申请实施例涉及的几个应用场景以及各个应用场景下的用户界面(user interface,UI)实施例。需要说明的是,本申请实施例中提到的第一屏幕可以理解为本申请实施例电子设备中的用户界面。
场景一,截取单个分屏的应用场景。(多个触控点移动时的起始位置均在第一屏幕的目标分屏内)。
请参考附图2A,图2A是本申请实施例提供的一种多窗口的用户界面示意图。该用户界面包括了微信窗口201、滴滴打车窗口201以及新闻推送窗口203。不限于此,用户界面还可以包括其他更多或更少的窗口,其他应用程序例如可以是QQ或者MSN等即时通讯软件;也可以是爱奇艺、腾讯视频、优酷视频等娱乐视频类软件;还可以是计算器、日历、设置等电子设备系统自带的工具辅助类应用。
如图2A所示,用户界面20可以包括:窗口显示区201、窗口显示区202、窗口显示区203、窗口尺寸控件205、状态栏206。本端用户使用的电子设备为电子设备100。其中:
窗口显示区201-203用于分别显示电子设备中不同应用的内容,且可接受用户对窗口显示区201-203内的触控操作并响应。
窗口尺寸控件206用于调整窗口在用户界面内所占的比例。电子设备100可以检测到 作用于窗口尺寸控件206的触控操作(如,在窗口尺寸控件206上的拖拽操作),响应于该操作,电子设备100可以将移动窗口尺寸控件206的位置,以实现当前窗口202和203之间的尺寸大小进行调整。
状态栏207可以包括:运营商指示符(例如运营商的名称“中国移动”)、无线高保真(wireless fidelity,Wi-Fi)信号的一个或多个信号强度指示符、移动通信信号(又可称为蜂窝信号)的一个或多个信号强度指示符、时间指示符和电池状态指示符等。
在该场景下,用户在使用电子设备的第一屏幕内分为三个分屏区域,窗口显示区201用于显示微信主界面、窗口显示区202用于显示滴滴打车界面、窗口显示区203用于显示新闻推送界面。若用户需要仅将滴滴打车分享至微信联系人,不包含微信和新闻推送界面时,现有技术中,需要将此时第一屏幕内所有显示内容截图,再放入图片编辑器中进行裁剪,获得最终只保留滴滴打车界面的图片。该操作可能会导致用户截图操作繁琐,图片编辑时间较长,截图体验不佳。
基于前述应用场景,下面介绍电子设备100上实现的一些UI实施例。
需要将窗口显示区202(即,滴滴打车窗口)内的显示内容截图分享至窗口显示区201(即,微信窗口)时,电子设备100可以通过接收触控操作,确定第一触控操作;再根据确定的第一触控操作包含的多个触控点的起始位置,确定目标分屏;最后将上述目标分屏截图后并分享。
下面从以下几个方面进行详细说明。
(1)如何确定截屏操作。
请参考附图2B,图2B是本申请实施例提供的一种识别第一触控操作时的用户操作示意图。
如图2B所示,电子设备100可以通过触摸传感器180K检测到用户的触控操作(如,触摸传感器180K识别出用户在窗口显示区202上多个触控点做出的移动操作),响应于该触控操作,电子设备100可以识别到该移动操作的运动轨迹。
具体实现中,当触摸传感器180K检测用户的触控操作时,不同的触控操作停留时长、移动的运动轨迹、触控点的数量,例如:三个触控点与四个触控点、多个触控点的起始位置和终止位置分别于分屏之间的位置关系等,可能产生不同的光学和/或声学效果,并生成对应的信号(包括该移动操作对电子设备产生的压力值等)。由不同的触控操作产生的信号可以通过电子设备100的传感器(例如:触摸传感器、加速度传感器、冲击传感器、振动传感器、声传感器、位移传感器、速度传感器等)捕获。其中,请参考附图2C,图2C是本申请实施例提供的一种确定移动距离与第一预设距离阈值之间的对比关系示意图,根据图2C所示的触摸传感器180K检测用户的触控操作的移动距离,判断该移动距离大小是否大于的第一预设距离阈值,若该移动距离大于第一预设距离阈值,则电子设备100可以确定该触控操作为第一触控操作。因而,电子设备100可通过捕获到的信号区分用户使用具体的触控操作,再通过判断触摸传感器180K检测到用户触控操作(即,多个触控点在窗口显示区202内移动了大于或等于第一预设距离阈值的移动操作)是否为第一触控操作;若确定触摸传感器180K检测到用户的触控操作为第一触控操作,则再判断第一触控操作中包含的多个触控点的起始位置是否均在窗口显示区202内,若是,则将窗口显示区202 内的当前显示内容截取为第一截图。此时电子设备100可以通过震动和/或用户界面标识(如:触控操作中的运动轨迹发亮,边界加粗或有虚影等)来指示触摸传感器180K检测到用户触控操作为第一触控操作。
需要说明的是,图2C所示的第二预设距离阈值是用于判断是否对目标分屏进行长截屏操作,其中,所述第二预设距离阈值大于所述第一预设距离阈值,且所述第二预设距离阈值与所述目标分屏高度之间的比例大于预设比例阈值,其中,所述目标分屏的高度可以理解为,与所述多个触控点在所述目标分屏内移动时的移动方向一致的目标分屏的边长。
不限于上述列出的通过触摸传感器180K识别第一触控操作时的用户操作,在具体实现中还可以有其他的识别用户操作的方式。例如,电子设备100还可以通过红外传感器识别用户的在第一屏幕上的操作等,本申请实施例对此不作限定。
(2)如何截取屏幕内容。
请参考附图2D,图2D是本申请实施例提供的一种通过第一触控操作将目标分屏的显示内容截取为图片的用户界面。其中,本申请实施例中电子设备100还包括:图片切换控件204,图片切换控件204在截屏时用于显示截取用户界面或用户界面某一部分的截图,也可以用于点击分享图片。电子设备100可以检测到作用于图片切换控件307的触控操作(如在图片切换控件307上的拖拽操作),响应于该操作,电子设备100可以控制上述图像分享或插入至上述拖拽操作移动的终止位置所在的窗口内。
如图2D所示,响应于图2B对应的实施例中作用于触摸传感器180K的用户操作,电子设备100可以通过识别该用户操作是否为包含了多个触控点在用户界面内移动了大于或等于第一预设距离阈值的移动操作,判断出触摸传感器180K检测到的用户触控操作中多个触控点移动时的起始位置,以便确定该用户操作是为了截取所述目标分屏内的当前显示内容还是为了截取整个用户界面的当前显示内容,再根据触摸传感器180K检测到的多个触控点移动时的起始位置均在窗口显示区202内时,则根据触摸传感器180K检测到的用户操作对应的截取窗口显示区202内的当前显示内容为第一截图。
在一种可能实现的方式中,请参考附图2E,图2E是本申请实施例提供的另一种通过第一触控操作将目标分屏的显示内容截取为图片的用户界面。
如图2E所示,响应于图2B对应的实施例中作用于触摸传感器180K的手势操作,电子设备100可以通过识别该用户操作是否为包含了多个触控点在用户界面内移动了大于或等于第一预设距离阈值的移动操作,判断出触摸传感器180K检测到的用户触控操作中多个触控点移动时的起始位置,以便确定该用户操作是为了截取所述目标分屏内的当前显示内容还是为了截取整个用户界面的当前显示内容,再根据触摸传感器180K检测到的多个触控点移动时的起始位置均在窗口显示区202内,不仅仅根据与触摸传感器180K检测到的用户操作截取窗口显示区202内的当前显示内容为第一截图,还需要根据与触摸传感器180K检测到的用户操作截取用户界面中所有分屏极恶趣味第二截图,即将所述第一屏幕内所述N个分屏的当前显示内容分别截取为对应的截图,生成第二截图,该第二截图中包括所述N个分屏对应的截图,所述截图以对应的分屏在所述第一屏幕内的分布方式在所述第二截图内排列,且所述N个分屏对应的截图均为准许分别接收触控操作的缩略图。可以理解的是,在检测到的多个触控点移动时的起始位置均在窗口显示区202时,不仅仅只需要 截取窗口显示区202内滴滴打车的当前显示内容,还需要分别截取窗口显示区201内微信的当前显示内容、窗口显示区202内滴滴打车的当前显示内容以及窗口显示区203内新闻推送的当前显示内容,需要说明的是,该第二截图内窗口显示区201内微信的当前显示内容对应的缩略图、窗口显示区202内滴滴打车的当前显示内容对应的缩略图以及窗口显示区203内新闻推送的当前显示内容对应的缩略图批次之间没有关联,可以分别接收用户对其的触控操作,因此,对窗口显示区202内滴滴打车的当前显示内容截图后,再对所有分屏进行截图,可以防止电子设备对用户操作意图的误判断,或者用户误操作导致的错误判断目标分屏,提高了截图的准确率和截图的用户体验。
(3)如何保存截图并分享。
请参考附图2F至附图2H,图2F至图2H是本申请实施例提供的一组确认并保存并分享截屏缩略图的用户界面。
如附图2F和附图2G所示,获取第一截图和第二截图后,电子设备100接收作用于触摸传感器180K的用户操作,电子设备100可以通过识别该用户操作是否为第二触控操作,例如:第二触控操作为对所述第一截图或所述第二截图内目标截屏缩略图的点击操作,其中,目标截屏缩略图为所述第一截图或所述第二截图内所述N个分屏对应的截屏缩略图中的至少一个,例如:目标截屏缩略图为窗口显示区202内滴滴打车的当前显示内容对应的缩略图。电子设备100根据所述第二触控操作,将所述目标截屏缩略图保存至图库,并删除所述第一截图或所述第二截图内除所述目标截屏缩略图外所有的截图。如图2H所示,接收对窗口显示区202内滴滴打车的当前显示内容对应的缩略图的拖拽操作,所述拖拽操作为通过一个触控点将窗口显示区202内滴滴打车的当前显示内容对应的缩略图进行移动的操作;根据该拖拽操作,控制窗口显示区202内滴滴打车的当前显示内容对应的缩略图分享或插入至窗口显示区201内微信的应用中。
在一种可能的实现方式中,请参考附图2I至附图2K,图2I至图2K是本申请实施例提供的另一组确认并保存截屏缩略图的用户界面。
如附图2I和附图2J所示,获取第一截图和第二截图后,电子设备100接收作用于触摸传感器180K的用户操作,电子设备100可以通过识别该用户操作是否为第二触控操作,例如:第二触控操作为对所述第一截图或所述第二截图内目标截屏缩略图的点击操作,其中,该目标截屏缩略图为多个。此时,电子设备100可以根据第二触控操作控制该多个截图拼接成一张截图,因为该多个截图拼接一起的图形形状不规则,所以电子设备可以自动补充缺失的部分,使得拼接后的截图的几何形状为规则的。如图2K所示,接收对拼接后的窗口显示区202内滴滴打车的当前显示内容对应的缩略图与窗口显示区201内微信的当前显示内容对应的缩略图生成的目标截屏缩略图的拖拽操作,所述拖拽操作为通过一个触控点将目标截屏缩略图进行移动的操作;根据该拖拽操作,控制目标截屏缩略图分享或插入至窗口显示区201内微信的应用中。
本申请实施例中,电子设备还可以从触摸传感器180K接收的多个触控操作中,确定第一触控操作,并判断第一触控操作中多个触控点移动时的起始位置是否均在第一屏幕的目标分屏内。若是,则将目标分屏内的当前显示内容截取为第一截图,因此,这种判断出多个触控点的起始位置都在同一个屏幕内后,可以在多个窗口中直接获得单独一个屏幕的 截图,可以实现灵活的快速获取目标分屏区域的截图的目的,使得用户能够在较短的时间内获得截图,同时不需要先将整个显示屏幕截取后,将获得的整个屏幕的截图放入图片编辑软件中手动操作去掉截屏图片中其他屏幕内显示的信息,简化了用户截图操作。
场景二:截取整个屏幕的应用场景(多个触控点移动时的起始位置不都在第一屏幕的目标分屏内)。
请参考附图3A,图3A是本申请实施例提供的另一种多窗口的用户界面示意图。该用户界面包括了QQ窗口301、滴滴打车窗口302以及新闻推送窗口303。不限于此,用户界面还可以包括其他更多或更少的窗口,其他应用程序例如可以是微信或者MSN等即时通讯软件;也可以是爱奇艺、腾讯视频、优酷视频等娱乐视频类软件;还可以是计算器、日历、设置等电子设备系统自带的工具辅助类应用。
如图3A所示,用户界面30可以包括:窗口显示区301、窗口显示区302、窗口显示区303、窗口尺寸控件305、状态栏306。本端用户使用的电子设备为电子设备100。其中:
窗口显示区301-303用于分别显示电子设备中不同的应用(或电子设备100)内容,且可接受用户对窗口显示区301-303内的触控操作并响应。
窗口尺寸控件305用于调整窗口在用户界面内所占的比例。电子设备100可以检测到作用于窗口尺寸控件306的触控操作(如在窗口尺寸控件306上的拖拽操作),响应于该操作,电子设备100可以将当前窗口302和303之间的尺寸大小进行调整。
状态栏306可以包括:运营商指示符(例如运营商的名称“中国移动”)、无线高保真(wireless fidelity,Wi-Fi)信号的一个或多个信号强度指示符、移动通信信号(又可称为蜂窝信号)的一个或多个信号强度指示符、时间指示符和电池状态指示符。
在该场景下,用户在使用电子设备的第一屏幕内分为三个分屏区域,窗口显示区301用于显示QQ主界面、窗口显示区303用于显示滴滴打车界面、窗口显示区303用于显示新闻推送界面。若用户需要将整个屏幕分享至QQ联系人,同时包含QQ和新闻推送界面时,除了现有技术中,组合按键截屏或者指关节控制截屏外,还可以通过第一触控操作来进行,其中,第一触控操作的所述多个触控点移动时的起始位置在多个分屏窗口内,如:同时处于QQ窗口和滴滴打车窗口。
基于前述应用场景,下面介绍电子设备100上实现的一些UI实施例。
需要将用户界面30(即,QQ窗口、滴滴打车窗口以及新闻推送窗口)内的显示内容截图分享至窗口显示区201(即,QQ窗口)时,电子设备100可以通过接收触控操作,确定第一触控操作;再根据确定的第一触控操作包含的多个触控点的起始位置,确定截取用户界面30获得第三截图;最后将上述第三截图保存后并分享。
下面从以下几个方面进行详细说明。
(1)如何确定截屏操作。
请参考附图3B,图3B是本申请实施例提供的另一种识别第一触控操作时的用户操作示意图。其中,图3B所示电子设备100还包括:图片切换控件304,图片切换控件304在截屏时用于显示电子设备100被截图的图像,也可以用于点击分享图片。电子设备100可以检测到作用于图片切换控件307的触控操作(如在图片切换控件307上的拖拽操作),响 应于该操作,电子设备100可以控制上述图像分享或插入至上述拖拽操作移动的终止位置所在的窗口内。
如图3B所示,电子设备100可以通过触摸传感器180K检测到用户的触控操作(如,触摸传感器180K识别出用户在窗口显示区202上多个触控点做出的移动操作),响应于该触控操作,电子设备100可以识别到该移动操作的运动轨迹。
具体实现中,当触摸传感器180K检测用户的触控操作时,不同的触控操作停留时长、移动的运动轨迹、触控点的数量,例如:三个触控点与四个触控点、多个触控点的起始位置和终止位置分别于分屏之间的位置关系等,可能产生不同的光学和/或声学效果,并生成对应的信号(包括该移动操作对电子设备产生的压力值等)。由不同的触控操作产生的信号可以通过电子设备100的传感器(例如:触摸传感器、加速度传感器、冲击传感器、振动传感器、声传感器、位移传感器、速度传感器等)捕获。其中,触摸传感器180K检测用户的触控操作的移动距离,判断该移动距离大小是否大于的第一预设距离阈值,若该移动距离大于第一预设距离阈值,则电子设备100可以确定该触控操作为第一触控操作。因而,电子设备100可通过捕获到的信号区分用户使用具体的触控操作,再通过判断触摸传感器180K检测到用户触控操作(即,多个触控点在窗口显示区202内移动了大于或等于第一预设距离阈值的移动操作)是否为第一触控操作;若确定触摸传感器180K检测到用户的触控操作为第一触控操作,则再判断第一触控操作中包含的多个触控点的起始位置是否均在窗口显示区202内,若是,则将窗口显示区202内的当前显示内容截取为第一截图。此时电子设备100可以通过震动和/或用户界面标识(如:触控操作中的运动轨迹发亮,边界加粗或有虚影等)来指示触摸传感器180K检测到用户触控操作为第一触控操作。
不限于上述列出的通过触摸传感器180K识别第一触控操作时的用户操作,在具体实现中还可以有其他的识别用户操作的方式。例如,电子设备100还可以通过红外传感器识别用户的在第一屏幕上的操作等,本申请实施例对此不作限定。
(2)如何确定截屏内容。
请参考附图3C和附图3D,图3C和图3D是本申请实施例提供的一组通过第一触控操作将第一屏幕内的显示内容截取为图片并分享的用户界面。
如图3C所示,电子设备100可以响应于图2B对应的实施例中作用于触摸传感器180K的用户操作,电子设备100可以通过识别该用户操作是否为包含了多个触控点在用户界面内移动了大于或等于第一预设距离阈值的移动操作,判断出触摸传感器180K检测到的用户触控操作中多个触控点移动时的起始位置,以便确定该用户操作是为了截取所述目标分屏内的当前显示内容还是为了截取整个用户界面的当前显示内容,再根据触摸传感器180K检测到的多个触控点移动时的起始位置不均在窗口显示区202内时,即多个触控点对应运动轨迹的起始位置在多个分屏内,则根据触摸传感器180K检测到的用户操作对应的截取用户界面20内的当前显示内容为第三截图。图3D是本申请实施例提供的一种确认并保存并分享截屏缩略图的用户界面。其中,保存并分享第三截图的方式可参考前述实施例以及下述方法实施例的相关描述,此处不赘述。
在一种可能的实现方式中,请参考附图3E,图3E是本申请实施例提供的另一组通过第一触控操作将第一屏幕内的显示内容截取为图片的用户界面。当用户使用大屏设备进行 多窗口操作时,如果触摸传感器180K检测到第一触控操作即三指下滑的起点位置跨过多窗口,电子设备100将对多窗口的内容进行截屏。例如,如图3E中第(1)张图所示起点位置如若跨过应用A和应用B,电子设备100将对第一屏幕内所有分屏的当前显示内容进行截图,即,对应用A和应用B进行截图。如图3E中第(2)张图所示获得整个屏幕截图后的第三截图。
本申请实施例中,电子设备首先判断第一触控操作中多个触控点移动时的起始位置是否均在第一屏幕的目标分屏内,若不是,即,第一触控操作中多个触控点移动时的起始位置在至少两个分屏的区域内时,则将上述第一屏幕内所有分屏的当前显示内容截取为第三截图,因此,若用户需要截取整张屏幕内的显示内容时,只需要将多个触控点移动时的起始位置处于至少两个分屏的区域内,就可以针对第一屏幕的截图,避免了繁琐的截屏操作、用户截屏时间较长的问题,提高了用户截屏体验。
场景三:截取长截屏的应用场景。
请参考附图4A,图4A是本申请实施例提供的又一种多窗口的用户界面示意图。该用户界面包括了微信窗口401、滴滴打车窗口402以及新闻推送窗口403。不限于此,用户界面还可以包括其他更多或更少的窗口,其他应用程序例如可以是QQ或者MSN等即时通讯软件;也可以是爱奇艺、腾讯视频、优酷视频等娱乐视频类软件;还可以是计算器、日历、设置等电子设备系统自带的工具辅助类应用。
如图4A所示,用户界面40可以包括:窗口显示区401、窗口显示区402、窗口显示区403、窗口尺寸控件404、状态栏405。本端用户使用的电子设备为电子设备100。其中:
窗口显示区401-403用于分别显示电子设备中不同的应用(或电子设备100)内容,且可接受用户对窗口显示区401-403内的触控操作并响应。
状态栏405可以包括:运营商指示符(例如运营商的名称“中国移动”)、无线高保真(wireless fidelity,Wi-Fi)信号的一个或多个信号强度指示符、移动通信信号(又可称为蜂窝信号)的一个或多个信号强度指示符、时间指示符和电池状态指示符。
在该场景下,用户在使用电子设备的第一屏幕内分为三个分屏区域,窗口显示区401用于显示微信主界面、窗口显示区402用于显示滴滴打车界面、窗口显示区403用于显示新闻推送界面。若用户需要将窗口显示区402用于显示滴滴打车界面的长截屏分享至微信联系人,且不包含微信和新闻推送界面时,可以实施以下三种操作方式。
基于前述应用场景,下面介绍电子设备100上实现的一些长截屏的UI实施例。
需要将窗口显示区402(即,滴滴打车窗口)内的长截图分享至窗口显示区401(即,微信窗口)时,电子设备100可以首先判断触发长截屏操作的用户操作是哪种,若多个触控点在目标分屏内移动时移动了大于或等于第二预设距离阈值的距离长度,第二预设距离阈值大于第一预设距离阈值,且第二预设距离阈值与目标分屏高度之间的比例大于预设比例阈值,则对目标分屏的分屏区域执行长截屏操作;或者,若在接收到第一触控操作之后的第二时间段内,再次接收到对目标分屏的第一触控操作,则对目标分屏的分屏区域执行长截屏操作;或者,若第一触控操作中包含四个触控点,则对目标分屏的分屏区域执行长截屏操作。
(1)当第一触控操作的多个触控点在目标分屏内移动时移动了大于或等于第二预设距离阈值的距离长度时,电子设备对该目标分屏执行长截屏操作。
请参考附图4B至附图4C,图4B和图4C是本申请实施例提供的一组长截屏的用户界面示意图。其中,电子设备10还包括窗口尺寸控件404,窗口尺寸控件404用于调整窗口在用户界面内所占的比例。电子设备100可以检测到作用于窗口尺寸控件406的触控操作(如在窗口尺寸控件406上的拖拽操作),响应于该操作,电子设备100可以将当前窗口401和402之间的尺寸大小进行调整。
当所述多个触控点在所述目标分屏内移动时移动了大于或等于第二预设距离阈值的距离长度时,电子设备对该目标分屏执行长截屏操作,其中,该长截屏的截图范围可以除目标分屏的当前显示内容外,还包括与多个触控点在所述目标分屏内移动时移动距离超过第二预设阈值的部分成正比的目标分屏的下一页的显示内容,需要说明的是,所述第二预设距离阈值大于所述第一预设距离阈值,且所述第二预设距离阈值与所述目标分屏高度之间的比例大于预设比例阈值,其中,所述目标分屏的高度可以理解为,与所述多个触控点在所述目标分屏内移动时的移动方向一致的边长。其中,保存并分享第三截图的方式可参考前述实施例以及下述方法实施例的相关描述,此处不赘述。
(4)当在电子设备接收到所述第一触控操作之后的第二时间段内,再次接收到对所述目标分屏的所述第一触控操作时,执行长截屏操作。
请参考附图4D至附图4E,图4D和图4E是本申请实施例提供的另一组长截屏的用户界面示意图。
当电子设备100通过触摸传感器180K检测到第一触控操作之后的第二时间段内,再次通过触摸传感器180K检测到对目标分屏的第一触控操作时,电子设备100对该目标分屏执行长截屏操作,其中,该长截屏的截图范围可以除目标分屏的当前显示内容外,还包括在第二时间段内根据连续接收到的第一触控操作的次数,判断其长截屏内的内容。例如,在第二时间段内每多接受一次第一触控操作,即可在当前显示内容上多增加该目标分屏的下一页显示内容。可以理解的,在第二时间段内,接收到的第一触控操作的次数越多,长截屏所包括的目标分屏的显示内容也越多,直至该目标分屏的全部显示内容被截取。其中,保存并分享第三截图的方式可参考前述实施例以及下述方法实施例的相关描述,此处不赘述。
(3)当第一触控操作中包含四个触控点时,电子设备对目标分屏执行长截屏操作。
请参考附图4F至附图4G,图4F和图4G是本申请实施例提供的一种多窗口的用户界面示意图。
当电子设备100通过触摸传感器180K检测到第一触控操作中包含四个触控点时,电子设备对该目标分屏执行长截屏操作,其中,该长截屏的截图范围可以除目标分屏的当前显示内容外,还包括四个触控点在第一屏幕内移动了大于或等于第一预设距离阈值的移动操作后,四个触控点在该移动操作的终止位置所停留的时间成正比的目标分屏的下一页的显示内容;或者,该长截屏的截图范围可以除目标分屏的当前显示内容外,还包括四个触控点在第一屏幕内移动了大于或等于第一预设距离阈值的移动操作后,四个触控点在该移动操作的终止位置对应多个压力值成正比的目标分屏的下一页的显示内容;例如:用户进 行四指下滑时,在终止位置停留的时间越长,最终获得的长截图对应的显示内容就越多;又例如:用户进行四指下滑时,在终止位置对应的压力值越大,最终获得的长截图对应的显示内容就越多。其中,保存并分享第三截图的方式可参考前述实施例以及下述方法实施例的相关描述,此处不赘述。
需要说明的是,本申请的电子设备可以是智能终端、车载智能终端、智能电视、可穿戴设备等可以在显示区域多窗口显示的可触控的智能终端,例如:有一个显示屏的智能终端,还可以是可触控的双面屏终端,折叠屏终端、触控电脑、平板、触控电视、全面屏手机等等。
可以理解的是,通过本申请实施例,可以根据不同的操作方式触发长截屏指令,执行对目标分屏的长截屏操作,多种触控方式进行长截屏操作,避免了繁琐的截屏操作、用户截屏时间较长的问题,提升了用户截屏体验。
可以理解的是,上述三种应用场景的只是本申请实施例中的几种示例性的实施方式,本申请实施例中的应用场景包括但不仅限于以上应用场景。
基于前述图2A-图4G提供的五种场景及各个场景下的UI实施例,接下来介绍本申请实施例提供一种屏幕截取方法,该方法可应用于上述图1A中所述的电子设备中。请参见附图5A和附图5B,图5A是本申请实施例提供的一种屏幕截取方法的流程示意图,图5B是本申请实施例提供的另一种屏幕截取方法的流程示意图。下面以电子设备为执行主体展开描述。该方法可以包括以下步骤S501-步骤S503,如图5A所示;还可以包括步骤S504-步骤S508,如图5B所示。
步骤S501:确定第一触控操作。
具体的,电子设备确定第一触控操作,所述第一触控操作为多个触控点在第一屏幕内移动了大于或等于第一预设距离阈值的移动操作,所述第一屏幕包括N个分屏,其中,N为大于1的正整数。可以理解的是,电子设备可以接收多个触控操作,从多个触控操作中确定第一触控操作,即,多个触控点在第一屏幕内移动了大于或等于第一预设距离阈值的移动操作,其中,该多个触控点至少是两个以上的触控点,而且该第一预设距离阈值需要达到第一屏幕内包括的N个分屏中最小分屏的最短边长预设比例,可以理解的是,若第一预设阈值过于短小,则很可能会导致大概率的误触操作;又因为第一屏幕内包含的N个分屏窗口中的每一个分屏窗口之间彼此的尺寸大小并不相同,故因此,第一预设距离阈值需要达到第一屏幕内包括的N个分屏中最小分屏的最短边长预设比例防止无法触发截屏操作,即第一触控操作,例如:第一预设距离阈值是最小分屏的最短边长的三分之一。
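As an illustration only, the distance check described in this step could look roughly like the short Kotlin sketch below. The one-third ratio mirrors the example given in the paragraph above, and the parameter names are assumptions rather than terms defined by this application; the shortest side length among the N split screens stands in for the smallest split screen's shortest side.

```kotlin
import kotlin.math.hypot

// Hypothetical sketch of the step S501 check: the gesture counts as the first touch
// operation only when every finger moves at least the first preset distance threshold.
fun firstDistanceThreshold(splitSides: List<Pair<Int, Int>>, ratio: Double = 1.0 / 3.0): Double {
    // splitSides holds (width, height) for each of the N split screens.
    val shortestSide = splitSides.minOf { (w, h) -> minOf(w, h) }
    return shortestSide * ratio
}

fun isFirstTouchOperation(
    starts: List<Pair<Float, Float>>,   // touch-down positions of the multiple touch points
    ends: List<Pair<Float, Float>>,     // positions where the same touch points stop moving
    splitSides: List<Pair<Int, Int>>
): Boolean {
    if (starts.size < 2 || starts.size != ends.size) return false
    val threshold = firstDistanceThreshold(splitSides)
    // Every touch point must have moved at least the threshold distance.
    return starts.zip(ends).all { (s, e) ->
        hypot((e.first - s.first).toDouble(), (e.second - s.second).toDouble()) >= threshold
    }
}
```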
步骤S502:判断第一触控操作中多个触控点移动时的起始位置是否均在第一屏幕的目标分屏内。
具体的,电子设备判断接收到的第一触控操作中所述多个触控点移动时的起始位置是否均在所述第一屏幕的目标分屏内,其中,所述目标分屏为所述N个分屏中的任意一个。其中,需要说明的是,第一触控操作中的多个触控点,可以是用户通过手指触控,也可以是用户借助外部的触控工具进行操作,其中,触控工具包括但不限于触控笔、触控手套等。还需要说明的是,触控点的个数可以是两个或两个以上的触控点。可选的,所述多个触控 点在移动时,可以是多个触控点同时进行。
在一种可能实现的方式中,若所述多个触控点在所述目标分屏内移动时移动了大于或等于第二预设距离阈值的距离长度,所述第二预设距离阈值大于所述第一预设距离阈值,且所述第二预设距离阈值与所述目标分屏高度之间的比例大于预设比例阈值,则对所述目标分屏的分屏区域执行长截屏操作;或者,若在接收到所述第一触控操作之后的第二时间段内,再次接收到对所述目标分屏的所述第一触控操作,则对所述目标分屏的分屏区域执行所述长截屏操作;或者,若所述第一触控操作中包含四个触控点,则对所述目标分屏的分屏区域执行所述长截屏操作。例如:本申请实施例同时支持用户对单一窗口进行长截屏操作。如果用户使用三指下滑截屏手势进行截屏操作,本申请实施例将通过判断一定时间内用户连续三指下滑的次数判断其是否为长截屏的意图;或者将通过判断用户截屏手势(三指下滑)的结束位置即三指离开屏幕的位置判断用户是否需要长截屏;或者当用户进行四指下滑时也可触发长截屏操作。
可以理解的是,当所述多个触控点在所述目标分屏内移动时移动了大于或等于第二预设距离阈值的距离长度时,电子设备对该目标分屏执行长截屏操作,其中,该长截屏的截图范围可以除目标分屏的当前显示内容外,还包括与多个触控点在所述目标分屏内移动时移动距离超过第二预设阈值的部分成正比的目标分屏的下一页的显示内容,请参考附图5C,图5C是本申请实施例提供的一种基于前述电子设备100长截屏的操作示意图。需要说明的是,所述第二预设距离阈值大于所述第一预设距离阈值,且所述第二预设距离阈值与所述目标分屏高度之间的比例大于预设比例阈值,其中,所述目标分屏的高度可以理解为,与所述多个触控点在所述目标分屏内移动时的移动方向一致的边长。例如:若多个触控点在所述目标分屏内移动时的起始位置在目标分屏的顶部,终止位置在目标分屏的底部,则对目标分屏进行长截屏操作。
还可以理解的是,当在接收到所述第一触控操作之后的第二时间段内,再次接收到对所述目标分屏的所述第一触控操作时,电子设备对该目标分屏执行长截屏操作,其中,该长截屏的截图范围可以除目标分屏的当前显示内容外,还包括在第二时间段内根据连续接收到的第一触控操作的次数,判断其长截屏内的内容。例如,在第二时间段内每多接受一次第一触控操作,即可在当前显示内容上多增加该目标分屏的下一页显示内容。可以理解的,在第二时间段内,接收到的第一触控操作的次数越多,长截屏所包括的目标分屏的显示内容也越多,直至该目标分屏的全部显示内容被截取。
还可以理解的是,当所述第一触控操作中包含四个触控点时,电子设备对该目标分屏执行长截屏操作,其中,该长截屏的截图范围可以除目标分屏的当前显示内容外,还包括四个触控点在第一屏幕内移动了大于或等于第一预设距离阈值的移动操作后,四个触控点在该移动操作的终止位置所停留的时间成正比的目标分屏的下一页的显示内容;或者,该长截屏的截图范围可以除目标分屏的当前显示内容外,还包括四个触控点在第一屏幕内移动了大于或等于第一预设距离阈值的移动操作后,四个触控点在该移动操作的终止位置对应多个压力值成正比的目标分屏的下一页的显示内容;例如:用户进行四指下滑时,在终止位置停留的时间越长,最终获得的长截图对应的显示内容就越多;又例如:用户进行四指下滑时,在终止位置对应的压力值越大,最终获得的长截图对应的显示内容就越多。
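The three alternative triggers for the long screenshot described above can be folded into a single predicate. The following Kotlin sketch is a hypothetical illustration; the configuration fields, their values, and the repeat-gesture bookkeeping are assumptions, not limits defined by this application.

```kotlin
// Hypothetical sketch of the long-screenshot trigger for the target split screen.
data class LongShotConfig(
    val secondDistanceThreshold: Double,  // larger than the first preset distance threshold
    val minHeightRatio: Double,           // secondDistanceThreshold / targetHeight must exceed this
    val repeatWindowMillis: Long          // the "second time period" for a repeated gesture
)

fun shouldLongCapture(
    moveDistance: Double,          // distance the fingers moved inside the target split screen
    targetHeight: Double,          // side of the target split screen along the move direction
    touchPointCount: Int,
    lastGestureAgoMillis: Long?,   // time since the previous first touch operation, if any
    cfg: LongShotConfig
): Boolean {
    val longSwipe = moveDistance >= cfg.secondDistanceThreshold &&
            cfg.secondDistanceThreshold / targetHeight > cfg.minHeightRatio
    val repeatedGesture = lastGestureAgoMillis != null &&
            lastGestureAgoMillis <= cfg.repeatWindowMillis
    val fourFingers = touchPointCount == 4
    // Any one of the three conditions triggers the long screenshot of the target split screen.
    return longSwipe || repeatedGesture || fourFingers
}
```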
步骤S503:若多个触控点移动时的起始位置均在第一屏幕的目标分屏内,将目标分屏内的当前显示内容截取为第一截图。
具体的,电子设备若确定第一触控操作中多个触控点移动时的起始位置均在第一屏幕的目标分屏内,将目标分屏内的当前显示内容截取为第一截图。请参考附图5D,图5D是本申请实施例中提供的一组截图单个屏幕并分享至当前分屏中的用户界面示意图,其中,图5D中的第(1)张图片描述了在目标窗口(即,应用B)中的三指下滑操作(即,第一触控操作);第(2)张图片描述了在目标窗口中三指下滑时,仅截取该目标窗口的当前界面获得了第一截图,其中,该第一截图可以接收触控操作(如,拖拽操作),该第一截图还可以是截图缩略图;第(3)张图片描述了第一截图接收了拖拽操作后被分享至应用C。第(4)张图片描述了前三张图片所述的方法流程,即,例如:当用户使用折叠屏设备进行多窗口操作时,如果用户使用三指下滑截屏手势进行截屏操作,本申请将通过判断用户截屏手势(三指下滑)的起始位置,提前判断用户的意图。如果三指下滑起点落在单一窗口的顶部,将对该窗口进行截屏操作。这种直接根据多个触控点的起始位置是否都单独在同一个屏幕内,来获得单独一个屏幕的截图,可以灵活的在多个窗口任务下,快速获取目标分屏区域的截图,使得用户能够在较短的时间内获得截图,同时不需要在图片编辑软件中进行二次编辑,简化了用户截图操作。
可选的,电子设备若确定所述多个触控点移动时的起始位置不均在所述第一屏幕的目标分屏内,将所述第一屏幕内的当前显示内容截取为第三截图并保存至所述图库。请参考附图5E,图5E是本申请实施例中提供的一组截图整个屏幕并分享至当前分屏中的用户界面示意图,其中,图5E中的第(1)张图片描述了用户在至少两个窗口(即,应用A和应用B)内进行三指下滑操作(即,第一触控操作);第(2)张图片描述了在至少两个窗口中三指下滑后,截取了该第一屏幕的当前显示界面的显示内容获得了第三截图;第(3)张图片描述了应用C对应的截图缩略图接收了拖拽操作后被分享至应用A,其中,拖拽操作的终止位置为应用A所在的分屏。
步骤S504:若多个触控点移动时的起始位置均在第一屏幕的目标分屏内,将第一屏幕内N个分屏的当前显示内容分别截取为对应的截图,并生成第二截图。
具体的,若电子设备确定所述多个触控点移动时的起始位置均在所述第一屏幕的目标分屏内,所述方法还包括:电子设备生成的所述第二截图中包括所述N个分屏对应的截图,所述截图以对应的分屏在所述第一屏幕内的分布方式在所述第二截图内排列,且所述N个分屏对应的截图均为准许分别接收触控操作的缩略图。可以理解的,电子设备若确定第一触控操作中多个触控点移动时的起始位置均在第一屏幕的目标分屏内,不仅仅将目标分屏内的当前显示内容截取为第一截图,还可以将所述第一屏幕内所述N个分屏的当前显示内容分别截取为对应的截图;生成第二截图,所述第二截图中包括所述N个分屏对应的截图,所述截图以对应的分屏在所述第一屏幕内的分布方式在所述第二截图内排列,且所述N个分屏对应的截图均为准许分别接收触控操作的缩略图。请参考附图5F,图5F是本申请实施例中提供的另一组截图单个屏幕并分享至当前分屏中的用户界面示意图,其中,图5F中的第(1)张图片描述了在目标窗口(即,应用B)中的三指下滑操作(即,第一触控操作);第(2)张图片描述了在目标窗口中三指下滑时,截取了该第一屏幕的当前显示界面的所有 分屏的显示内容获得了第二截图,其中,该第二截图包括了应用A对应的截图缩略图、应用B对应的截图缩略图、应用C对应的截图缩略图,其中,这三张截图缩略图可以分别独立接收触控操作(如,拖拽操作),如第(3)张图片描述了应用C对应的截图缩略图接收了拖拽操作;第(4)张图片描述了应用C对应的截图缩略图接收了拖拽操作后被分享至应用B。需要说明的是,第一截图和第二截图悬浮在第一屏幕上,且第一截图与第二截图之间没有交集。
可选的,接收指关节截屏操作或者按键组合按压截屏操作;根据所述指关节截屏操作指令或者所述按键组合按压截屏操作指令,将所述第一屏幕内所有分屏的当前显示内容分别截取为对应的截图;生成第二截图,所述第二截图中包括所述N个分屏对应的截图,所述截图以对应的分屏在所述第一屏幕内的分布方式在所述第二截图内排列,且所述N个分屏对应的截图中的每一个截图均为准许分别接收触控操作的缩略图。请参考附图5G,图5G是本申请实施例中提供的又一组截图单个屏幕并分享至当前分屏中的用户界面示意图,其中,图5G中的第(1)张图片描述了电子设备在接收了指关节截屏操作或者按键组合按压截屏操作;第(2)张图片描述了电子设备在接收了指关节截屏操作或者按键组合按压截屏操作后,截取了该第一屏幕的当前显示界面的所有分屏的显示内容获得了第二截图,其中,该第二截图包括了应用A对应的截图缩略图、应用B对应的截图缩略图、应用C对应的截图缩略图,其中,这三张截图缩略图可以分别独立接收触控操作(如,拖拽操作),如第(3)张图片描述了应用C对应的截图缩略图接收了拖拽操作;第(4)张图片描述了应用C对应的截图缩略图接收了拖拽操作后被分享至应用B。
步骤S505:接收第二触控操作。
具体的,所述第二触控操作为对所述第一截图或所述第二截图内目标截屏缩略图的点击操作,所述目标截屏缩略图为所述第一截图或所述第二截图内所述N个分屏对应的截屏缩略图中的至少一个。需要说明的是,第二触控操作中所包含的目标截屏缩略图可能是第一截图或第二截图内所有N个分屏对应的截屏缩略图中的多个,因此,该多个截图可以个根据第二触控操作拼接成一张截图,若该多个截图拼接一起的图形形状不规则,则电子设备则可以自动补充缺失的部分,使得拼接后的图形的几何形状为规则的。
步骤S506:根据第二触控操作,将目标截屏缩略图保存至图库,并删除第一截图或第二截图内除目标截屏缩略图外所有的截图。
可选的,若在接收到所述第一触控操作、所述指关节截屏操作指令或者所述按键组合按压截屏操作指令中的任何一个操作之后的第一时间段内,未接收到所述第二触控操作,则将所述第二截图内的所有截屏拼接成一张图,并与所述第一截图保存至所述图库。请参考附图5H,图5H是本申请实施例中提供的另一组截取整个屏幕并分享至当前分屏中的用户界面示意图,其中,图5H中的第(1)张图片描述了电子设备在接收了指关节截屏操作或者按键组合按压截屏操作;第(2)张图片描述了电子设备在接收了指关节截屏操作或者按键组合按压截屏操作后的第一时间段内,因未接收到所述第二触控操作,所以将所述第二截图内的所有截屏拼接成一张截图,该截图包括了应用A对应的截图缩略图、应用B对应的截图缩略图、应用C对应的截图缩略图;第(3)张图片描述了该拼接后的截图缩略图接收了拖拽操作;第(4)张图片描述了拼接后的截图缩略图接收了拖拽操作后被分享至应 用C。第(5)张图片描述了前四张图片所述的方法流程,即,例如:当用户使用折叠屏设备进行多窗口操作时,如果用户使用指关节截屏或者按键组合按压截屏时,现行截屏方案会将当前屏幕的所有窗口进行截屏操作。本申请将所有窗口完成截屏操作后的缩略图以拼图的方式呈现在屏幕的一角,用户稍后可对截屏进行按需选择,在一定时间内用户没有进行选择时,多窗口截屏的缩略图将拼接成一整张缩略图,所有截图将拼接成一张图保存至图库。
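The timeout behaviour described in this paragraph (no second touch operation within the first time period, so the per-split thumbnails are spliced into one image and saved) might be sketched as follows using the standard Android `Bitmap` and `Canvas` classes. The layout assumption, that each thumbnail keeps the position its split screen occupied on the first screen and any uncovered area is padded with white so the result stays rectangular, is an illustration of the "automatically supplement the missing parts" behaviour, not the published implementation.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color

// Hypothetical sketch: stitch the per-split-screen thumbnails into one rectangular image.
data class PlacedShot(val bitmap: Bitmap, val left: Int, val top: Int)

fun stitchScreenshots(shots: List<PlacedShot>): Bitmap {
    val width = shots.maxOf { it.left + it.bitmap.width }
    val height = shots.maxOf { it.top + it.bitmap.height }
    val stitched = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(stitched)
    canvas.drawColor(Color.WHITE)  // pad missing regions so the geometry stays rectangular
    for (shot in shots) {
        canvas.drawBitmap(shot.bitmap, shot.left.toFloat(), shot.top.toFloat(), null)
    }
    return stitched
}
```

Saving the stitched bitmap to the gallery and the first-time-period timer itself are left out; they would go through whatever timer and media-store facilities the device already uses.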
步骤S507:接收对目标截屏缩略图的拖拽操作。
具体的,所述拖拽操作为通过至少一个触控点将所述目标截屏缩略图进行移动的操作。当用户使用折叠屏设备进行多窗口操作,想要在应用C内插入应用B的截图时,用户只需在应用B内,从窗口顶端开始进行三指下滑,即可完成对应用B界面的截屏操作。对应用B的截屏缩略图进行拖拽并在应用C窗口上投放该缩略图时,即可在应用C内插入该截图,完成内容分享。例如:上述附图5D中的第(2)张图片所描述的操作、上述附图5E中的第(2)张图片所描述的操作、上述附图5F中的第(3)张图片所描述的操作、上述附图5G中的第(3)张图片所描述的操作以及上述附图5H中的第(3)张图片所描述的操作。
步骤S508:根据拖拽操作,控制目标截屏缩略图分享或插入至拖拽操作移动的终止位置所在的分屏区域内。
具体的,电子设备可以根据拖拽操作,控制所述目标截屏缩略图分享或插入至所述拖拽操作移动的终止位置所在的分屏区域内。例如:上述附图5D中的第(3)张图片所描述的情形、上述附图5E中的第(3)张图片所描述的情形、上述附图5F中的第(4)张图片所描述的情形、上述附图5G中的第(4)张图片所描述的情形以及上述附图5H中的第(4)张图片所描述的情形。请参考附图5I,图5I是本申请实施例中提供的一组实际应用中的截取单个屏幕并分享至当前界面包含的应用中的用户界面示意图,其中,图5I中的第(1)张图片描述了在图库所在分屏中的三指下滑操作;第(2)张图片描述了在图库所在分屏中三指下滑后,截取了该图库的显示内容获得了第一截图,并对第一截图进行了拖拽并分享至电话联系人所在的分屏中,即可在电话联系人内插入该截图。
在本申请实施例中,电子设备可以从接收的多个触控操作中,确定第一触控操作,并判断第一触控操作中多个触控点移动时的起始位置是否均在第一屏幕的目标分屏内。若是,则将目标分屏内的当前显示内容截取为第一截图,并截取第一屏幕内的当前显示内容生成第二截图;若否,则将第一屏幕内的当前显示内容截取为第三截图。因此,这种判断出多个触控点的起始位置都在同一个屏幕内后,可以在多个窗口中直接获得单独一个屏幕的截图,可以实现灵活的快速获取目标分屏区域的截图的目的,使得用户能够在较短的时间内获得截图,同时不需要先将整个显示屏幕截取后,将获得的整个屏幕的截图放入图片编辑软件中手动操作去掉截屏图片中其他屏幕内显示的信息,简化了用户截图操作。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为 依据本申请,某些步骤可能可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如上述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以为个人计算机、服务端或者网络设备等,具体可以是计算机设备中的处理器)执行本申请各个实施例上述方法的全部或部分步骤。其中,而前述的存储介质可包括:U盘、移动硬盘、磁碟、光盘、只读存储器(Read-OnlyMemory,缩写:ROM)或者随机存取存储器(RandomAccessMemory,缩写:RAM)等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (18)

  1. 一种屏幕截取方法,其特征在于,包括:
    确定第一触控操作,所述第一触控操作为多个触控点在第一屏幕内移动了大于或等于第一预设距离阈值的移动操作,所述第一屏幕包括N个分屏,其中,N为大于1的正整数;
    判断所述第一触控操作中所述多个触控点移动时的起始位置是否均在所述第一屏幕的目标分屏内,所述目标分屏为所述N个分屏中的任意一个;
    若所述多个触控点移动时的起始位置均在所述第一屏幕的目标分屏内,将所述目标分屏内的当前显示内容截取为第一截图。
  2. 根据权利要求1所述的方法,其特征在于,所述第一截图为缩略图;若所述多个触控点移动时的起始位置均在所述第一屏幕的目标分屏内,所述方法还包括:
    将所述第一屏幕内所述N个分屏的当前显示内容分别截取为对应的截图;
    生成第二截图,所述第二截图中包括所述N个分屏对应的截图,所述截图以对应的分屏在所述第一屏幕内的分布方式在所述第二截图内排列,且所述N个分屏对应的截图均为准许分别接收触控操作的缩略图。
  3. 根据权利要求1所述的方法,其特征在于,所述第一截图为缩略图;所述方法还包括:
    接收指关节截屏操作或者按键组合按压截屏操作;
    根据所述指关节截屏操作指令或者所述按键组合按压截屏操作指令,将所述第一屏幕内所有分屏的当前显示内容分别截取为对应的截图;
    生成第二截图,所述第二截图中包括所述N个分屏对应的截图,所述截图以对应的分屏在所述第一屏幕内的分布方式在所述第二截图内排列,且所述N个分屏对应的截图中的每一个截图均为准许分别接收触控操作的缩略图。
  4. 根据权利要求2或3所述的方法,其特征在于,所述方法还包括:
    接收第二触控操作,所述第二触控操作为对所述第一截图或所述第二截图内目标截屏缩略图的点击操作,所述目标截屏缩略图为所述第一截图或所述第二截图内所述N个分屏对应的截屏缩略图中的至少一个;
    根据所述第二触控操作,将所述目标截屏缩略图保存至图库,并删除所述第一截图或所述第二截图内除所述目标截屏缩略图外所有的截图。
  5. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    接收对所述目标截屏缩略图的拖拽操作,所述拖拽操作为通过至少一个触控点将所述目标截屏缩略图进行移动的操作;
    根据所述拖拽操作,控制所述目标截屏缩略图分享或插入至所述拖拽操作移动的终止位置所在的分屏区域内。
  6. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    若在接收到所述第一触控操作、所述指关节截屏操作指令或者所述按键组合按压截屏操作指令中的任何一个操作之后的第一时间段内,未接收到所述第二触控操作,则将所述第二截图内的所有截屏拼接成一张图,并与所述第一截图保存至所述图库。
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    若所述多个触控点移动时的起始位置不均在所述第一屏幕的目标分屏内,将所述第一屏幕内的当前显示内容截取为第三截图并保存至所述图库。
  8. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    若所述多个触控点在所述目标分屏内移动时移动了大于或等于第二预设距离阈值的距离长度,所述第二预设距离阈值大于所述第一预设距离阈值,且所述第二预设距离阈值与所述目标分屏高度之间的比例大于预设比例阈值,则对所述目标分屏的分屏区域执行长截屏操作;或者,
    若在接收到所述第一触控操作之后的第二时间段内,再次接收到对所述目标分屏的所述第一触控操作,则对所述目标分屏的分屏区域执行所述长截屏操作;或者,
    若所述第一触控操作中包含四个触控点,则对所述目标分屏的分屏区域执行所述长截屏操作。
  9. 一种电子设备,其特征在于,包括:一个或多个处理器、存储器、一个或多个按键;
    所述存储器、所述显示屏、所述一个或多个按键与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述一个或多个处理器执行所述计算机指令以执行:
    确定第一触控操作,所述第一触控操作为多个触控点在第一屏幕内移动了大于或等于第一预设距离阈值的移动操作,所述第一屏幕包括N个分屏,其中,N为大于1的正整数;
    判断所述第一触控操作中所述多个触控点移动时的起始位置是否均在所述第一屏幕的目标分屏内,所述目标分屏为所述N个分屏中的任意一个;
    若所述多个触控点移动时的起始位置均在所述第一屏幕的目标分屏内,将所述目标分屏内的当前显示内容截取为第一截图。
  10. 根据权利要求9所述的电子设备,其特征在于,所述第一截图为缩略图;所述处理器还用于:
    若所述多个触控点移动时的起始位置均在所述第一屏幕的目标分屏内,将所述第一屏幕内所述N个分屏的当前显示内容分别截取为对应的截图;
    生成第二截图,所述第二截图中包括所述N个分屏对应的截图,所述截图以对应的分屏在所述第一屏幕内的分布方式在所述第二截图内排列,且所述N个分屏对应的截图均为准许分别接收触控操作的缩略图。
  11. 根据权利要求9所述的电子设备,其特征在于,所述第一截图为缩略图;所述处理器还用于:
    接收指关节截屏操作或者按键组合按压截屏操作;
    根据所述指关节截屏操作指令或者所述按键组合按压截屏操作指令,将所述第一屏幕内所有分屏的当前显示内容分别截取为对应的截图;
    生成第二截图,所述第二截图中包括所述N个分屏对应的截图,所述截图以对应的分屏在所述第一屏幕内的分布方式在所述第二截图内排列,且所述N个分屏对应的截图中的每一个截图均为准许分别接收触控操作的缩略图。
  12. 根据权利要求10或11所述的电子设备,其特征在于,所述处理器还用于:
    接收第二触控操作,所述第二触控操作为对所述第一截图或所述第二截图内目标截屏缩略图的点击操作,所述目标截屏缩略图为所述第一截图或所述第二截图内所述N个分屏对应的截屏缩略图中的至少一个;
    根据所述第二触控操作,将所述目标截屏缩略图保存至图库,并删除所述第一截图或所述第二截图内除所述目标截屏缩略图外所有的截图。
  13. 根据权利要求12所述的电子设备,其特征在于,所述处理器还用于:
    接收对所述目标截屏缩略图的拖拽操作,所述拖拽操作为通过至少一个触控点将所述目标截屏缩略图进行移动的操作;
    根据所述拖拽操作,控制所述目标截屏缩略图分享或插入至所述拖拽操作移动的终止位置所在的分屏区域内。
  14. 根据权利要求12所述的电子设备,其特征在于,所述处理器还用于:
    若在接收到所述第一触控操作、所述指关节截屏操作指令或者所述按键组合按压截屏操作指令中的任何一个操作之后的第一时间段内,未接收到所述第二触控操作,则将所述第二截图内的所有截屏拼接成一张图,并与所述第一截图保存至所述图库。
  15. 根据权利要求9所述的电子设备,其特征在于,所述处理器还用于:
    若所述多个触控点移动时的起始位置不均在所述第一屏幕的目标分屏内,将所述第一屏幕内的当前显示内容截取为第三截图并保存至所述图库。
  16. 根据权利要求9所述的电子设备,其特征在于,所述处理器还用于:
    若所述多个触控点在所述目标分屏内移动时移动了大于或等于第二预设距离阈值的距离长度,所述第二预设距离阈值大于所述第一预设距离阈值,且所述第二预设距离阈值与所述目标分屏高度之间的比例大于预设比例阈值,则对所述目标分屏的分屏区域执行长截屏操作;或者,
    若在接收到所述第一触控操作之后的第二时间段内,再次接收到对所述目标分屏的所 述第一触控操作,则对所述目标分屏的分屏区域执行所述长截屏操作;或者,
    若所述第一触控操作中包含四个触控点,则对所述目标分屏的分屏区域执行所述长截屏操作。
  17. 一种计算机存储介质,其特征在于,所述计算机存储介质存储有计算机程序,该计算机程序被处理器执行时实现上述权利要求1-9中任意一项所述的方法。
  18. 一种计算机程序,其特征在于,所述计算机程序包括指令,当所述计算机程序被计算机执行时,使得所述计算机执行如权利要求1-9中任意一项所述的方法。
PCT/CN2020/113053 2019-09-06 2020-09-02 一种屏幕截取方法及相关设备 WO2021043171A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP20861361.2A EP4024188A4 (en) 2019-09-06 2020-09-02 SCREEN CAPTURE METHOD AND ASSOCIATED DEVICE
US17/640,486 US11922005B2 (en) 2019-09-06 2020-09-02 Screen capture method and related device
JP2022514716A JP7385008B2 (ja) 2019-09-06 2020-09-02 スクリーン取り込み方法及び関連するデバイス
CN202080059571.6A CN114270302A (zh) 2019-09-06 2020-09-02 一种屏幕截取方法及相关设备
JP2023191427A JP2024020334A (ja) 2019-09-06 2023-11-09 スクリーン取り込み方法及び関連するデバイス

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910846710.9A CN110737386A (zh) 2019-09-06 2019-09-06 一种屏幕截取方法及相关设备
CN201910846710.9 2019-09-06

Publications (1)

Publication Number Publication Date
WO2021043171A1 true WO2021043171A1 (zh) 2021-03-11

Family

ID=69267509

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/113053 WO2021043171A1 (zh) 2019-09-06 2020-09-02 一种屏幕截取方法及相关设备

Country Status (5)

Country Link
US (1) US11922005B2 (zh)
EP (1) EP4024188A4 (zh)
JP (2) JP7385008B2 (zh)
CN (2) CN110737386A (zh)
WO (1) WO2021043171A1 (zh)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110737386A (zh) 2019-09-06 2020-01-31 华为技术有限公司 一种屏幕截取方法及相关设备
CN111338543B (zh) * 2020-02-24 2021-12-24 联想(北京)有限公司 一种屏幕截取方法和电子设备
CN113448658A (zh) * 2020-03-24 2021-09-28 华为技术有限公司 截屏处理的方法、图形用户接口及终端
CN111399735B (zh) * 2020-04-16 2022-04-12 Oppo广东移动通信有限公司 一种截屏方法、截屏装置、电子设备和存储介质
CN111625311B (zh) * 2020-05-18 2023-05-26 Oppo(重庆)智能科技有限公司 控制方法、装置、电子设备和存储介质
CN113821130A (zh) * 2020-06-20 2021-12-21 华为技术有限公司 一种确定截图区域的方法及相关装置
KR20220016727A (ko) * 2020-08-03 2022-02-10 삼성전자주식회사 다중 윈도우 이미지 캡쳐 방법 및 이를 위한 전자 장치
KR20220017065A (ko) * 2020-08-04 2022-02-11 삼성전자주식회사 캡쳐 기능 제공 방법 및 그 전자 장치
CN114077365A (zh) * 2020-08-21 2022-02-22 荣耀终端有限公司 分屏显示方法和电子设备
CN112150361A (zh) * 2020-09-17 2020-12-29 珠海格力电器股份有限公司 一种滚动截屏方法、装置、设备及介质
CN112783406B (zh) * 2021-01-26 2023-02-03 维沃移动通信有限公司 操作执行方法、装置和电子设备
CN114911400A (zh) * 2021-02-08 2022-08-16 花瓣云科技有限公司 分享图片的方法和电子设备
CN115145457A (zh) * 2021-03-31 2022-10-04 华为技术有限公司 一种滚动截屏的方法及装置
CN113253905B (zh) * 2021-05-26 2022-10-25 青岛海信移动通信技术股份有限公司 基于多指操作的触控方法及智能终端
CN113419655A (zh) * 2021-05-28 2021-09-21 广州三星通信技术研究有限公司 用于电子终端的应用截屏方法和装置
CN113535301A (zh) * 2021-07-13 2021-10-22 深圳传音控股股份有限公司 截屏方法、移动终端及存储介质
CN113778279A (zh) * 2021-08-31 2021-12-10 维沃移动通信有限公司 截图方法、装置及电子设备
CN113900615A (zh) * 2021-09-17 2022-01-07 北京鲸鲮信息系统技术有限公司 分屏模式下的数据分享方法、装置、电子设备和存储介质
CN114610435A (zh) * 2022-03-29 2022-06-10 联想(北京)有限公司 一种处理方法和电子设备
CN114779976A (zh) * 2022-04-24 2022-07-22 Oppo广东移动通信有限公司 截屏方法、装置、电子设备及计算机可读介质
CN114840128A (zh) * 2022-05-16 2022-08-02 Oppo广东移动通信有限公司 信息分享方法、装置、电子设备及计算机可读介质
CN114860135A (zh) * 2022-05-23 2022-08-05 维沃移动通信有限公司 截图方法和装置
CN115933954A (zh) * 2022-07-25 2023-04-07 Oppo广东移动通信有限公司 截屏方法、装置、电子设备以及存储介质
CN117666877A (zh) * 2022-08-26 2024-03-08 Oppo广东移动通信有限公司 截屏方法、装置、存储介质以及终端
CN117762889B (zh) * 2024-02-20 2024-04-19 成都融见软件科技有限公司 同文件多窗口状态同步方法、电子设备和介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150056346A (ko) * 2013-11-15 2015-05-26 엘지전자 주식회사 다중 창 화면 캡쳐 처리를 위한 단말기
CN107896279A (zh) * 2017-11-16 2018-04-10 维沃移动通信有限公司 一种移动终端的截屏处理方法、装置及移动终端
CN108984091A (zh) * 2018-06-27 2018-12-11 Oppo广东移动通信有限公司 截屏方法、装置、存储介质及电子设备
CN109240577A (zh) * 2018-09-25 2019-01-18 维沃移动通信有限公司 一种截屏方法及终端
CN109388304A (zh) * 2018-09-28 2019-02-26 维沃移动通信有限公司 一种截屏方法及终端设备
CN110032418A (zh) * 2019-04-16 2019-07-19 珠海格力电器股份有限公司 一种截图方法、系统、终端设备及计算机可读存储介质
CN110737386A (zh) * 2019-09-06 2020-01-31 华为技术有限公司 一种屏幕截取方法及相关设备

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130227457A1 (en) 2012-02-24 2013-08-29 Samsung Electronics Co. Ltd. Method and device for generating captured image for display windows
JP6080667B2 (ja) 2013-04-18 2017-02-15 三菱電機株式会社 映像信号処理装置及び方法、並びにプログラム及び記録媒体
KR102088911B1 (ko) * 2013-04-18 2020-03-13 엘지전자 주식회사 이동 단말기 및 그것의 제어 방법
CN104808973A (zh) * 2014-01-24 2015-07-29 阿里巴巴集团控股有限公司 截屏系统和实现截屏的方法
US9258480B2 (en) * 2014-03-31 2016-02-09 Facebook, Inc. Techniques to selectively capture visual media using a single interface element
CN105278824B (zh) * 2014-07-31 2018-06-26 维沃移动通信有限公司 一种终端设备的截屏方法及其终端设备
US10466878B2 (en) * 2014-09-04 2019-11-05 Huawei Technologies Co., Ltd. Screen capturing method and apparatus
WO2017118391A1 (zh) * 2016-01-05 2017-07-13 腾讯科技(深圳)有限公司 Screenshot method and screenshot apparatus
US10324828B2 (en) * 2016-03-28 2019-06-18 Dropbox, Inc. Generating annotated screenshots based on automated tests
US10084970B2 (en) * 2016-12-05 2018-09-25 International Institute Of Information Technology, Hyderabad System and method for automatically generating split screen for a video of a dynamic scene
US10783320B2 (en) * 2017-05-16 2020-09-22 Apple Inc. Device, method, and graphical user interface for editing screenshot images
CN107301013A (zh) * 2017-06-21 2017-10-27 深圳天珑无线科技有限公司 Terminal screenshot method and apparatus
CN109426420A (zh) * 2017-08-24 2019-03-05 西安中兴新软件有限责任公司 Picture sending method and apparatus for a dual-screen terminal
US20190079663A1 (en) * 2017-09-14 2019-03-14 Samsung Electronics Co., Ltd. Screenshot method and screenshot apparatus for an electronic terminal
US20190114065A1 (en) * 2017-10-17 2019-04-18 Getac Technology Corporation Method for creating partial screenshot
CN107977144B (zh) * 2017-12-15 2020-05-12 维沃移动通信有限公司 Screenshot processing method and mobile terminal
CN108632676B (zh) * 2018-05-11 2022-02-22 腾讯科技(深圳)有限公司 Image display method and apparatus, storage medium, and electronic apparatus
CN109358791A (zh) * 2018-09-10 2019-02-19 珠海格力电器股份有限公司 Screenshot method and apparatus, storage medium, and mobile terminal
CN109683761B (zh) * 2018-12-17 2021-07-23 北京小米移动软件有限公司 Content collection method and apparatus, and storage medium
CN109976655B (zh) * 2019-03-22 2024-03-26 Oppo广东移动通信有限公司 Long screenshot method and apparatus, terminal, and storage medium
CN110222212B (zh) * 2019-04-25 2021-07-20 南京维沃软件技术有限公司 Display control method and terminal device
CN110096326B (zh) * 2019-04-30 2021-08-17 维沃移动通信有限公司 Screenshot method, terminal device, and computer-readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150056346A (ko) * 2013-11-15 2015-05-26 엘지전자 주식회사 Terminal for multi-window screen capture processing
CN107896279A (zh) * 2017-11-16 2018-04-10 维沃移动通信有限公司 Screenshot processing method and apparatus for a mobile terminal, and mobile terminal
CN108984091A (zh) * 2018-06-27 2018-12-11 Oppo广东移动通信有限公司 Screenshot method and apparatus, storage medium, and electronic device
CN109240577A (zh) * 2018-09-25 2019-01-18 维沃移动通信有限公司 Screenshot method and terminal
CN109388304A (zh) * 2018-09-28 2019-02-26 维沃移动通信有限公司 Screenshot method and terminal device
CN110032418A (zh) * 2019-04-16 2019-07-19 珠海格力电器股份有限公司 Screenshot method and system, terminal device, and computer-readable storage medium
CN110737386A (zh) * 2019-09-06 2020-01-31 华为技术有限公司 Screen capture method and related device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4024188A4

Also Published As

Publication number Publication date
JP2024020334A (ja) 2024-02-14
JP7385008B2 (ja) 2023-11-21
US11922005B2 (en) 2024-03-05
EP4024188A1 (en) 2022-07-06
US20220334697A1 (en) 2022-10-20
EP4024188A4 (en) 2022-11-09
CN110737386A (zh) 2020-01-31
CN114270302A (zh) 2022-04-01
JP2022547892A (ja) 2022-11-16

Similar Documents

Publication Publication Date Title
WO2021043171A1 (zh) Screen capture method and related device
WO2021000803A1 (zh) Method for controlling a small screen window and related device
WO2021013158A1 (zh) Display method and related apparatus
WO2021129326A1 (zh) Screen display method and electronic device
WO2021103981A1 (zh) Split-screen display processing method and apparatus, and electronic device
WO2020052529A1 (zh) Method for quickly bringing up a small window while displaying a video in full screen, graphical user interface, and terminal
WO2021139768A1 (zh) Interaction method for cross-device task processing, electronic device, and storage medium
WO2021000839A1 (zh) Split-screen method and electronic device
WO2021036571A1 (zh) Desktop editing method and electronic device
WO2021000881A1 (zh) Split-screen method and electronic device
WO2021082835A1 (zh) Method for starting a function and electronic device
CN113542503B (zh) Method for creating an application shortcut, electronic device, and system
WO2021063098A1 (zh) Touchscreen response method and electronic device
WO2022068483A1 (zh) Application launch method and apparatus, and electronic device
WO2020238759A1 (zh) Interface display method and electronic device
WO2022017393A1 (zh) Display interaction system, display method, and device
WO2020155875A1 (zh) Display method for an electronic device, graphical user interface, and electronic device
CN113986070B (zh) Method for quickly viewing application cards and electronic device
WO2022057852A1 (zh) Interaction method between multiple applications
WO2022042326A1 (zh) Display control method and related apparatus
WO2022057889A1 (zh) Method for translating an application interface and related device
WO2022063159A1 (zh) File transfer method and related device
WO2021190524A1 (zh) Screenshot processing method, graphical user interface, and terminal
US20230236714A1 (en) Cross-Device Desktop Management Method, First Electronic Device, and Second Electronic Device
WO2022002213A1 (zh) Translation result display method and apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20861361

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022514716

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020861361

Country of ref document: EP

Effective date: 20220331