WO2020063091A1 - Picture processing method and terminal device - Google Patents

Picture processing method and terminal device

Publication number
WO2020063091A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
input
terminal device
screenshot
picture
Application number
PCT/CN2019/098264
Other languages
English (en)
Chinese (zh)
Inventor
孟秋月
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Publication of WO2020063091A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 — User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/07 — User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents
    • H04L 51/10 — Multimedia information
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0487 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Description

  • Embodiments of the present invention relate to the field of communications technologies, and in particular, to a picture processing method and a terminal device.
  • a user can trigger a terminal device to take a screenshot of the terminal device's display screen to obtain a screenshot image containing the content displayed on the display screen.
  • The user can trigger the terminal device to process the screenshot picture through various sub-functions of the screenshot function to obtain a processed picture. For example, the user can trigger the terminal device to doodle on the screenshot picture through the doodle sub-function of the screenshot function, or to trim the screenshot picture through the trimming sub-function of the screenshot function.
  • With the trimming sub-function, the user can trigger the terminal device to trim a region out of the screenshot picture to obtain a picture containing that region.
  • However, a picture obtained through the trimming sub-function of the screenshot function may still include areas that the user does not want to keep. To trim out these areas, picture processing software in the terminal device has to be used to reprocess the picture, which makes the processing of screenshot pictures tedious and complicated.
  • Embodiments of the present invention provide a picture processing method and a terminal device, so as to solve the problem of tedious and complicated processing of the existing screen shot pictures.
  • the present invention is implemented as follows:
  • an embodiment of the present invention provides a picture processing method, which is applied to a terminal device.
  • The method includes: when the screenshot interface of the terminal device displays a target screenshot picture, receiving a first input from a user on the screenshot interface; and, in response to the first input, determining N target regions in the target screenshot picture and generating a target picture according to the content of the N target regions, where N is an integer greater than or equal to 2.
  • an embodiment of the present invention provides a terminal device.
  • the terminal device includes a receiving module and a processing module.
  • the receiving module is configured to receive a first input of the user on the screen capture interface when the screen capture interface of the terminal device displays the target screen capture image;
  • The processing module is configured to: in response to the first input received by the receiving module, determine N target regions in the target screenshot picture and generate a target picture according to the content of the N target regions, where N may be an integer greater than or equal to 2.
  • an embodiment of the present invention provides a terminal device.
  • the terminal device includes a processor, a memory, and a computer program stored on the memory and executable on the processor.
  • When the computer program is executed by the processor, the steps of the picture processing method in the first aspect are implemented.
  • an embodiment of the present invention provides a computer-readable storage medium.
  • A computer program is stored on the computer-readable storage medium; when the computer program is executed by a processor, the steps of the picture processing method in the first aspect are implemented.
  • In the embodiments of the present invention, N target areas in the target screenshot picture are determined in response to the first input, and a target picture is generated according to the content of the N target areas (N may be an integer greater than or equal to 2). Since the user can trigger the terminal device to determine at least two target areas (namely, the N target areas described above) in the target screenshot picture and to generate the target picture according to the content of those at least two target areas (i.e., the content required by the user), content that meets the user's needs can be extracted directly from the target screenshot picture displayed on the screenshot interface; that is, all of the content in the finally generated target picture is content that meets the user's needs.
  • In this way, the user only needs to input on the pictures displayed during the screen capture process to trigger the terminal device to process those pictures to meet the user's needs, and the terminal device can synthesize the processed picture or pictures to obtain a screenshot picture that meets the user's needs, without the screenshot picture having to be processed by other picture processing software, thereby simplifying the process of processing screenshot pictures.
  • FIG. 1 is a schematic structural diagram of a possible Android operating system according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a picture processing method according to an embodiment of the present invention.
  • FIG. 3 is a first schematic interface diagram of an application of a picture processing method according to an embodiment of the present invention.
  • FIG. 4 is a second schematic interface diagram of an application of a picture processing method according to an embodiment of the present invention.
  • FIG. 5 is a third schematic interface diagram of an application of a picture processing method according to an embodiment of the present invention.
  • FIG. 6 is a second schematic diagram of a picture processing method according to an embodiment of the present invention.
  • FIG. 7 is a third schematic diagram of a picture processing method according to an embodiment of the present invention.
  • FIG. 8 is a fourth schematic interface diagram of an application of a picture processing method according to an embodiment of the present invention.
  • FIG. 9 is a fifth schematic interface diagram of an application of a picture processing method according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
  • FIG. 11 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention.
  • FIG. 12 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
  • The terms “first” and “second” in the specification and claims of this application are used to distinguish different objects, rather than to describe a specific order of the objects.
  • For example, the first input and the second input are used to distinguish different inputs, not to describe a specific order of the inputs.
  • In the embodiments of the present invention, words such as “exemplary” or “for example” are used to present examples or illustrations. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present invention should not be construed as more preferred or more advantageous than other embodiments or designs. Rather, the use of the words “exemplary” or “for example” is intended to present the relevant concept in a concrete manner.
  • “A plurality” refers to two or more; for example, a plurality of processing units refers to two or more processing units.
  • An embodiment of the present invention provides a picture processing method and a terminal device. When a target screenshot picture is displayed on a screenshot interface of the terminal device, a first input from a user on the screenshot interface is received; in response to the first input, N target regions in the target screenshot picture are determined, and a target picture is generated according to the content of the N target regions (N may be an integer greater than or equal to 2). Since the user can trigger the terminal device to determine at least two target areas (namely, the N target areas described above) in the target screenshot picture and to generate the target picture according to the content of those at least two target areas (i.e., the content required by the user), content that meets the user's needs can be extracted directly from the target screenshot picture displayed on the screenshot interface; that is, all of the content in the finally generated target picture is content that meets the user's needs.
  • In this way, the user only needs to input on the pictures displayed during the screen capture process to trigger the terminal device to process those pictures to meet the user's needs, and the terminal device can synthesize the processed picture or pictures to obtain a screenshot picture that meets the user's needs, without the screenshot picture having to be processed by other picture processing software, thereby simplifying the process of processing screenshot pictures.
  • the terminal device in the embodiment of the present invention may be a terminal device having an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiment of the present invention.
  • an Android operating system is taken as an example to introduce a software environment applied to a picture processing method provided by an embodiment of the present invention.
  • FIG. 1 it is a schematic architecture diagram of a possible Android operating system provided by an embodiment of the present invention.
  • As shown in FIG. 1, the architecture of the Android operating system includes four layers: an application layer, an application framework layer, a system runtime library layer, and a kernel layer (specifically, it can be a Linux kernel layer).
  • the application layer includes various applications (including system applications and third-party applications) in the Android operating system.
  • the application framework layer is the framework of the application, and developers can develop some applications based on the application framework layer while complying with the development principles of the application framework.
  • the system runtime library layer includes a library (also called a system library) and an Android operating system operating environment.
  • the library mainly provides various resources needed by the Android operating system.
  • the Android operating system operating environment is used to provide a software environment for the Android operating system.
  • the kernel layer is the operating system layer of the Android operating system and belongs to the lowest layer of the Android operating system software layer.
  • the kernel layer is based on the Linux kernel and provides core system services and hardware-related drivers for the Android operating system.
  • A developer can develop, based on the system architecture of the Android operating system shown in FIG. 1, a software program that implements the picture processing method provided by the embodiments of the present invention, so that the picture processing method can run on the Android operating system shown in FIG. 1. That is, the processor of the terminal device can implement the picture processing method provided by the embodiments of the present invention by running the software program in the Android operating system.
  • the terminal device in this embodiment of the present invention may be a mobile terminal device or a non-mobile terminal device.
  • Exemplarily, the mobile terminal device may be a mobile phone, a tablet computer, a laptop computer, a handheld computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), etc.
  • The non-mobile terminal device may be a personal computer (PC), a television (TV), an automated teller machine, a self-service machine, etc., which is not specifically limited in the embodiments of the present invention.
  • The execution subject of the picture processing method provided by the embodiments of the present invention may be the foregoing terminal device, or may be a functional module and/or a functional entity in the terminal device capable of implementing the picture processing method; this may be determined according to actual use requirements and is not limited in the embodiments of the present invention.
  • The following uses a terminal device as an example to describe the picture processing method provided by an embodiment of the present invention.
  • an embodiment of the present invention provides a picture processing method, and the picture processing method may include the following S200-S201.
  • S200. The terminal device receives a first input from the user on the screenshot interface.
  • When the terminal device is in an unlocked state, the user can perform an input on the terminal device (for example, pressing the power button and the volume down button at the same time, or tapping a "screenshot" control on the terminal device) to trigger the terminal device to take a screenshot of its display screen.
  • the user may input a target screenshot picture displayed in the screenshot interface of the terminal device to trigger the terminal device to process the picture displayed in the screenshot interface to meet user requirements.
  • Optionally, the above screenshot interface may be an interface during a long screenshot, a screenshot editing interface after a long screenshot, or a screenshot editing interface after a short screenshot, which may be determined according to actual use requirements and is not limited in the embodiments of the present invention.
  • the above target screenshot picture may be a picture displayed during a long screenshot process, and the picture may be used to synthesize a long screenshot picture.
  • During the long screenshot process, the terminal device can display the interface of the long screenshot process (that is, the screenshot interface described above), and the pictures captured during the long screenshot process are displayed on this interface.
  • the screen capture interface may include an "exit” control, a "next screen” control, a “save” control, and the like.
  • the user can input the "Exit” control to trigger the terminal device to exit the screen capture interface; the user can also input to the "Next Screen” control to trigger the terminal device to take a screen shot of this screen image displayed during the long screen capture process to get this Screenshot of the screen, and the next screen picture in the long screen shot process is displayed on the interface; the user can also input the "Save” control to trigger the terminal device to synthesize one or more of the screen shot pictures obtained into a long screen picture, And save the long screenshot.
  • Optionally, the screenshot interface may further include an "Undo" control, a "Restore" control, and the like (as shown by the icons in the top area 31 in FIG. 3). The user can thus input on the pictures displayed during the long screenshot process and on the above-mentioned controls to trigger the terminal device to process those pictures to meet the user's requirements.
  • Optionally, the above target screenshot picture may be a screenshot picture obtained by taking a short screenshot of the display screen of the terminal device (hereinafter referred to as a short screenshot picture), or a screenshot picture obtained by the terminal device taking a long screenshot of the display screen (hereinafter referred to as a long screenshot picture).
  • The short screenshot picture may include content obtained by taking a screenshot of the terminal device's display screen while the display screen displays a single interface, and the long screenshot picture may include content obtained by taking screenshots of the display screen while the display screen continuously displays multiple interfaces.
  • the terminal device may display a screenshot editing interface on the display screen in response to the user's input, and display the screenshot picture (for example, the above target screenshot picture) on the screenshot editing interface.
  • the user can trigger the terminal device to perform trimming processing on the screen shot image through the trimming sub-function by inputting the "trim” control.
  • Specifically, through the trimming sub-function, the user can trigger the terminal device to trim out of the screenshot picture multiple areas (or the content in multiple areas) that the user does not need to retain, to obtain the target picture; alternatively, the user can trigger the terminal device to trim and splice the multiple areas (or the content within multiple areas) that the user needs to keep in the screenshot picture, to obtain the target picture.
  • This may be determined according to actual use requirements and is not limited in the embodiments of the present invention.
  • the user can trigger the terminal device to take a screenshot of the terminal device's display screen and click the "long screenshot" control (or “scroll screenshot” control) in the screenshot function.
  • the terminal device may obtain a long screenshot picture (that is, the above target screenshot picture) in response to the user's input.
  • the user can click the "Edit” control in the screen capture function.
  • In response to the user's input, the terminal device can display the screenshot editing interface on the screen of the terminal device and display the long screenshot picture on the screenshot editing interface, where the user can process the long screenshot picture.
  • FIG. 4 shows a schematic diagram of displaying a target screen shot image on a screen shot editing interface of a terminal device in a short screen shot form.
  • Specifically, the terminal device may respond to the input by displaying, on the display screen, a short screenshot picture and first sub-function controls of the screenshot function, such as the "share" control, the "edit" control, and the "long screenshot" control shown in (a) of FIG. 4.
  • the content in the short screenshot image is content obtained by taking a screenshot of the display screen of the terminal device when the display screen displays an interface.
  • The terminal device may respond to the user's input by displaying a screenshot editing interface on the display screen and displaying the short screenshot picture (that is, the above target screenshot picture) on the screenshot editing interface. The terminal device can also display various picture processing controls on the screenshot editing interface, such as the "trim" control, the "doodle" control, and the "mosaic" control shown in (b) of FIG. 4.
  • The terminal device may also display second sub-function controls of the screenshot function on the screenshot editing interface, for example, the controls shown in the top area 41 of the display screen in (b) of FIG. 4, including an "Exit" control, an "Undo" control, a "Restore" control, a "Share" control, a "Save" control, and the like.
  • FIG. 5 shows a schematic diagram of displaying a target screenshot picture on a screenshot editing interface of a terminal device in a long screenshot form.
  • Specifically, the terminal device may respond to the input by displaying a long screenshot picture and first sub-function controls of the screenshot function, for example, the "share" control, the "edit" control, and the "delete" control shown in (a) of FIG. 5.
  • the content in the long screenshot may be content obtained by the terminal device taking a screenshot of the display screen when the display screen continuously displays multiple interfaces.
  • the terminal device may display a screen capture editing interface on the display screen and display the long screen capture image (that is, the above target screen capture image) on the screen capture editing interface in response to the user's input.
  • The terminal device can also display various picture processing controls on the screenshot editing interface, such as the "trim" control, the "doodle" control, and the "mosaic" control shown in (b) of FIG. 5.
  • The terminal device may also display second sub-function controls of the screenshot function on the screenshot editing interface, for example, the controls shown in the top area 51 of the display screen in (b) of FIG. 5, including an "Exit" control, an "Undo" control, a "Restore" control, a "Share" control, a "Save" control, and the like.
  • Optionally, the first input of the user on the screenshot interface may be a tap input (such as a single-click or double-click input), a long-press input (such as pressing on the screenshot interface for a preset time), a drag input, or any other possible input, which may be determined according to actual use requirements and is not limited in the embodiments of the present invention.
  • S201. In response to the first input, the terminal device determines N target areas in the target screenshot picture and generates a target picture according to the content of the N target areas, where N may be an integer greater than or equal to 2.
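  • The generation step in S201 can be sketched in plain Python. This is only an illustrative model, not code disclosed by the patent: a screenshot is represented as a 2D list of pixel values, each target area as an assumed (top, left, bottom, right) rectangle, and the target picture is produced by cropping the N target areas and splicing their contents vertically.

```python
# Illustrative sketch only; all names and data shapes are assumptions.
# A screenshot is a 2D list of pixel values; a target area is a
# (top, left, bottom, right) rectangle in screenshot coordinates.

def crop(screenshot, area):
    """Return the sub-grid of `screenshot` covered by `area`."""
    top, left, bottom, right = area
    return [row[left:right] for row in screenshot[top:bottom]]

def generate_target_picture(screenshot, target_areas):
    """Splice the contents of N target areas (N >= 2) top-to-bottom."""
    if len(target_areas) < 2:
        raise ValueError("N must be an integer greater than or equal to 2")
    target_picture = []
    for area in target_areas:
        target_picture.extend(crop(screenshot, area))
    return target_picture

# Example: a 4x4 "screenshot" with two single-row target areas.
shot = [[r * 10 + c for c in range(4)] for r in range(4)]
result = generate_target_picture(shot, [(0, 0, 1, 4), (3, 0, 4, 4)])
```

  • The middle rows of the screenshot (the content the user does not need) simply never reach the target picture, which mirrors how the method avoids a separate trimming pass in other picture processing software.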
  • M selection boxes may be displayed by default on the screenshot interface, where M may be a positive integer.
  • the M selection boxes can be used by the user to select the content of a region in the target screenshot.
  • The size and shape of the above M selection boxes and their positions on the screenshot interface can be set by the system by default or customized by the user as needed; this may be determined according to actual use requirements and is not limited in the embodiments of the present invention. It can be understood that the user may perform a drag input on at least one of the M selection boxes to trigger the terminal device to move that selection box to a target position, or may perform a zoom input on at least one of the M selection boxes to trigger the terminal device to scale that selection box to a target size.
  • Exemplarily, the first input may include any one of (1) to (6) listed below: (1) a drag input; (2) a zoom input; (3) a drag input and a zoom input; (4) a drag input and a save input (for example, clicking the "Save" control); (5) a zoom input and a save input; (6) a drag input, a zoom input, and a save input.
  • Exemplarily, when the target screenshot picture is displayed on the screenshot interface of the terminal device, the terminal device can receive the user's input to the selection boxes on the screenshot interface (i.e., the first input); if no further input is received within a preset time period (for example, 5 seconds), then after the preset time period the terminal device may, in response to the first input, determine the N target areas in the target screenshot picture and generate a target picture according to the content of the N target areas.
  • Alternatively, the terminal device may, in response to the user's input to the selection boxes and the "Save" control on the screenshot interface (i.e., the first input), determine N target regions in the target screenshot picture and generate a target picture according to the content of the N target regions.
  • Of course, the first input may also include any other possible form of input, such as an input to a sub-function control of the screenshot function (for example, the "Undo" control or the "Restore" control); this may be determined according to actual use requirements and is not limited in the embodiments of the present invention.
  • In other words, through the above input (that is, the first input), the user can trigger the terminal device to determine at least two target areas (that is, the N target areas described above) in the target screenshot picture, and to generate the target picture according to the content of those at least two target areas (that is, the content required by the user) from the target screenshot picture displayed on the screenshot interface; that is, the content of the finally generated target picture is content that meets the user's needs.
  • In the picture processing method provided by the embodiments of the present invention, when a target screenshot picture is displayed on a screenshot interface of a terminal device, a first input from the user on the screenshot interface is received; in response to the first input, N target regions in the target screenshot picture are determined, and a target picture is generated according to the content of the N target regions (N may be an integer greater than or equal to 2). Since the user can trigger the terminal device to determine at least two target areas (namely, the N target areas described above) in the target screenshot picture and to generate the target picture according to the content of those at least two target areas (i.e., the content required by the user), content that meets the user's needs can be extracted directly from the target screenshot picture displayed on the screenshot interface; that is, all of the content in the finally generated target picture is content that meets the user's needs.
  • In this way, the user only needs to input on the pictures displayed during the screen capture process to trigger the terminal device to process those pictures to meet the user's needs, and the terminal device can synthesize the processed picture or pictures to obtain a screenshot picture that meets the user's needs, without the screenshot picture having to be processed by other picture processing software, thereby simplifying the process of processing screenshot pictures.
  • Optionally, the terminal device may also display the M selection boxes on the screenshot interface in response to a second input from the user on the screenshot interface. Specifically, the terminal device may, in response to the user's second input on the screenshot interface, display M selection boxes (that is, at least two selection boxes) in the area corresponding to the second input; the terminal device may then, in response to the user's first input on the screenshot interface, determine N target areas (that is, at least two target areas) in the target screenshot picture, so that the terminal device can generate a target picture according to the content of the N target areas.
  • the image processing method provided by the embodiment of the present invention may further include S202 and S203 described below.
  • S202. The terminal device receives a second input from the user on the screenshot interface.
  • When the screenshot interface of the terminal device displays the target screenshot picture, the user may, through the second input, trigger the terminal device to display at least two selection boxes in the area corresponding to the second input; a selection box may be used by the user to select an area in the target screenshot picture.
  • S203. In response to the second input, the terminal device displays M selection boxes in the area corresponding to the second input.
  • Each of the M selection boxes can correspond to a first area in the target screenshot picture and can be used to select the content of that first area, where M may be a positive integer.
  • the first input received by the terminal device in the foregoing S200 may be used to trigger the terminal device to determine the N target areas through the M selection boxes.
  • the above selection frame may be superimposed and displayed on the target screenshot.
  • Optionally, the selection box may be displayed superimposed on the target screenshot picture with a first transparency. If the first transparency is denoted as T1, the value of T1 may range between 0% and 100%.
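  • Displaying a selection box with a first transparency T1 can be sketched as a standard alpha blend of the box fill over a screenshot pixel. The blending model and all names here are illustrative assumptions; the patent only states that the box may be shown with a first transparency between 0% and 100%.

```python
# Illustrative sketch only: blend a selection-box fill colour over a
# screenshot pixel with transparency t1 (0.0 = opaque fill,
# 1.0 = fully transparent fill).

def blend_pixel(box_rgb, screen_rgb, t1):
    """Standard alpha blend: the higher t1, the more the screenshot shows."""
    if not 0.0 <= t1 <= 1.0:
        raise ValueError("transparency must lie between 0% and 100%")
    alpha = 1.0 - t1  # opacity of the selection-box fill
    return tuple(
        round(alpha * b + t1 * s) for b, s in zip(box_rgb, screen_rgb)
    )

# A 50%-transparent blue box fill over a white screenshot pixel.
pixel = blend_pixel((0, 0, 255), (255, 255, 255), 0.5)
```

  • With T1 near 100% the screenshot content remains fully visible through the box, which is why a non-zero transparency lets the user judge which content a selection box covers.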
  • the shape of the selection frame may be any possible shape such as rectangle, circle, triangle, diamond, or polygon, and may be specifically determined according to actual use requirements, which is not limited in the embodiment of the present invention.
  • the number of the above selection frames may be two, or three or more, which may be specifically determined according to actual use requirements, which are not limited in the embodiment of the present invention.
  • Optionally, the second input may include M sub-inputs of the user on the screenshot interface (for example, sub-input 1, sub-input 2, ..., sub-input M); accordingly, in response to each of the M sub-inputs, the terminal device can display a selection box in the area corresponding to that sub-input.
  • Optionally, the second input of the user on the screenshot interface may be a tap input (such as a single-click or double-click input), a long-press input (such as pressing on the screenshot interface for a preset time, where the preset time may be different from the preset time corresponding to the first input described in the above embodiment), a sliding input, or any other possible form of input, which may be determined according to actual use requirements and is not limited in the embodiments of the present invention.
  • for example, when the target screenshot picture is displayed on the screenshot interface of the terminal device, after the terminal device receives sub-input 1 (for example, the user clicks an area in the target screenshot picture), the terminal device can respond to sub-input 1 and display selection box 1 in the area corresponding to sub-input 1. After the terminal device receives the user's sub-input 2 on the screenshot interface, the terminal device can respond to sub-input 2 and display selection box 2 in the area corresponding to sub-input 2; and so on, until after the terminal device receives sub-input M on the screenshot interface (for example, the user clicks another area in the target screenshot picture), the terminal device can respond to sub-input M and display selection box M in the area corresponding to sub-input M. Therefore, in response to the second input (including the M sub-inputs) of the user on the screen capture interface, the terminal device may display M selection boxes in the areas corresponding to the second input.
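The one-box-per-sub-input behaviour described above can be sketched as follows. This is purely illustrative: the default marquee size and the `add_selection_box` helper are assumptions for the example, not names or values taken from the embodiment.

```python
# Hypothetical sketch: each of the M sub-inputs (taps) adds one selection
# box, represented as a (left, top, right, bottom) rectangle centred on
# the tap position. DEFAULT_W / DEFAULT_H are assumed default marquee sizes.
DEFAULT_W, DEFAULT_H = 100, 60

def add_selection_box(boxes, tap_x, tap_y):
    """Append a selection box centred on the tap position and return it."""
    box = (tap_x - DEFAULT_W // 2, tap_y - DEFAULT_H // 2,
           tap_x + DEFAULT_W // 2, tap_y + DEFAULT_H // 2)
    boxes.append(box)
    return box

boxes = []
for tap in [(120, 80), (300, 200), (150, 400)]:  # sub-input 1..M (here M = 3)
    add_selection_box(boxes, *tap)
```

After the loop, `boxes` holds one marquee per sub-input, mirroring the M selection boxes displayed in response to the second input.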
  • the terminal device may respond to the user's sliding inputs in different directions on the screen capture interface by displaying, in the area corresponding to the second input, different types of selection boxes with different effects.
  • for example, the terminal device may respond to the user's rightward swipe input on the screen capture interface by displaying, in the area corresponding to the second input, a deletion-type selection box indicating that the area and its content are to be deleted; or the terminal device may respond to the user's leftward swipe input on the screen capture interface by displaying, in the area corresponding to the second input, a retention-type selection box indicating that the area and its content are to be retained.
  • the specific behavior may be determined according to actual use requirements and is not limited in the embodiment of the present invention.
  • the user may trigger the terminal device to display a selection box of a different type inside a selection box already displayed on the screen capture interface.
  • for example, the terminal device may respond to the user's input in a retention-type selection box already displayed on the screen capture interface by displaying a deletion-type selection box within it.
  • the terminal device may merge multiple selection boxes of the same type, that is, the overlapping areas of multiple selection boxes of the same type are merged into one area.
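One way to realise the merging described above is to repeatedly replace any two same-type rectangles that overlap with their bounding rectangle. This is a sketch of one possible approach, not the algorithm claimed by the embodiment.

```python
# Illustrative sketch: merge overlapping selection boxes of the same type.
# Rectangles are (left, top, right, bottom) tuples.
def overlaps(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def merge_boxes(boxes):
    merged = []
    for box in boxes:
        box = list(box)
        changed = True
        while changed:
            changed = False
            for m in merged:
                if overlaps(box, m):
                    merged.remove(m)  # absorb the overlapping box
                    box = [min(box[0], m[0]), min(box[1], m[1]),
                           max(box[2], m[2]), max(box[3], m[3])]
                    changed = True
                    break
        merged.append(tuple(box))
    return merged

merged = merge_boxes([(0, 0, 10, 10), (5, 5, 20, 20), (50, 50, 60, 60)])
```

Here the first two boxes overlap and collapse into one area, while the third stays separate, matching the "overlapping areas merged into one area" behaviour.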
  • the terminal device may respond to the user's multiple sub-inputs on the screen capture interface by displaying a selection box in the area corresponding to each sub-input, which makes it convenient for the user to process the target screenshot picture on the screen capture interface. Because the user can process at least two areas in the target screenshot picture, this not only simplifies the process of processing the screenshot picture but also improves the user's flexibility in using the terminal device to process it.
  • the above S203 may be specifically implemented by the following S203a.
  • S203a: in response to the second input, the terminal device displays M selection boxes in the area corresponding to the second input in the first display state.
  • the terminal device may control both the inside and the outside of the selection frame to be displayed in the first display state.
  • in this case, since the display state inside the selection box is the same as the display state outside it, the user can determine the position of the selection box from its boundary line.
  • the terminal device may control the selection box to display in a first display state and the selection box to display in a second display state, wherein the first display state is different from the second display state.
  • the display state (that is, the first display state) in the M selection frames and the display state (that is, the second display state) of other regions except the M selection frames in the target screenshot may be different.
  • the first display state and the second display state may be represented by different color characteristics.
  • the first display state and the second display state may be distinguished and represented by at least one of color characteristics such as chroma, brightness, and saturation.
  • taking the case where the first display state and the second display state are represented by brightness characteristics as an example (that is, the first display state indicates a first brightness and the second display state indicates a second brightness), the first brightness may optionally be greater than the second brightness, or the first brightness may be smaller than the second brightness.
  • for example, the inside of the selection box may be displayed at a default brightness while the outside of the selection box is displayed at a brightness lower than the default brightness (for example, grayed out); or the outside of the selection box may be displayed at the default brightness while the inside of the selection box is displayed at a brightness higher than the default brightness (for example, highlighted).
  • alternatively, the inside of the selection box may be displayed at the default brightness while the outside of the selection box is displayed at a brightness higher than the default brightness (for example, highlighted); or the outside of the selection box may be displayed at the default brightness while the inside of the selection box is displayed at a brightness lower than the default brightness (for example, grayed out).
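The two display states can be sketched by representing the screenshot as a grid of brightness values and dimming every pixel outside the selection boxes. The grid representation, the default brightness of 100, and the 50% dim factor are assumptions made for illustration only.

```python
# Illustrative sketch: pixels inside any selection box keep the default
# brightness (first display state); pixels outside are dimmed (second
# display state). Boxes are (left, top, right, bottom) rectangles.
def point_in_box(x, y, box):
    l, t, r, b = box
    return l <= x < r and t <= y < b

def apply_display_states(width, height, boxes, default=100, dim_factor=0.5):
    """Return a brightness map: default inside boxes, dimmed outside."""
    return [[default if any(point_in_box(x, y, b) for b in boxes)
             else int(default * dim_factor)
             for x in range(width)] for y in range(height)]

grid = apply_display_states(4, 4, [(1, 1, 3, 3)])
```

Swapping the two brightness values would give the opposite case (box highlighted, outside at default), as described above.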
  • for example, the terminal device may respond to the user's rightward swipe input on the target screenshot picture by displaying, in the area corresponding to the rightward swipe input, a deletion-type selection box at low brightness (for example, grayed out).
  • that is, the selection box is a deletion-type selection box displayed after the user swipes right on the target screenshot picture, while the rest of the target screenshot picture is displayed at the default brightness.
  • similarly, the terminal device may respond to the user's leftward swipe input on the target screenshot picture by displaying, in the area corresponding to the leftward swipe input, a retention-type selection box at high brightness (for example, highlighted).
  • that is, the selection box is a retention-type selection box displayed after the user swipes left on the target screenshot picture, while the rest of the target screenshot picture is displayed at the default brightness, or the area outside the retention-type selection box is displayed at a brightness lower than the default brightness (for example, grayed out).
  • in the following, the case where the deletion-type selection box is grayed out when the user swipes right on the screen capture interface, and the retention-type selection box is highlighted when the user swipes left on the screen capture interface, is taken as an example for illustration.
  • the inside and the outside of the selection box can thus be controlled to be displayed in different display states, which makes it convenient for the user to identify the selected areas when multiple areas are selected on the target screenshot picture, thereby making it easier for the user to process the target screenshot picture and improving the convenience of human-computer interaction.
  • the terminal device may control the selection frame to move on the target screenshot picture in response to the user's dragging input on the selection frame; that is, the user may drag the selection frame in any direction on the target screenshot picture according to personal requirements.
  • the user can drag the selection box up or down, or drag the selection box left or right, or drag the selection box clockwise or counterclockwise.
  • the clockwise, counterclockwise, up, down, left, and right directions described above are all defined relative to the display screen of the terminal device, based on the user's operation on that display screen.
  • the initial value of the size of the selection box may be set by the system by default, or may be customized by the user according to needs, and may be specifically determined according to actual use requirements, which is not limited in the embodiment of the present invention.
  • taking the selection box being a rectangular selection box as an example, the initial values of the length and width of the rectangular selection box may be length 1 and width 1, respectively; or, taking the selection box being a circular selection box as an example, the initial value of the radius of the circular selection box may be radius 1.
  • for example, the terminal device may respond to the user's rightward sliding input on the target screenshot picture (that is, the second input) by displaying a rectangular deletion-type selection box of a default size in the area corresponding to the second input, as shown by the gray rectangular area in FIG. 8.
  • the initial value of the size of the selection box can be adjusted (e.g., reduced) adaptively.
  • the terminal device may control the selection box to be reduced or enlarged in response to the user's input on the selection box (for example, an input on a boundary line of the selection box).
  • taking the selection box still being a rectangular selection box as an example, the user can drag any one of the four boundary lines of the rectangular selection box to trigger the terminal device to move that boundary line, thereby reducing or enlarging the rectangular selection box.
  • the user can drag the upper border line of the rectangular selection box upward to trigger the terminal device to move the upper border line of the rectangular selection box upward, and then the rectangular selection box can be enlarged.
  • the user can also drag, simultaneously or sequentially, at least two of the four boundary lines of the rectangular selection box in different directions to trigger the terminal device to move those boundary lines, thereby reducing or enlarging the rectangular selection box.
  • for example, the user can drag the upper boundary line of the rectangular selection box upward and the lower boundary line downward to trigger the terminal device to move the upper boundary line upward and the lower boundary line downward, thereby enlarging the rectangular selection box.
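The boundary-line dragging above can be sketched as follows; the `drag_edge` helper and the screen-coordinate convention (positive delta = rightward/downward) are assumptions for illustration, not part of the embodiment.

```python
# Illustrative sketch: a rectangular marquee is (left, top, right, bottom);
# dragging one boundary line moves only that coordinate.
def drag_edge(box, edge, delta):
    l, t, r, b = box
    if edge == 'left':
        l += delta
    elif edge == 'top':
        t += delta          # negative delta moves the upper border upward
    elif edge == 'right':
        r += delta
    elif edge == 'bottom':
        b += delta
    return (l, t, r, b)

def area(box):
    l, t, r, b = box
    return max(0, r - l) * max(0, b - t)

box = (10, 10, 50, 50)
enlarged = drag_edge(box, 'top', -10)                       # upper border up
shrunk = drag_edge(drag_edge(box, 'top', 5), 'bottom', -5)  # both inward
```

Dragging the top edge up while the bottom edge goes down enlarges the marquee; dragging both inward shrinks it, as in the text above.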
  • the above-mentioned second input and the determination of the N target areas in the target screenshot picture may be implemented in different ways.
  • the following takes three possible implementation manners (that is, the first, second, and third possible implementation manners described below) as examples to exemplarily describe the picture processing method provided by the embodiment of the present invention, in combination with different forms of the second input and different types of selection boxes.
  • the above-mentioned M selection boxes may be reserved class selection boxes.
  • the N target regions may be M first regions corresponding to the M selection frames in the target screenshot.
  • the user can swipe left on the screen capture interface to select the M first regions in the target screenshot picture (that is, the second input), and the terminal device can respond to the second input by displaying M retention-type selection boxes in the corresponding regions (that is, the M first regions).
  • the terminal device can determine the M first regions (containing the content that the user needs to keep) selected by the user as the N target regions, and can generate, based on the content of the M first regions, a target picture including the content that the user needs to retain.
  • for example, the user can swipe left on the target screenshot picture, and the terminal device can respond to the user's leftward swipe input by displaying two retention-type selection boxes in two areas (as shown by the highlighted rectangular areas in FIG. 8, which indicate that the content of those areas needs to be retained).
  • the two reserved selection boxes correspond to two first regions.
  • the terminal device may determine the two first areas in the target screenshot picture as two target areas, and generate a target picture according to the content of the two target areas.
  • the above-mentioned M selection frames may be deletion-type selection frames.
  • the N target areas may be K second areas in the above target screenshot picture, where the K second areas are the areas other than the M first areas corresponding to the M selection frames, and K may be an integer greater than or equal to 2.
  • the user can swipe right on the screen capture interface to select the M first regions in the target screenshot picture (that is, the second input), and the terminal device can respond to the second input by displaying M deletion-type selection boxes in the corresponding areas (that is, the M first areas); the areas in the target screenshot picture other than the M first areas corresponding to the M selection frames are the K second areas.
  • the terminal device may determine the K second regions (containing the content that the user needs to keep), which are the regions other than the M first regions (containing the content that the user does not need to keep) selected by the user, as the N target regions, and may generate, based on the content of the K second regions, a target picture including the content that the user needs to keep.
  • the following takes the case where the M selection boxes are all deletion-type selection boxes (for example, the user swipes right on the target screenshot picture) as an example.
  • for example, the user can swipe right on the target screenshot picture, and the terminal device can respond to the user's rightward swipe input by displaying three deletion-type selection boxes in three areas (as shown by the gray rectangular areas in (a) of FIG. 8, which indicate that the content of those areas can be trimmed off).
  • the area other than the above-mentioned three deletion-type selection boxes includes three areas (that is, three second areas).
  • the terminal device may determine the above three second areas in the target screenshot picture as three target areas, and generate a target picture according to the content of the three target areas.
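Treating the deletion-type boxes as full-width horizontal bands (an assumed simplification; real selection boxes need not span the full width), the second areas can be computed as the complement of the deleted row ranges:

```python
# Illustrative sketch: `deleted` is a list of (top, bottom) row ranges
# covered by deletion-type selection boxes; the function returns the kept
# bands (the "second areas") that remain between and around them.
def remaining_bands(height, deleted):
    kept, cursor = [], 0
    for top, bottom in sorted(deleted):
        if top > cursor:
            kept.append((cursor, top))  # gap before this deleted band
        cursor = max(cursor, bottom)
    if cursor < height:
        kept.append((cursor, height))   # tail after the last deleted band
    return kept

# three deletion-type boxes leaving three kept areas, as in the example above
bands = remaining_bands(100, [(0, 10), (30, 40), (60, 70)])
```

The three kept bands would then be determined as the target areas from which the target picture is generated.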
  • the foregoing second input may include a first sub-input and a second sub-input.
  • the above M selection boxes may include K reserved class selection boxes corresponding to the first sub-input and M-K deletion class selection boxes corresponding to the second sub-input.
  • the deletion-type selection box can be nested in the retention-type selection box, and the retention-type selection box can be nested in the deletion-type selection box.
  • the above N target areas may be, within the areas corresponding to the K retention-type selection boxes in the target screenshot picture, the areas other than those corresponding to the M-K deletion-type selection boxes. Further, the terminal device may generate a target picture according to the content of the N target areas.
  • the terminal device may display K reserved selection boxes in response to the user's leftward sliding input on the target screenshot picture.
  • the terminal device may also display M-K deletion class selection boxes in response to the user's rightward sliding input in the K retention class selection boxes. In this way, the terminal device can display the deletion class selection box nested within the reserved class selection box.
  • alternatively, the above N target regions may be a first target region and a second target region in the target screenshot picture, where the first target region may be the area other than the areas corresponding to the M-K deletion-type selection boxes, the second target region may be the areas corresponding to the K retention-type selection boxes within the areas corresponding to the M-K deletion-type selection boxes, and M may be an integer greater than or equal to 2.
  • the terminal device may generate a target picture according to the contents of the N target areas.
  • the terminal device may display M-K deletion category selection boxes in response to a user's rightward swipe input on the target screenshot picture.
  • the terminal device may also display K retention class selection boxes in response to a user's leftward sliding input in the M-K deletion class selection boxes. In this way, the terminal device can display the reserved class selection box nested within the deleted class selection box.
  • in this way, the terminal device can determine the areas of the target screenshot picture that contain the content the user needs to keep as the target areas, and can generate a target picture including that content according to the content of the target areas, thereby simplifying the process of processing the screenshot picture and further enhancing the user's flexibility in using the terminal device to handle screenshots.
  • the step of the terminal device generating a target picture according to the contents of the N target areas may be specifically implemented by the following S201c and S201d.
  • S201c: the terminal device generates N sub-pictures according to the content of the N target areas.
  • S201d: the terminal device stitches the N sub-pictures to obtain a target picture.
  • the terminal device may generate N sub-pictures correspondingly according to the content of the determined N target regions, and sequentially stitch the N sub-pictures (for example, using a stitching technique) to obtain the target picture.
  • the terminal device may display three deletion-type selection boxes (such as the gray rectangular area in (a) of FIG. 9) in response to the user's rightward sliding input.
  • the area other than the above-mentioned three deletion-type selection boxes includes three areas (that is, three second areas).
  • the terminal device may determine the above three second areas in the target screenshot picture as three target areas, generate three sub-pictures based on the content of the three target areas, and stitch the three sub-pictures to obtain a target picture as shown in (b) of FIG. 9.
  • the terminal device may stitch multiple sub-pictures generated according to the content of multiple target areas to obtain the target picture, thereby further improving the flexibility of the user in processing the screenshot picture using the terminal device.
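A minimal sketch of S201c and S201d, representing a picture as a list of pixel rows: cropping each target area yields a sub-picture, and stitching concatenates the sub-pictures vertically. The helpers are illustrative assumptions, not the stitching technique referred to in the embodiment.

```python
# Illustrative sketch: a picture is a list of pixel rows; cropping a target
# area keeps a band of rows, and stitching concatenates the bands in order.
def crop_band(picture, top, bottom):
    return picture[top:bottom]

def stitch(sub_pictures):
    target = []
    for sub in sub_pictures:
        target.extend(sub)  # append each sub-picture's rows sequentially
    return target

screenshot = [[y] * 4 for y in range(10)]   # a 10-row, 4-column "image"
target_areas = [(0, 2), (5, 7), (9, 10)]    # three kept bands (S201c input)
subs = [crop_band(screenshot, t, b) for t, b in target_areas]   # S201c
target = stitch(subs)                                           # S201d
```

The resulting `target` contains only the rows of the target areas, joined into one picture.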
  • an embodiment of the present invention provides a terminal device 300.
  • the terminal device 300 may include a receiving module 301 and a processing module 302.
  • the receiving module 301 is configured to receive a first input of a user on the screen capture interface when the screen capture interface of the terminal device 300 displays a target screen shot image;
  • the processing module 302 is configured to respond to the first input received by the receiving module 301, Determine N target regions in the target screenshot, and generate a target image according to the content of the N target regions.
  • N may be an integer greater than or equal to 2.
  • the terminal device 300 provided by the embodiment of the present invention may further include a display module 303.
  • the receiving module 301 is further configured to receive a second input of the user on the screen capture interface before receiving the first input of the user;
  • the display module 303 is configured to display, in response to the second input received by the receiving module 301, M selection boxes in the area corresponding to the second input, where each selection box can correspond to a first area in the target screenshot picture, and each selection box can be used to select a first area in the target screenshot picture.
  • M can be a positive integer.
  • the first input may be used to trigger the terminal device 300 to determine the N target areas through the M selection boxes.
  • all of the M selection boxes described above may be reserved selection boxes.
  • the N target areas may be M first areas corresponding to the M selection frames in the target screenshot.
  • the above-mentioned M selection boxes may all be delete-type selection boxes.
  • the N target regions may be K second regions in the target screenshot, and the K second regions are regions other than the M first regions corresponding to the M selection frames.
  • the second input may include a first sub-input and a second sub-input.
  • the M selection boxes may include K reserved selection boxes corresponding to the first sub-input and M-K deleted selection boxes corresponding to the second sub-input.
  • the above N target regions may be regions other than the regions corresponding to the M-K deletion class selection boxes in the region corresponding to the K retention class selection boxes in the target screenshot.
  • alternatively, the N target areas may be a first target area and a second target area in the target screenshot picture, where the first target area may be the area other than the areas corresponding to the M-K deletion-type selection boxes, the second target area may be the areas corresponding to the K retention-type selection boxes within the areas corresponding to the M-K deletion-type selection boxes, and M may be an integer greater than or equal to 2.
  • the processing module 302 is specifically configured to generate N sub-pictures according to the content of the N target areas, and stitch the N sub-pictures to obtain the target picture.
  • the display module 303 is specifically configured to display the M selection boxes in the first display state in the area corresponding to the second input, where the display state inside the M selection boxes may be different from the display state of the areas of the target screenshot picture other than the M selection boxes.
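The receiving/processing/display module split can be sketched as a single class. All names and the row-band representation below are illustrative assumptions, not the claimed module structure.

```python
# Illustrative sketch: receive_second_input plays the role of the receiving
# module 301 plus the display module 303 (it records the selection boxes),
# and process_first_input plays the role of the processing module 302 (it
# crops the kept bands and stitches them into the target picture).
class TerminalDeviceSketch:
    def __init__(self):
        self.kept_bands = []  # retention-type boxes as (top, bottom) row ranges

    def receive_second_input(self, bands):
        for band in bands:          # one sub-input per selection box
            self.kept_bands.append(band)

    def process_first_input(self, screenshot):
        subs = [screenshot[t:b] for t, b in self.kept_bands]
        return [row for sub in subs for row in sub]  # stitched target picture

device = TerminalDeviceSketch()
device.receive_second_input([(0, 2), (4, 6)])            # second input
picture = device.process_first_input([[y] for y in range(8)])  # first input
```

The first input then triggers the processing step, which determines the target regions from the recorded selection boxes and generates the target picture.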
  • the above target screenshot picture may be a picture displayed during a long screenshot process, and the picture may be used to synthesize a long screenshot picture.
  • the target screenshot picture may be a long screenshot picture.
  • the terminal device provided by the embodiment of the present invention can implement the processes implemented by the terminal device in the foregoing method embodiments. To avoid repetition, details are not described herein again.
  • the terminal device provided by the embodiment of the present invention can receive a user's first input on the screen capture interface when the target screenshot picture is displayed on the screen capture interface of the terminal device, and, in response to the first input, can determine N target regions in the target screenshot picture and generate a target picture according to the content of the N target regions (N may be an integer greater than or equal to 2).
  • since the target areas (namely, the N target areas described above) are determined from the content that meets the user's needs, the content in the finally generated target picture is all content that meets the user's needs.
  • the user only needs to perform an input on the picture displayed during the long screen capture process to trigger the terminal device to process that picture to meet user requirements.
  • the terminal device can synthesize the processed one or more pictures to obtain a long screenshot picture that meets the user's needs, without processing the screenshot picture by other picture processing software, thereby simplifying the process of processing the screenshot picture.
  • FIG. 12 is a schematic diagram of a hardware structure of a terminal device implementing various embodiments of the present invention.
  • the terminal device 800 includes, but is not limited to, a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811.
  • those skilled in the art can understand that the terminal device may include more or fewer components than shown in the figure, or combine some components, or have a different component layout.
  • the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm
  • the user input unit 807 is configured to receive a first input of the user on the screen capture interface when the target screenshot picture is displayed on the screen capture interface of the terminal device; and the processor 810 is configured to, in response to the first input received by the user input unit 807, determine N target regions in the target screenshot picture and generate a target picture according to the content of the N target regions.
  • N may be an integer greater than or equal to 2.
  • An embodiment of the present invention provides a terminal device.
  • the terminal device may receive a first input from a user on the screen capture interface when the screen capture interface of the terminal device displays a target screenshot picture, and the terminal device may, in response to the first input, determine N target regions in the target screenshot picture and generate a target picture according to the content of the N target regions (N may be an integer greater than or equal to 2).
  • since the target areas (namely, the N target areas described above) are determined from the content that meets the user's needs, the content in the finally generated target picture is all content that meets the user's needs.
  • the user only needs to perform an input on the picture displayed during the screen capture process to trigger the terminal device to process that picture to meet user needs, and the terminal device can synthesize the processed pictures to obtain screenshots that meet user needs, without having to process the screenshots through other image processing software, thereby simplifying the process of processing screenshots.
  • the radio frequency unit 801 may be used to receive and send signals during the process of receiving and sending information or during a call; specifically, downlink data from the base station is received and then delivered to the processor 810 for processing, and uplink data is sent to the base station.
  • the radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 801 can also communicate with a network and other devices through a wireless communication system.
  • the terminal device provides users with wireless broadband Internet access through the network module 802, such as helping users to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into audio signals and output them as sound. Moreover, the audio output unit 803 may also provide audio output (for example, call signal reception sound, message reception sound, etc.) related to a specific function performed by the terminal device 800.
  • the audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 804 is used to receive audio or video signals.
  • the input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042.
  • the graphics processor 8041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frames may be displayed on a display unit 806.
  • the image frames processed by the graphics processor 8041 may be stored in the memory 809 (or other storage medium) or transmitted via the radio frequency unit 801 or the network module 802.
  • the microphone 8042 can receive sound, and can process such sound into audio data.
  • the processed audio data can be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 801 in the case of a telephone call mode.
  • the terminal device 800 further includes at least one sensor 805, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 8061 according to the brightness of the ambient light.
  • the proximity sensor can turn off the display panel 8061 and/or the backlight when the terminal device 800 is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), and can detect the magnitude and direction of gravity when stationary, which can be used to identify the attitude of the terminal device (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and vibration-recognition related functions (such as a pedometer or tapping); the sensor 805 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not repeated here.
  • the display unit 806 is configured to display information input by the user or information provided to the user.
  • the display unit 806 may include a display panel 8061.
  • the display panel 8061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the user input unit 807 may be used to receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the terminal device.
  • the user input unit 807 includes a touch panel 8071 and other input devices 8072.
  • the touch panel 8071, also known as a touch screen, can collect the user's touch operations on or near it (such as the user operating on or near the touch panel 8071 with a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 8071 may include a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position and the signal caused by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends it to the processor 810, and receives and executes commands sent by the processor 810.
  • the touch panel 8071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 807 may further include other input devices 8072.
  • other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, and details are not described herein again.
  • the touch panel 8071 may be overlaid on the display panel 8061.
  • after the touch panel 8071 detects a touch operation on or near it, the touch operation is transmitted to the processor 810 to determine the type of the touch event, and then the processor 810 provides a corresponding visual output on the display panel 8061 according to the type of the touch event.
  • although the touch panel 8071 and the display panel 8061 are described here as two independent components implementing the input and output functions of the terminal device, in some embodiments the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the terminal device; this is not specifically limited here.
  • the interface unit 808 is an interface for connecting an external device with the terminal device 800.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • the interface unit 808 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal device 800, or may be used to transfer data between the terminal device 800 and an external device.
  • the memory 809 may be used to store software programs and various data.
  • the memory 809 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, an application required by at least one function (such as a sound playback function, an image playback function, etc.), and the like; the storage data area may store data (such as audio data, a phone book, etc.) created according to the use of the mobile phone.
  • the memory 809 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the processor 810 is the control center of the terminal device and uses various interfaces and lines to connect the various parts of the entire terminal device; by running or executing the software programs and/or modules stored in the memory 809 and calling the data stored in the memory 809, it performs the various functions of the terminal device and processes data, so as to monitor the terminal device as a whole.
  • the processor 810 may include one or more processing units; optionally, the processor 810 may integrate an application processor and a modem processor, where the application processor mainly processes the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 810.
  • the terminal device 800 may further include a power source 811 (such as a battery) for supplying power to the various components; optionally, the power source 811 may be logically connected to the processor 810 through a power management system, thereby implementing functions such as charging management, discharging management, and power consumption management through the power management system.
  • the terminal device 800 includes some functional modules that are not shown, and details are not described herein again.
  • an embodiment of the present invention further provides a terminal device, which includes a processor 810 and a memory 809 as shown in FIG. 12, and a computer program stored in the memory 809 and executable on the processor 810. When the computer program is executed by the processor 810, each process of the foregoing picture processing method embodiment is implemented, and the same technical effects can be achieved; to avoid repetition, details are not described herein again.
  • An embodiment of the present invention further provides a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, each process of the foregoing picture processing method embodiments is implemented, and the same technical effects can be achieved; to avoid repetition, details are not described herein again.
  • the computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • the methods in the above embodiments can be implemented by means of software plus a necessary universal hardware platform, and of course also by hardware alone, but in many cases the former is the better implementation.
  • based on such an understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product; the software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
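The touch pipeline described above (touch detection device → touch controller → contact coordinates → processor → event type → visual output) can be sketched as a minimal model. All class and function names, and the tap/long-press threshold, are illustrative assumptions for the sake of the sketch, not anything specified in the patent text:

```python
from dataclasses import dataclass


@dataclass
class RawTouchSignal:
    """Raw signal from the touch detection device (illustrative model)."""
    x_ratio: float    # horizontal position as a fraction of panel width
    y_ratio: float    # vertical position as a fraction of panel height
    duration_ms: int  # how long the contact lasted


class TouchController:
    """Plays the role of the touch controller: converts the raw touch
    information into contact coordinates before it reaches the processor."""

    def __init__(self, width_px: int, height_px: int):
        self.width_px = width_px
        self.height_px = height_px

    def to_coordinates(self, signal: RawTouchSignal) -> tuple[int, int]:
        # Map the fractional position onto the panel's pixel grid.
        return (round(signal.x_ratio * self.width_px),
                round(signal.y_ratio * self.height_px))


def classify_touch_event(signal: RawTouchSignal, long_press_ms: int = 500) -> str:
    """Plays the role of the processor determining the type of the touch
    event; a long contact is treated as a long press, a short one as a tap
    (the 500 ms threshold is an assumption)."""
    return "long_press" if signal.duration_ms >= long_press_ms else "tap"


controller = TouchController(width_px=1080, height_px=2340)
sig = RawTouchSignal(x_ratio=0.5, y_ratio=0.25, duration_ms=650)
coords = controller.to_coordinates(sig)   # contact coordinates for the processor
event = classify_touch_event(sig)         # event type drives the visual output
```

In a real terminal the controller and processor sides are hardware plus driver code; the point of the sketch is only the division of labor the description assigns to each component.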

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a picture processing method and a terminal device, relating to the field of communications technology, and can solve the problem that processing a screenshot picture in an existing terminal device is tedious and complex. The method comprises: when a screenshot interface of a terminal device displays a target screenshot picture, receiving a first input from a user on the screenshot interface; and in response to the first input, determining N target regions in the target screenshot picture and generating a target picture according to the contents of the N target regions, N being an integer greater than or equal to 2. The method can be applied in scenarios in which a screenshot picture is processed.
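The flow summarized in the abstract — a first input selects N ≥ 2 target regions in the target screenshot, and a target picture is generated from their contents — can be sketched as follows. The screenshot is modeled as a plain 2D pixel array, and stacking equal-width region crops vertically is an illustrative choice, since the abstract does not specify how the region contents are combined:

```python
Region = tuple[int, int, int, int]  # (top, left, height, width); an assumed layout


def crop(image: list[list[int]], region: Region) -> list[list[int]]:
    """Extract the content of one target region from the screenshot."""
    top, left, h, w = region
    return [row[left:left + w] for row in image[top:top + h]]


def generate_target_picture(screenshot: list[list[int]],
                            regions: list[Region]) -> list[list[int]]:
    """Generate a target picture from the contents of N (>= 2) target regions
    by stacking the crops vertically (an illustrative composition strategy)."""
    if len(regions) < 2:
        raise ValueError("the method requires N >= 2 target regions")
    if len({r[3] for r in regions}) != 1:
        raise ValueError("this sketch stacks equal-width regions only")
    out: list[list[int]] = []
    for r in regions:
        out.extend(crop(screenshot, r))
    return out


# A 6x6 "screenshot" whose pixel at (row, col) is 10*row + col.
screenshot = [[10 * r + c for c in range(6)] for r in range(6)]
target = generate_target_picture(screenshot, [(0, 0, 2, 3), (4, 0, 1, 3)])
```

A production implementation would operate on an actual bitmap via the platform's image APIs, and could equally merge the regions side by side or into a collage.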
PCT/CN2019/098264 2018-09-27 2019-07-30 Picture processing method and terminal device WO2020063091A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811133596.7 2018-09-27
CN201811133596.7A CN109460177A (zh) Picture processing method and terminal device

Publications (1)

Publication Number Publication Date
WO2020063091A1 true WO2020063091A1 (fr) 2020-04-02

Family

ID=65607078

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/098264 WO2020063091A1 (fr) Picture processing method and terminal device

Country Status (2)

Country Link
CN (1) CN109460177A (fr)
WO (1) WO2020063091A1 (fr)


Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460177A (zh) 2018-09-27 2019-03-12 维沃移动通信有限公司 Picture processing method and terminal device
CN110232174A (zh) 2019-04-22 2019-09-13 维沃移动通信有限公司 Content selection method and terminal device
CN110209324B (zh) 2019-04-30 2020-11-10 维沃移动通信有限公司 Display method and terminal device
CN110209456A (zh) 2019-05-31 2019-09-06 努比亚技术有限公司 Method for taking a long screenshot of a screen interface, mobile terminal, and computer-readable storage medium
CN110502293B (zh) 2019-07-10 2022-02-01 维沃移动通信有限公司 Screenshot method and terminal device
CN110750200A (zh) 2019-09-30 2020-02-04 维沃移动通信有限公司 Screenshot picture processing method and terminal device
CN110908750B (zh) 2019-10-28 2021-10-26 维沃移动通信有限公司 Screenshot method and electronic device
CN111124231B (zh) 2019-12-26 2021-02-12 维沃移动通信有限公司 Picture generation method and electronic device
CN111383175A (zh) 2020-03-02 2020-07-07 维沃移动通信有限公司 Picture acquisition method and electronic device
CN111638844A (zh) 2020-05-22 2020-09-08 维沃移动通信有限公司 Screenshot method and apparatus, and electronic device
CN112764624B (zh) 2021-01-26 2022-09-09 维沃移动通信有限公司 Off-screen display method and apparatus
CN115048009A (zh) 2021-02-26 2022-09-13 京东方科技集团股份有限公司 Method and apparatus for capturing a dialog interface, computer device, and storage medium
CN113296661B (zh) 2021-03-18 2023-10-27 维沃移动通信有限公司 Image processing method and apparatus, electronic device, and readable storage medium
CN113093960B (zh) 2021-04-16 2022-08-02 南京维沃软件技术有限公司 Image editing method, editing apparatus, electronic device, and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663791A (zh) * 2012-03-20 2012-09-12 上海量明科技发展有限公司 Method for clipping a screenshot area, and client
US8601172B2 (en) * 2011-05-19 2013-12-03 International Business Machines Corporation Recognition techniques to enhance automation in a computing environment
CN104571865A (zh) * 2015-01-06 2015-04-29 深圳市金立通信设备有限公司 Terminal
CN106502533A (zh) * 2016-10-21 2017-03-15 上海与德信息技术有限公司 Screenshot method and apparatus
CN107678648A (zh) * 2017-09-27 2018-02-09 北京小米移动软件有限公司 Screenshot processing method and apparatus
CN108037871A (zh) * 2017-11-07 2018-05-15 维沃移动通信有限公司 Screenshot method and mobile terminal
CN109460177A (zh) * 2018-09-27 2019-03-12 维沃移动通信有限公司 Picture processing method and terminal device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8555185B2 (en) * 2009-06-08 2013-10-08 Apple Inc. User interface for multiple display regions
CN106502524A (zh) * 2016-09-27 2017-03-15 乐视控股(北京)有限公司 Screenshot method and apparatus
CN107678644B (zh) * 2017-09-18 2020-06-02 维沃移动通信有限公司 Image processing method and mobile terminal


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210117073A1 (en) * 2019-10-17 2021-04-22 Samsung Electronics Co., Ltd. Electronic device and method for operating screen capturing by electronic device
US11842039B2 (en) * 2019-10-17 2023-12-12 Samsung Electronics Co., Ltd. Electronic device and method for operating screen capturing by electronic device
EP4155888A4 (fr) * 2020-05-19 2023-11-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Screenshot method, terminal, and non-volatile computer-readable storage medium

Also Published As

Publication number Publication date
CN109460177A (zh) 2019-03-12

Similar Documents

Publication Publication Date Title
WO2020063091A1 (fr) Picture processing method and terminal device
WO2021104365A1 (fr) Object sharing method and electronic device
CN111596845B (zh) Display control method and apparatus, and electronic device
CN111142730B (zh) Split-screen display method and electronic device
WO2021083132A1 (fr) Icon moving method and electronic device
WO2021136136A1 (fr) Screenshot method and electronic device
WO2020151460A1 (fr) Object processing method and terminal device
WO2021083087A1 (fr) Screenshot method and terminal device
WO2021129536A1 (fr) Icon moving method and electronic device
WO2021036531A1 (fr) Screenshot method and terminal device
WO2021004327A1 (fr) Application permission setting method and terminal device
WO2020151525A1 (fr) Message sending method and terminal device
CN111078076A (zh) Application program switching method and electronic device
WO2021104163A1 (fr) Icon arrangement method and electronic device
WO2020173235A1 (fr) Task switching method and terminal device
WO2021012927A1 (fr) Icon display method and terminal device
CN108920226B (zh) Screen recording method and apparatus
WO2021057290A1 (fr) Information control method and electronic device
WO2020192297A1 (fr) Screen interface switching method and terminal device
CN111026299A (zh) Information sharing method and electronic device
WO2020181956A1 (fr) Application identifier display method and terminal apparatus
CN110908554B (zh) Long screenshot method and terminal device
CN110944236B (zh) Group creation method and electronic device
WO2020215969A1 (fr) Content input method and terminal device
WO2020215982A1 (fr) Desktop icon management method and terminal device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19866410

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19866410

Country of ref document: EP

Kind code of ref document: A1